I like the meeting format described in this Business Week article on Marissa Mayer, VP of Search Products & User Experience at Google.
New features are digitally projected onto the right side of a conference room wall, big as a movie screen. Everything Mayer and others say is transcribed and projected on the left. Underneath both looms a giant mega-timer. Everyone gets an average deadline of 10 minutes. Mayer and her team add and subtract to the feature as time runs down. It is iteration at lightning speed.
While the formal quick-pitch format is unnecessary in small groups, it makes sense in a large organization with some structure around time constraints: it lets new features be proposed frequently and directly up the chain of command. I assume engineers working on a project can pitch directly to Mayer. The critique format also allows a good deal of iteration while exploring ideas, so they can be worked on some more and revisited. The excerpt above describes the process.
What's most fascinating to me is the projection of the demo on the screen and the immediate capture of the discussion, which I assume goes directly into that internal project management system they talk about. That's excellent. Capturing these transactions of verbal communication, even by the brute-force method of manual transcription, is what knowledge management is about. The post-processing and information retrieval in their system is what glues it all together. That they're openly capturing everything in these meetings is what makes it effective KM work. It's not hard to imagine all the pieces fitting together into a product, or at least a process that could be sold as an idea for realistic KM at work:
- WebEx type presentation software
- Transcription (manual now, voice recognition later in the retrieval system?)
- Search mechanism with some simple hooks for metadata (parsing based on minimal formatting conventions, e.g. "[field name]:")
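To make the last piece concrete, here's a rough sketch of what parsing that kind of minimal "[field name]:" convention out of a meeting transcript might look like. This is purely hypothetical on my part — the field names and function are my own invention, not anything Google has described:

```python
import re

def extract_metadata(transcript):
    """Pull simple '[field]: value' tags out of free-form transcript text.

    Lines without the bracketed convention (ordinary discussion) are
    left alone; tagged lines are collected by field name so a search
    system could index them separately.
    """
    meta = {}
    for match in re.finditer(r'\[([^\]]+)\]:\s*(.+)', transcript):
        field = match.group(1).strip().lower()
        value = match.group(2).strip()
        meta.setdefault(field, []).append(value)
    return meta

# Hypothetical transcript snippet with two tagged lines:
notes = """[project]: search suggest
Mayer: cut the third option, it clutters the page.
[action]: revise the mock by Friday"""

print(extract_metadata(notes))
```

The point is how little convention is needed: a transcriber (or later, a speech recognizer) only has to mark a handful of lines for the retrieval side to get structured hooks into an otherwise unstructured conversation.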
If enterprise meetings all went this way, mining the kinds of tacit information that usually float around in conversations might begin to mean a bit more. Taken too far, it could also make a lot of information mean less, I suppose, but who cares as long as we have bigger hammers for the signal-to-noise problem in retrieval. So I'm wondering whether that type of process could be adopted as a model and rolled into a solution. I want to see more under the hood at Google.