Something I worry about with generative AI in business and commercial use: almost no one fully reads anything in those environments.
Now imagine when even the author hasn't read what was written... yikes. How does AI writing and reading impact this reality?
I used to write long memos, significant ones, maybe once a year. I'd send them to thousands of people. That scale alone invites each recipient to assume "someone else will read it." I hoped direct reports and close colleagues would read them. I could count on two or three people to definitely read them.
Bill would read. Steve would read—but only if we discussed it in person, because that's how he worked.
Just an old memo featured in Hardcore Software.
I knew this, so I always made a slide version. I'd use it in dozens of team meetings. But even then, for months after sending a memo, I'd be referring members of the team back to what was in it. Could I have done better? Of course. I did the best I could at the time. I figured once a year people could read 20-30 pages for their job.
People want context. They want the big ideas. But getting an organization—of any size—to actually read is almost impossible.
The only reliable thing people read? Org memos. And even then, if a memo described the changes only in words rather than with an org chart picture (as mine often did), people would skim or skip it and wait for (hopefully) a tree graph in the email.
And these were from the "big boss," sending out "big strategy." So if you think folks in big orgs are reading 40-page PRDs, budget plans, new product proposals, or deal docs deeply and regularly… you're probably kidding yourself. I know from friends there how the Amazon memo process has evolved. It too is breaking down, which is a bummer, because I'm a huge fan of it.
Now enter AI. What happens when it's doing the writing—and not even the author has deep knowledge of what was written?