(This is part 4 of a 5-part series of posts attempting to explain the AI landscape in a practical and simple way. See other posts here.)
I hate going to the dentist because everything my dentist does to me hurts.
I remember getting a cavity fixed in high school, and afterward, I ran into my friend’s crazy Uncle Trunk (actual name). I mentioned that I couldn’t eat solid food the rest of that day, and you’ll never guess his response: “That’s why I don’t go to the dentist. If you don’t go, they can’t find any problems.”
I did not take that as advice.
What does this have to do with Generative AI?
The more I talk with people, the more I sense that some are reluctant to have a more public conversation around AI out of fear. Having to let people go, or the thought of losing our own job, doesn’t feel good to think about. Compound that with security and privacy concerns, unclear regulations, legal implications, old tech stacks, lack of talent and our general fear of change, and it can feel overwhelming. So some folks are taking the Uncle Trunk approach: don’t acknowledge, don’t have to worry.
We’ve seen first-hand how executives are scrambling to adapt and devise AI road maps while wrestling with those same challenges, compounded by an ever-expanding AI tech landscape and mounting competitive pressure.
But if you approach AI thoughtfully–being inclusive and human-centered–you can prime your people to be adaptable while also reassuring them of their value. Just as you shouldn’t run from a bear, you shouldn’t run from AI: explicit education is the best way to alleviate people’s fears. The more clearly you articulate the role of AI in your company, and the more you involve your teams in the AI integration process, the more trust you will foster to help you navigate a rough road.
Without question, titles and org charts are going to change. People will lose jobs, and losing a job is scary (been there, done that). The International Monetary Fund recently released a report estimating that almost 40% of jobs globally will soon be impacted by AI (up to 60% in advanced economies).
So what should you be doing today?
A great starting point would be to have someone who understands potential applications of Generative AI look at your current job descriptions. I’d bet plenty of those bullet points describe work that can (or will soon be able to) be intelligently automated. Few current roles will disappear entirely within a year, but maybe instead of a team of 5 for a particular function, you can get by with 3. Think of Generative AI as an assistant that helps with some of the repetitive work–you’ll still need people with the earned skills to know what good looks like, but they’ll get plenty of help with the time-consuming and tedious parts of the job.
This won’t do anything to re-skill your current people whose roles will be disrupted, but it will at least keep you from continuing to bring new people into positions that will soon change. As for your current teams, I’d double down on building a constant test-and-learn culture, and continue to invest in widespread education so you can put them in a position to land on their feet. Keep in mind that “content marketer” wasn’t a thing in 2005, and now most companies have hordes of them in-house.
A massive need that I see quickly becoming commonplace will be for an individual (or team) to constantly reassess your AI toolkit based on your most common use cases. The big players like OpenAI (ChatGPT), Google (Gemini), Anthropic (Claude), Perplexity, and others are essentially leapfrogging one another’s capabilities with every new release. You can’t change your tech stack with every shift in the wind, but if you are using multiple tools with overlapping capabilities, there will be some competitive advantage in staying ahead of the curve on which to use at any point in time. This could be as simple as having a series of prompts that you provide to each tool quarterly, or after each new version is released, to reevaluate which tool supports your use cases the best. (Fun startup idea: build an aggregator of AI tools that automatically deploys the best one for any given use case–summarization, customer support messaging, transcription, translation, etc.)
(A closely related aside: there is a fun tool called Chatbot Arena that updates rankings of language models based on continuous user feedback: https://chat.lmsys.org/.)
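To make the quarterly bake-off concrete, here is a minimal sketch of the scoring step: run your fixed prompt set through each tool, have reviewers rate the outputs, then average the ratings and crown a winner per use case. The tool names and scores below are purely illustrative (in practice the numbers would come from your reviewers, not hard-coded data).

```python
from collections import defaultdict

# (use_case, tool) -> reviewer scores (1-5) for that tool's responses to the
# fixed prompts belonging to that use case. Illustrative data only.
scores = {
    ("summarization", "tool_a"): [4, 5, 4],
    ("summarization", "tool_b"): [3, 4, 4],
    ("translation",   "tool_a"): [3, 3, 4],
    ("translation",   "tool_b"): [5, 4, 5],
}

def best_tool_per_use_case(scores):
    """Average the reviewer scores and return {use_case: winning_tool}."""
    averages = defaultdict(dict)
    for (use_case, tool), ratings in scores.items():
        averages[use_case][tool] = sum(ratings) / len(ratings)
    return {uc: max(tools, key=tools.get) for uc, tools in averages.items()}

print(best_tool_per_use_case(scores))
# -> {'summarization': 'tool_a', 'translation': 'tool_b'}
```

The point isn’t the twenty lines of code; it’s that once the prompt set and rubric are fixed, re-running the comparison after each new model release becomes a routine chore rather than a debate.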
An example of someone doing it right is Publicis (assuming they follow through on their promises). The global ad/PR agency recently announced that it is investing $326 million in AI infrastructure over the next 3 years and that AI-enabled efficiency gains will fund the entire investment (not staff cuts). They are calling the program CoreAI, and it will be rolled out across 5 pillars: Insight, Media, Creative, Software and Operations. They are committing to Generative AI as a key business driver while also reassuring their people that AI will never replace great creative minds; it will push boundaries further and enable them to do more.
This is what your executive team should be thinking about, right now. And in case you’re wondering, I haven’t seen Uncle Trunk in years, but I hear he still has all of his teeth. Go figure.
In the next (and final) post of this series, I’ll get into my overall thoughts about Generative AI.