Laurence Van Elegem
Sep 15, 2023


The jobs of anyone working in content will be affected by generative AI. We will need to find ways to adapt and make it work. And for that, these are the two talents that will probably prove crucial in the coming months:

1. Asking questions. According to some people, “there are no stupid questions”. I’m not sure I ever agreed with that statement in the past, but in the future the intelligence of your questions will definitely matter. Except now we’ll call it prompting, because that sounds way cooler. Questions have always been underrated, in fact. Those who question are those who don’t accept the status quo and change what needs changing.

2. Curation or filtering: Human knowledge will remain important for now, because we’ll need to recognize what is right, what is wrong, what is new and interesting, and what is yesterday’s information. In other words: if we look at the Johari window model (sketched in code after this list), content curators will need to learn to:

o Discard the arenas or open areas (known to self and to others)

o Select the façades or hidden areas (known to self but not known to others)

o Minimize their own blind spots (not known to self but known to others)

o And perhaps even find ways to have generative AI uncover the unknown unknowns (not known to self and not known to others)
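To make that Johari-style triage a bit more tangible, here is a minimal, purely illustrative sketch of how a curation tool might encode it. The quadrant names and actions come from the list above; everything else (the dictionary, function and variable names) is hypothetical, not an existing tool.

```python
# Toy mapping of the Johari window quadrants to the curation actions listed above.
# Illustrative only: the names and structure are assumptions, not a real product.

JOHARI_CURATION = {
    "arena":      {"known_to_self": True,  "known_to_others": True,  "action": "discard"},
    "facade":     {"known_to_self": True,  "known_to_others": False, "action": "select"},
    "blind_spot": {"known_to_self": False, "known_to_others": True,  "action": "minimize"},
    "unknown":    {"known_to_self": False, "known_to_others": False, "action": "probe with generative AI"},
}

def curation_action(known_to_self: bool, known_to_others: bool) -> str:
    """Return the curation action for a piece of content, given who already knows it."""
    for quadrant in JOHARI_CURATION.values():
        if quadrant["known_to_self"] == known_to_self and quadrant["known_to_others"] == known_to_others:
            return quadrant["action"]
    raise ValueError("unreachable: all four combinations are covered above")

# Example: something the curator knows but the audience does not yet know -> worth selecting.
print(curation_action(known_to_self=True, known_to_others=False))  # select
```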

You might say that the curator role is just another fancy word for “middlemen”, about whom we love to say that they will all be cut out at some point. And maybe that will happen, but for now, those middlemen that are able to filter the scarce value out of the current content abundance are the ones that will remain standing. Like Christie’s, which filters value and quality from an abundance of existing art. Middlemen that ‘just’ offer trust as a currency (e.g. banks) will probably be cut out if, for instance, Web3 takes off, unless they find other ways to prove their value. But that’s a different story.

The future of content

I think we will have some very interesting times ahead in all content-creation-related industries, like streaming, company blogs or hard media. Certainly in business environments, inbound (pull) and outbound (push) marketing will be severely reshuffled.

Will we arrive at a point where searching, browsing and scrolling slow down and perhaps disappear, and where mostly asking for or ordering content remains? What we called pull or inbound was, and still is, actually very much pushy, and that might change with generative AI systems:

1. The amount of content created will rise exponentially once the larger part of the population learns to use generative AI systems. This growth is not new, of course, but these systems will accelerate it dramatically.

2. It will become very hard to differentiate your content when there is so much competition. If people were already suffering from content fatigue (and they were), this will only get worse.

3. So we may come to a point where we no longer read content unless we ‘order’ it ourselves from generative AI systems. Just like we no longer see banners on websites, our eyes and brains may block out blogs or articles that appear on social media. The only type of content that will survive is that of the extremes: extremely funny, appalling, sad, eye-opening, etc. Again, that is not new, but it will grow and accelerate. Chances are that this will increase the polarization that has already been happening.

4. This evolution towards more ephemeral types of content has in fact already been underway. We used to own books, music and movies (well, the carriers anyway, but once we purchased them, they were ours). Then we started streaming them, and access became the new ownership. Now access to fixed content might become increasingly irrelevant. It will all be about access to content-creating tools.

5. What few people seem to talk and think about is the role of social media in this story. The current model is this: we make content for our audiences and then push it via social media. But what if “made”, searchable content disappears in an “on demand” content model? Where people ask generative AI to write an article about the best washing machines of 2023? Or order a fiction series about a group of Gen Z’ers who travel to the nearest galaxy, only to find out that one of them is a zompire, a vampire infected by a zombie virus. What will be the role of social media here, if this “push” part disappears? Especially in business environments, as businesses are the players that are willing to pay for pushing. I think we’ll probably need to differentiate information from emotion. We’ll probably still want to share emotions with others: funny TikTok dances, pictures of our newborns, things that made us laugh… In short: these are the things that ‘speak’ to our System 1 thinking (the fast part). But sharing information and ready-made content — the stuff of System 2 thinking (the slow part) — might disappear, or at least drastically change. And with that, the business models of social media.

Daniel Kahneman’s Thinking, Fast and Slow model

Who will feed whom?

What perhaps intrigues me the most here is the “feeding” part of these systems. Generative AI is able to create texts, pictures and videos based on centuries of human-made content (well, not for all types, but at least for text, seeing that a lot of old texts have been digitized by now). It feeds on that data. And the more data it can “eat”, the smarter it becomes.

Now, if we arrive at a point where humans make very few types of content — except for the outrageous, the arts (I really believe and hope that man-made art will survive, just like arts and crafts survived the age of automation) and perhaps (but I’m not sure) hard media — who will feed the machines?

Or will they have learned at some point to:

o Survive on a lot less ‘brain’ food, seeing that they will have learned so much by then that they will have become very creative.

o Or feed themselves without human intervention:

o Through synthetic media: the pieces that they themselves created, in some sort of endless feedback loop. Which reeks of inbreeding (never a good thing, and there was already an example of that), but might be avoided if machines become exceedingly creative. A toy illustration of this feedback loop follows after this list.

o Through IoT data input: location data, audio and visual data, customer reviews, employee review data, raw product data (from company databases, invoices, tracing sensors…). In fact, if we arrive at a point where we have cameras almost everywhere and intelligent systems that are able to ‘read’ situations and report on them, this might replace a lot of journalists. Hence my remark about hard media perhaps no longer being able to survive. I’m not being snarky here, seeing that I believe the same will happen with company blogs at that point, which might still be a long way off (if only because of privacy concerns).
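To make the ‘inbreeding’ risk of that synthetic feedback loop a bit more concrete, here is a minimal toy simulation of my own (an illustrative sketch under simplifying assumptions, not taken from any specific study): a very simple “model” is re-fitted, generation after generation, on data it generated itself, and its output tends to drift away from the original human data (an effect sometimes called model collapse).

```python
import random
import statistics

# Toy sketch of a synthetic-media feedback loop (illustrative assumptions only):
# the "model" here is just a mean and a standard deviation, re-fitted each
# generation on a small, finite batch of data that the previous model produced.
# Because every generation only sees its own synthetic output, the fitted mean
# and spread drift over time instead of staying anchored to the original data;
# any single run fluctuates, but diversity is typically lost along the way.

random.seed(7)

mean, spread = 0.0, 1.0        # generation 0: fitted on "human-made" data
BATCH = 25                     # small, finite training set per generation

for generation in range(1, 16):
    # The current model generates the training data for the next one.
    synthetic = [random.gauss(mean, spread) for _ in range(BATCH)]
    # The next model is fitted purely on that synthetic data.
    mean = statistics.fmean(synthetic)
    spread = statistics.stdev(synthetic)
    print(f"generation {generation:2d}: mean = {mean:+.3f}, spread = {spread:.3f}")
```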

The Passive Economy

Generative AI will not only change the jobs and behaviour of content creators. It has the potential to deeply affect audiences as well:

1. If social media lose their advertising business models, audiences may have to pay for these channels. We’ve already seen this happening anyway with, for instance, the blue check models of Twitter, Instagram and Facebook.

2. If System 1 thinking — the fast, emotional part — comes to dominate social media (see above), this might increase their emotionality and their polarizing dynamic. Especially now that we see the rise of emotion(al) AI and affective computing — Amazon, Meta and many others have made investments here — which are able to read, recognize and use emotions.

3. The “search” model may not have been flawless — echo chambers and filter bubbles, anyone? — but it still put us in the driver’s seat. We still had to read different web pages, compare them and find patterns. This was still a pretty active model, even if we had little control over what types of content we were presented with. Now, the “order” or “on demand” model of generative AI (think “give me the 5 most popular restaurants of Tokyo” or “write a story about a girl and her pet robot”) is a lot more passive.

Now, that passivity is not new. It is a direction we have been moving in for some time now, if we think about the evolution towards zero interfaces. Amazon’s concept of ambient intelligence, where sensors capture all your information — where you are, how you feel emotionally, how healthy you are, who you’re with, what you crave, how you look, how much you need to pay for what… — without you consciously participating in sharing that information, is already a big step in the direction of this Passive Economy.

Some would call that hyper-convenience and the removal of all friction. And it is that, in many ways. But on the other hand, I also see humanity moving in a direction where we are increasingly less active in thinking, creating, searching and comparing in a critical manner, and I do not know if that is the best possible way to evolve.

So, at this point, I have no answers and a lot of questions:

o Will we be less incentivized to create if machines become creative?

o Will we become less critical if we no longer need to read a lot and compare sources?

o Will we become more passive creatures, and will that make us less or more happy?

o Will we become even more vulnerable to fake news, now that it is so much easier to make it convincingly and we are only presented with one truth (as an answer to our prompts) instead of many different links? Will we even check sources anymore?

o Are humans evolving fast enough for their tools?

But questions are good, right? They’re what helps us (cope with) change. And our questioning skills will be especially helpful in the era of generative AI anyway (prompting, people, prompting… the number one skill of the coming years (maybe)).

According to the history books, the hunter-gatherer or forager era ended with the introduction of agriculture around 10,000 to 12,000 BC. But we kept on hunting and gathering information. So perhaps the Generative Computing era is what really marks the end of the hunter-gatherer era. Is that a tad dramatic as a conclusion? Yes, it is, but I’m sticking with it because it might make you think. And that’s all I really want.

First published in March 2023 on the nexxworks blog
