Not too long ago, I posted on several social media platforms asking what questions YOU had about AI.
Reddit | Facebook | Twitter | LinkedIn
I’ve taken all of the 100+ responses, put them in nice neat little (5!) categories, and – wow – do I have a deluge for you.
By far, the most asked question was “what tools are available today?”
The easiest lift is automated content analysis. For a hundred years, we’ve relied on a label on a film canister, a sharpie scrawl on the spine of a videotape case, or the name of a file sitting on your desktop.
Sure, this tells you the general contents of that media, but **it’s not specific.**
“What locations were they at?”
“What was said in each scene?”
“Were they wearing pants?”
This is where AI’s got your back. With content analysis, sometimes called automated metadata tagging, AI can analyze your media, recognize logos, objects, people, and their emotions, transcribe what was said, and generate time-based metadata tags. It’s a little bit like having Sherlock Holmes on your team, solving the mystery of the missing footage.
Because there is always a better take somewhere….right?
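If you want to peek under the hood, here’s a minimal sketch of the “what was said in each scene” half of this, assuming the open-source Whisper speech-to-text package; the clip name is a placeholder, and commercial taggers layer object, face, and logo recognition on top of the same idea.

```python
# A minimal sketch of time-based metadata tagging, assuming the open-source
# openai-whisper package (pip install openai-whisper) and ffmpeg are installed.
import whisper

model = whisper.load_model("base")                  # small, CPU-friendly model
result = model.transcribe("interview_cam_a.mov")    # hypothetical clip name

# Each segment carries start/end times, so the transcript doubles as
# searchable, time-based metadata.
for seg in result["segments"]:
    print(f"{seg['start']:7.2f}s - {seg['end']:7.2f}s  {seg['text'].strip()}")
```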
AI-assisted color grading is also starting to gain traction. With the help of AI models, colorists can quickly analyze the color palette of a video and create a consistent base grade.
This can save a ton of time when you’re dealing with various bits of media in different formats and color profiles. It does the balance pass so you can start creating…quicker.
AI can also suggest starter options for more creative color grades.
Like…adding a touch of “teal and orange”.
Colorists can also use existing content as an example to show the AI the look they’re aiming for. Check out Colourlab.ai.
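To demystify what a “balance pass” even is, here’s a tiny sketch of the classic gray-world white balance that these tools automate (and then dramatically improve on with learned models); it assumes OpenCV and NumPy, and the still frame name is a placeholder.

```python
# A rough sketch of an automatic balance pass using the gray-world assumption:
# neutralize color casts by equalizing the per-channel means.
import cv2
import numpy as np

frame = cv2.imread("frame.png").astype(np.float32)   # placeholder still

# Scale each channel so its mean matches the overall mean.
means = frame.reshape(-1, 3).mean(axis=0)
gains = means.mean() / means
balanced = np.clip(frame * gains, 0, 255).astype(np.uint8)

cv2.imwrite("frame_balanced.png", balanced)
```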
AI tools are revolutionizing audio post-production, too. Noise reduction algorithms powered by AI can effectively remove unwanted background noise, enhance dialogue clarity, and improve overall audio quality. While this tech has been around for a while, adding AI models to existing algorithms is making these tools even more powerful.
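For the curious, here’s a minimal sketch of the simpler spectral-gating style of noise reduction that these newer AI models build on; it assumes the open-source noisereduce and soundfile packages and a mono placeholder file.

```python
# A minimal noise reduction pass via spectral gating, assuming
# "dialogue.wav" is a mono placeholder clip.
import noisereduce as nr
import soundfile as sf

audio, rate = sf.read("dialogue.wav")
cleaned = nr.reduce_noise(y=audio, sr=rate)   # spectral-gating noise reduction
sf.write("dialogue_cleaned.wav", cleaned, rate)
```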
These models also help with Frankenbiting.
Text-to-speech tools that allow generated audio to sound like someone else – also called voice cloning – are a fantastic fit for Frankenbiting. For most of us commercially, they’re great right now for scratch narration or to replace a word or two, but not quite at the point where you can easily adjust intonation. When that becomes possible, editors may have a new skill to learn – crafting the performance of the voice-over in the timeline. Check out Elevenlabs.
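As a rough sketch, generating scratch narration from a service like ElevenLabs is essentially one web request; the voice ID, API key, and request fields below are placeholders and assumptions, so check their current docs before leaning on this.

```python
# A hedged sketch of pulling scratch narration from the ElevenLabs
# text-to-speech REST API. Voice ID, key, and fields are placeholders.
import requests

VOICE_ID = "YOUR_VOICE_ID"   # hypothetical cloned or stock voice
API_KEY = "YOUR_API_KEY"

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={"text": "Scratch narration for scene twelve goes here."},
)
resp.raise_for_status()

# The response body is the rendered audio.
with open("scratch_vo.mp3", "wb") as f:
    f.write(resp.content)
```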
Text-to-image and text-to-video products are rapidly being developed; just look at services like Runway, Midjourney, Dall-E, Kaiber, and a host of others. And while we can’t yet generate believable video b-roll on the fly (unless you like the current generative AI aesthetic), once that boundary is crossed, it will change the way editors work.
OK, instead of adding…can we subtract?
Rotoscoping tools are readily available in industry-standard applications like Photoshop. We can erase objects in near real-time and also use tools like Generative Fill for AI to “guess” what is missing right outside of the frame.
Large Language Models – or LLMs – are the key to AI understanding what you want when you ask it a question. LLMs are also imperative for our next skill: text-to-code. Now, why is that important to you?
Many motion graphics software packages have an underlying code base you can use to script the actions you want. So, until text-to-video becomes more mature, you can use AI to generate code and scripts that tell the motion graphics tool what you want to do. Check out KlutzGPT.
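To give you a feel for it, here’s the kind of script an LLM can spit out from a plain-English prompt like “slide a title in from the left over one second,” here targeting Blender’s Python API (bpy); the prompt, title text, and frame values are all hypothetical.

```python
# Example of LLM-generated motion graphics scripting; run inside Blender,
# which provides the bpy module.
import bpy

# Create a text object to act as the title.
bpy.ops.object.text_add(location=(-5.0, 0.0, 0.0))
title = bpy.context.active_object
title.data.body = "EPISODE ONE"

# Keyframe a one-second slide from off-screen left to center (24 fps assumed).
title.location = (-5.0, 0.0, 0.0)
title.keyframe_insert(data_path="location", frame=1)
title.location = (0.0, 0.0, 0.0)
title.keyframe_insert(data_path="location", frame=24)
```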
All of the aforementioned model types are just a sampling of the broad categories of specific AI models. In fact, HuggingFace, a repository for various AI models and datasets, has nearly 40 categories of models for you to download, train, and experiment with to assist you on your next project.
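Pulling one of those models into a project can be surprisingly little code. Here’s a minimal sketch using the transformers package (plus Pillow for image inputs); the task and image path are placeholders for whatever your footage needs.

```python
# A minimal sketch of using a pretrained Hugging Face model to tag a frame.
from transformers import pipeline

tagger = pipeline("image-classification")   # downloads a default model
tags = tagger("thumbnail_frame.jpg")        # hypothetical exported frame

for tag in tags:
    print(f"{tag['label']}: {tag['score']:.2f}")
```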
“It is not the strongest species that survive, nor the most intelligent, but the ones most adaptable to change,” once wrote Leon Megginson. And while he’s not wrong, I much prefer my attention-span-appropriate “Adapt or Die,” by Charles Darwin.
AI and its many variants are exploding. AI has already become the fastest-adopted business technology in history.
So, how can you adapt to this?
Well, the first step is to not panic.
DON’T PANIC.
If you’ve used any type of AI, you can totally see the current deficits. Whether it’s factual hallucinations or relatives with 20 fingers and 37 toes, AI is still evolving, which gives you time to learn and use AI tools as your sidekick, not your replacement.
As AI for the general public is still in its infancy, it’s critical for us to learn about the AI models we’re utilizing: what data was used to train them, and who curated that data? This level of openness not only empowers us to make better decisions when selecting model providers but also fosters a culture of responsibility. We’ll discuss this a bit more later.
As with any moment in the zeitgeist, it’s imperative that we understand the difference between what’s the real deal, and what’s part of the hype machine. We’re likely to see the “AI” label slapped on many tools. Now, you may not think this is a big deal, but it can be.
To start off, it’s a matter of investment and value. You want to ensure you’re getting the whizzbang AI capabilities you’ve paid for, and not just a repackaged widget marketed as AI.
Genuine AI tools can also perform complex tasks, adapt to new situations, and sometimes even learn from those experiences, leading to more efficient processes and smoother outcomes. Non-AI tools or overhyped plugins, however, may not deliver their promised results, leading to major disappointment and potential setbacks in your project or business.
Plus, understanding the capabilities of AI tools can lead to better usage. Knowing what an AI tool can and can’t do allows you to utilize it to its fullest potential.
On the other hand, blatantly mislabeling a non-AI tool as AI can lead to wasted time or underuse.
Lastly, it’s really about ethics and transparency. Misrepresenting a tool’s capabilities is deceptive marketing, plain and simple. This is already a major problem – and a rarely policed one at that – in the tech world. It erodes trust between providers and users and can lead to skepticism about the entire field of AI.
Now, speaking of ethics…
As we leverage AI to push the boundaries of creativity, we’re faced with new dilemmas, around privacy, security, bias, and yes, our core ethics.
These aren’t just questions for the tech wizards or the philosophical ponderers. They’re issues that each one of us, as contributors to the creative world, needs to grapple with.
Why?
Because the decisions we make today will shape the digital landscape of tomorrow.
Start with the creative tools themselves: voice cloning and face swapping do have legitimate applications in our industry, but they also raise ethical issues around privacy, consent, authenticity, and accountability.
For example, voice cloning and face swapping could be used to create fake news, deep fakes, or malicious impersonations that could harm the reputation or even the safety of the original speakers or actors. Plus, bad actors could undermine the trust and credibility of the creative content and its sources.
To solve this, we need explicit consent from the original person *before* their voice or face is used. No more unsolicited ‘borrowing’. These technologies aren’t a license to steal.
Next, we need transparency. Any content using these technologies should have some form of credit or disclaimer, to ensure the audience knows what’s up.
Finally, accountability and traceability are key. Keeping track of source data and synthetic outputs ensures responsible use. This means finally deciding on and implementing some form of chain-of-custody solution, such as the Content Authenticity Initiative.
In essence, we’re talking about a culture of responsibility in AI usage, balancing the scales of creativity and ethics.
From a macro perspective, data privacy and security have become increasingly critical ethical concerns within the AI sphere. As AI systems extensively rely on vast amounts of your personal data, the issues of data ownership and protection have exploded. To handle this, it’s imperative to establish and enforce stringent data protection guidelines.
The problem is, we’ve already been using the internet for a few decades now, and a good chunk of your data is already out there. That forum terms-of-service you just scrolled past and clicked **I ACCEPT** on has your data, which the service can potentially monetize in any number of ways – including using it to train new AI models.
Now, ethical online services could certainly provide users with a user-friendly opt-out option, particularly for AI services that are deemed excessively invasive. Similar to the CAN-SPAM Act in the US and the GDPR in the EU, this could allow users to opt out of having their data used to train AI models.
There are several generative AI cases currently making their way through the U.S. legal system, including lawsuits against Stability AI, Midjourney, and DeviantArt, who are accused of mass copyright infringement. Microsoft, which owns GitHub, and OpenAI are also being sued over GitHub Copilot’s tendency to reproduce developers’ publicly posted, open-source, licensed code. Keep tabs on these cases, as they will shape how the combined art you make with AI is recognized.
Another critical point to consider is the potential for bias and discrimination in AI. The danger here is that biased data can lead to AI systems that further perpetuate those biases. The key to breaking this cycle is ensuring that the data used to train AI is diverse and unbiased and captures multiple perspectives. It’s also essential to regularly monitor AI systems to identify and rectify any biases that may sneak in.
Mandating that AI model providers publish their datasets to the public won’t fly, which means some form of third-party audit. The immediate thought here is a regulatory body, and I really can’t see a way around that.
Transparency in AI algorithms is also an integral part of ethical AI use. It’s important for creators to understand how and why AI systems make certain decisions. As AI for the masses is a relatively new technology, we all need to become educated on the models we’re using. This kind of transparency can lead to more informed model provider choices and quite frankly encourages a sense of accountability.
Update: OpenAI, Google, others pledge to watermark AI content for safety, White House says.
The broader societal implications of AI in the creative industry extend well beyond your editing fortress of solitude. As AI continues to transform workflows and redefine job roles, it is crucial to support your fellow creatives in adapting to these changes.
One way to support creatives is through continued education and upskilling programs. As AI tools become more prevalent, editors should embrace lifelong learning and acquire new skills to stay relevant in our industry. By developing a deeper level of understanding of AI technology and its applications, editors can leverage AI to their advantage and remain valuable contributors to the creative process.
As Phil Libin, creator of Evernote and mmhmm, and current CEO of All Turtles, said: “AI won’t replace any humans, humans using AI will.”
Phil’s quote, but my presentation – LACPUG May 2023
Like it or not, democratic governance of AI is also crucial to address potential risks and ensure responsible AI development and usage.
Transparent regulations and ethical guidelines can help shape the future of AI in a way that aligns with our societal values and prevents misuse.
I know, that last statement is loaded with several gotchas, but I really don’t see another option.
Washington has historically played catch-up on all things technology, but the current administration has published the “AI Bill of Rights,” which speaks to many of these topics.
However, as of now, it is not enforceable by law, and there aren’t any federal laws that explicitly limit the use of artificial intelligence or protect us from its harms.
At its core, what we do in post-production is all about storytelling, and AI, as fascinating as it is, is merely another tool in our toolbox. It’s true, AI can analyze data, identify patterns, and even suggest edits. Heck, it can generate content based on predefined parameters. But, no matter how advanced it becomes, it lacks the innate understanding of the human condition and the emotional and cultural context that you, as artists, possess.
To be clear: Don’t believe the job loss hype. AI is not about to take over our jobs. Please don’t fall for the classic “Lump of Labour” misconception that automation kills jobs. It’s simply not true. Technology serves as the spark for productivity enhancement, making people more efficient in their work. This increased efficiency triggers a chain reaction: it drives down the costs of goods and services and pushes up your wages. The net result is a surge in economic growth and job opportunities. But it doesn’t just stop there. It also inspires the emergence of fresh jobs and industries; ones that we couldn’t have imagined just a few short years ago.
In the face of evolving AI technology, the role of editors is not diminishing…but transforming. You are the bridge between the cold calculations of AI and the warmth of human connection. You infuse videos with a depth of storytelling that resonates with audiences, touches hearts, and sparks imaginations. AI cannot replace your creative intuition or your storytelling skills. It’s the human touch that adds the emotional depth, the nuanced transitions, and your profound connection with viewers. Remember that, my fellow creatives, and let’s shape the future of our industry together.
I’m sure you have some input on one or more of these 5 THINGS. Let me know in the comments section. Also, please subscribe and share this tech goodness with the rest of your techie friends.
Even if they are AI.
5 THINGS is also available as a video or audio-only podcast, so search for it on your podcast platform du jour. Look for the red logo!
Until the next episode: *learn more, do more.*
**Like early, share often, and don’t forget to subscribe.**
Editing & Motion Graphics: Amy at AwkwardAnthems.