Subtitles vs. Dubbing: Which is Right for Your Content? [TRANSCRIPT]

SOFIA LEIVA: Thank you for joining today’s session, Subtitles vs. Dubbing: Which is Right for Your Content? My name is Sofia Leiva, and I’m on the marketing team at 3Play Media. For those who aren’t familiar with 3Play, we’ve been a leader in accessibility solutions for the past 15 years, and in the last few years we’ve expanded our technology and leadership to include localization services.

I’m joined today by Erik Ducker, senior director of product marketing at 3Play. He’s been in the video streaming space for over a decade and has a wealth of experience building video strategies for all types of businesses. So with that, let’s get started.

In today’s webinar, we’ll tackle the biggest question around subtitles versus dubbing. We’ll dive into the current state of localization and explore how AI capabilities are transforming this landscape. We’ll look at the pros and cons of sub versus dub, factoring in audience preferences, cost considerations, and content type. And lastly, we’ll look at how you can build a comprehensive video localization strategy and prepare for the AI-driven future– so, an exciting session ahead.

Now, before we dive into the debate of sub versus dub, let’s take a look at the current state of video localization, which can shed a lot of light on how we make decisions today. Video localization is a crucial yet very complex process, as some of you may know, and it remains heavily reliant on human expertise.

While advancements in AI and automation are emerging in the industry, it still depends on traditional workflows that can be really fragmented and time-consuming. Typically, localization involves multiple manual hand-offs between translators, linguists, voice actors, and project managers, which can lead to inefficiencies, delays, and increased costs, especially for large-scale projects.

Another key challenge is the deep-rooted cultural preference for sub versus dub. In some regions, as we’ll talk about later– Northern Europe, for example– subtitles are widely accepted, while in places like Germany and France, dubbing is the more dominant choice.

So due to all of these complexities, some organizations opt for just one method, sub or dub, while others, particularly smaller companies, may abandon their globalization efforts altogether. As the technology evolves, we’re seeing new solutions emerge that can make localization more accessible, efficient, and scalable. But for now, human expertise remains at the core of this process.

So how do we navigate this complex landscape and choose the best localization strategy? It really all starts with understanding your audience. Take Squid Game, for example. When it exploded globally, viewers had two main options, subs or dubs. Some preferred subs to preserve that original performance, while others found dubs more immersive.

And the debate sparked discussion about authenticity, accessibility, and cultural nuance– exactly the factors that influence your localization choices. To make the decision for your content, you need to consider several key elements. Who’s your target audience? What are their viewing habits? Are they accustomed to subtitles, or do they prefer dubbing? What’s your budget and timeline?

So thoughtfully weighing these factors will help you craft a strategy that not only reaches your audience but also maximizes your video’s impact. Keep these four key considerations in mind as we dive deeper into the debate of sub versus dub because that will help you ultimately make a decision.

Now, I want to preface, neither option is inherently better than the other. The right choice largely depends on your strategy, audience, and goals, as we just discussed. So there’s no universal answer, only what works best for your specific needs. So keep that in mind as we break down those key differences.

All right, let’s dive into subs. So subtitles, or subs, provide a textual representation of the original audio, allowing the viewers to follow the dialogue while hearing the original language. And this makes them a popular choice for audiences who value authenticity and want to experience the actors’ original performances.

Unlike dubbing, subtitles focus solely on translating the spoken language. They typically don’t include sound effects or background audio cues. However, subtitles can also serve as a tool for translating onscreen text, such as signs, messages, or graphics that appear in the video.

There are also specialized forms called SDH, or Subtitles for the Deaf and Hard of Hearing. These go beyond standard subtitles by incorporating sound effects and speaker identifications that make the content more accessible.
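To make that distinction concrete, here’s a minimal sketch of the same cue rendered as a plain subtitle versus an SDH subtitle, in a WebVTT-style format. The timings, speaker name, and dialogue are invented for illustration:

```python
# Minimal sketch: the same cue as a plain subtitle vs. an SDH subtitle.
# Timings, speaker name, and dialogue are invented for illustration.

def webvtt_cue(start: str, end: str, text: str) -> str:
    """Format a single WebVTT-style cue."""
    return f"{start} --> {end}\n{text}\n"

dialogue = "We have to leave now."

# Standard subtitle: translated dialogue only.
plain = webvtt_cue("00:00:04.000", "00:00:06.500", dialogue)

# SDH: adds speaker identification and a non-speech sound cue.
sdh = webvtt_cue("00:00:04.000", "00:00:06.500",
                 f"[MIN-SU] {dialogue}\n[alarm blaring]")

print("WEBVTT\n\n" + plain + "\n" + sdh)
```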

One key advantage of subtitles is that they can maintain the original tone, emotion, and intent of the audio without alteration. And additionally, subtitles are supported by a wider range of platforms, making them more universally available compared to dubs. But this is changing, as we’ll talk about later on.

Lastly, I want to talk about how subtitles are created. The process typically involves automation and human expertise. AI tools can generate an initial translation. But oftentimes, this isn’t of the quality needed to create a high-quality localized output.

So human linguists are essential for reviewing and making sure that it’s accurate, that it captures the cultural nuances, and that it’s properly timed.
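As one small example of what properly timed means in practice, reviewers often check reading speed in characters per second (CPS). Here’s a toy check, assuming a 17 CPS ceiling– a common guideline, though exact limits vary by platform and language:

```python
# Toy reading-speed check for a subtitle cue.
# The 17 CPS ceiling is an assumed guideline, not a universal rule.

MAX_CPS = 17.0  # assumed ceiling for comfortable reading

def reading_speed_ok(text: str, start: float, end: float) -> bool:
    """True if the cue can be read comfortably in its time window."""
    cps = len(text) / (end - start)
    return cps <= MAX_CPS

cue = "We've been a leader in accessibility for fifteen years."  # 55 chars

print(reading_speed_ok(cue, 10.0, 13.5))  # True: ~15.7 CPS
print(reading_speed_ok(cue, 10.0, 12.0))  # False: 27.5 CPS, too fast
```

Now let’s go to Erik to intro the dubs and how they’re different.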

ERIK DUCKER: Thank you, Sofia. No hard feelings against subtitles, but I’m excited to talk about dubs. This is a rapidly emerging technology that’s more available to us than it’s ever been before. And what dubs fundamentally represent is really just replacing the original audio track.

So a subtitle is serving a purpose of translating what you’re hearing, assuming you can’t understand the language that’s being spoken. And a dub is giving that viewer an option to hear that language, hear that audio in their preferred language.

So when you’re writing and creating a dub track, you’re having to make sure that you’re mixing the translation in with the original audio and trying your best to maintain the artist’s integrity throughout the localization process.

This has predominantly been a very expensive, heavily human-involved process, which has made it really only accessible to large media companies who are redistributing or syndicating their content to countries where they don’t currently distribute.

So this process has been largely unavailable to most of us. But the introduction of AI into dubbing has brought us really, really close to being able to dub for a variety of new use cases. As mentioned, dubs can be recorded with a voice actor, which is the traditional method and continues to be a mainstay in large Hollywood productions, at least in the US, and in other large media sectors across the world.

But now AI has introduced, really just in the last two years, some amazing synthesized voice technology that’s really opened the opportunity for more people to get in the dubbing game. And one preview for this presentation: those two years we’re talking about are when this big breakthrough started. So we have a lot of time left to see the innovation snowball and continue to get better and better.

So one thing you might be asking yourself is, how do dubs and subs relate? And we hear this question all the time within our customer conversations, prospective customer conversations as well– why can’t I just use my subtitle file and just do a quick recording of my subtitle and make it a dub?

And the short answer is no, because ultimately the optimization of that localization step is different. So when you’re doing a subtitle file, you’re just translating the text to new text that’s available on the screen. And the rules of the user experience in a video player allow you a lot more room to introduce more characters, because certain languages are more efficient than others at saying the same thing.

So an English phrase might be longer in French and vice versa. So you have a lot more flexibility when it comes to a subtitle. All you’re paying attention to is the readability of that output– can the viewer keep up? With audio, when you’re translating audio, now you have a really hard constraint.

If the person is speaking English and they have five seconds of a segment, your translation needs to fit into those five seconds of that audio segment. So the translation may deviate from what the subtitle translation has. So that’s why we explain that the short answer is no. And we’ll show and illustrate this in a few moments.
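As a rough way to see that constraint in code, here’s a toy pacing check. The 15 characters-per-second speaking rate is an assumption for illustration, not an industry standard:

```python
# Toy check of the hard constraint on a dub translation: the rendered
# speech has to fit the original segment's duration.
# The pacing figure below is an assumption, not an industry standard.

CHARS_PER_SECOND = 15.0  # assumed comfortable speaking pace

def fits_segment(translation: str, segment_seconds: float) -> bool:
    """True if the translated line can be spoken within the segment."""
    est_speech_seconds = len(translation) / CHARS_PER_SECOND
    return est_speech_seconds <= segment_seconds

subtitle_style = "I can't believe you would say something like that to me."
tighter_dub = "I can't believe you said that."

print(fits_segment(subtitle_style, 2.0))  # False: must be rewritten
print(fits_segment(tighter_dub, 2.0))     # True: fits the two-second slot
```

But before we get there, let’s talk a little bit about costs.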

SOFIA LEIVA: Thanks, Erik. All right, one of the biggest advantages of subtitling is its cost effectiveness. Compared to dubbing, subtitles require fewer resources, which makes them a really budget-friendly option for localization.

So while dubbing involves hiring voice actors and sound engineers, booking studio time, and audio mixing, subtitling mainly relies on transcription, translation, and timing adjustments. This significantly reduces the production costs.

Subtitles also shine when it comes to speed. Without the need for all that voice recording and post-production work, subtitles can be created much faster than a dub. And this makes subtitling the go-to solution for projects with tight deadlines or that need quicker turnaround time.

In addition, subtitles offer a significant advantage in terms of accessibility and compliance. They’re crucial for ensuring content is inclusive to deaf and hard of hearing audiences, meeting the legal requirements that many countries and streaming platforms have.

But beyond just the compliance, subtitles are also widely accepted across various platforms and formats, as we talked a little bit about, which gives them the flexibility to scale and work across a range of different content types.

For instance, in foreign films and indie cinema, where artistic integrity is paramount, subtitles work really well; in documentaries, where maintaining that original voice adds credibility and emotional depth; or in quick-turnaround content, such as social clips, news updates, or other fast-paced media where speed is really crucial and subtitling can be completed so much faster.

However, while subtitles offer a cost-effective, fast, and universally-accepted solution, they may not always be the best fit for every audience. So with that in mind, let’s take a closer look at dubbing and its advantages and challenges.

ERIK DUCKER: Yeah, thank you. So dubbing costs are a little bit different. There are a few more steps along the way to produce a dub. So you’re going to need a little bit more time and potentially a little bit more money.

Spoiler alert, technology is making dubbing a lot more affordable, so the delta between these two services is shrinking rapidly. And so there’s going to be a great opportunity, from a cost perspective, to make a really informed decision that’s really focused on the ROI.

So dubbing, definitely, is traditionally more costly. And as I said, this is because you’re also introducing voice and actually recording a new audio track, which means you need to introduce new steps, like quality assurance of the audio itself. This is going to add one extra step of turnaround.

Now, we have automation to make this much less of a challenge and remove potential human error from the process. But ultimately, you are going to see a slightly slower turnaround than, say, a subtitling file.

Publishing might be a little bit more complex. Platforms are getting a lot better about this. But you need to make sure that however you’re hosting your video content, it can support multiple audio tracks. You need to understand how you’re going to actually operationalize this investment.

As we’ve mentioned, the Squid Game director was very vocal about not wanting his English-speaking audience to consume his show, his art, with English dubs. And the reason he defends this is that the original emotionality of the actors is so important to him.

And no matter what you do, whether you have an amazing voice actor or the perfect voice synthesis capabilities, emotionality may be altered. And so that’s something you have to be really mindful of. If it’s an informative piece of content, emotionality is going to be less important to get right than in content that’s meant to entertain.

The other thing to consider that we’re seeing in the dubbing world is lip synchronization. Guess what? AI is also scary. AI can manipulate video content to create artificial imagery and artificial human motion, so you can actually overwrite the lip movements in a video and have the new lips seemingly match the output language.

Now, my opinion– this is my opinion, warning– but that can lead to some really creepy outcomes. The lips don’t work perfectly. And you’re also edging into the ethical considerations of dubbing and using AI in this process. And it’s important to ask yourselves, how do you want to approach this, so that the original people involved in the video are represented in the most respectful way possible.

So thinking about that from a lip-syncing perspective, video being manipulated can be an ethical conundrum. The voice itself– so when you think about how you’re going to choose which voice to dub in, with voice actors you’re going to go through a casting, basically, and choose which voice you want. Maybe you don’t get much of a choice, either.

But ultimately, with AI, we’re actually seeing some really cool use cases where you have options for voice cloning or voice matching or native voices. The thing you need to pay attention to is that cloning a voice has a very specific definition. It means that you are creating a model of that person’s voice.

So if I opt to let someone clone my voice– Erik Ducker’s voice– that means that model can be used to say anything in my voice. A voice match, by contrast, is basically algorithmically mapping or choosing against a catalog of voices: what’s the closest to Erik Ducker’s voice for this video?

So it’s not actually taking my voice. It’s just trying to pick a voice that isn’t going to interfere with the overall quality of the experience at the end of the day. So it’s something really important to think about when you’re considering using AI in your dubbing situation.
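For intuition, voice matching is essentially a nearest-neighbor search over a licensed voice catalog. Here’s a minimal sketch, assuming toy embedding vectors– a real system would compute these with a speaker-embedding model:

```python
# Rough sketch of voice *matching* (not cloning): pick the closest
# catalog voice by embedding similarity. The vectors are toy values;
# a real system would derive them from a speaker-embedding model.

import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

catalog = {  # hypothetical catalog of licensed synthetic voices
    "voice_a": [0.9, 0.1, 0.3],
    "voice_b": [0.2, 0.8, 0.5],
}
speaker = [0.85, 0.15, 0.35]  # embedding of the on-screen speaker

best = max(catalog, key=lambda name: cosine(catalog[name], speaker))
print(best)  # "voice_a": the closest catalog voice, not a clone
```

So this leads us to the question of, how do you know? How do you choose? And I really like this example– we keep coming back to Squid Game.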

This is the user experience for choosing. When you go to Netflix and you pop open how you want to consume this content, you have all of these options for audio and all of these options for subtitles. And you’ll note that the audio options include the dubs as well as audio descriptions provided for blind and low-vision users.

Subtitles, you’ll see that there’s an English subtitle, and there’s an English CC subtitle. Those are two different files. The English subtitle is a translation from the Korean. The English closed caption is a caption file of the English dubbed audio.

So those English files are actually different files. And as a platform or a content publisher, you need to think about how you’re going to present this to your users so there’s very clear guidance on which text format and audio format they should be choosing.
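One hypothetical way to model that guidance is explicit track labeling, so viewers can tell which text file pairs with which audio track. The structure below is illustrative, not Netflix’s actual metadata schema:

```python
# Illustrative track metadata for a player menu; not a real schema.

audio_tracks = [
    {"lang": "ko", "label": "Korean [Original]"},
    {"lang": "en", "label": "English [Dub]"},
    {"lang": "en", "label": "English [Audio Description]"},
]

text_tracks = [
    # A translation of the original Korean dialogue:
    {"lang": "en", "kind": "subtitles", "label": "English"},
    # Captions of the English dub, with sound cues (a different file):
    {"lang": "en", "kind": "captions", "label": "English [CC]"},
]

# Make the distinction explicit so viewers can pair text with audio.
for track in text_tracks:
    print(f'{track["label"]} ({track["kind"]})')
```

So let’s get into the audience preferences. And we’ll start with subtitles, Sofia.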

SOFIA LEIVA: Yeah, we’re big Squid Game fans over here. Before I talk about the audience preferences, let me know in the chat, do you have a preference in the content that you view? Do you prefer subtitles or dubbing? Definitely let us know here in the chat.

So subtitles are widely preferred in countries such as China, South Korea, India, and Japan. And several key factors contribute to this. Audiences often prefer to hear the original voices of actors rather than the localized voiceovers.

As Erik talked about, the mismatch of lip movements in dubbed content can also be really distracting, making subtitles a more immersive option in that regard. And subtitles also provide an educational benefit, helping viewers learn new languages while consuming that media.

Do keep in mind that the research has largely been in the entertainment space. And more research is coming out for other types of content as well. But I do see in the chat, a lot of you are sub supporters. While subtitles are widely accepted in many regions and for various content types, they’re not always the best fit for every audience.

Some viewers find reading subtitles distracting. If you’re multitasking while watching TV, you may miss something. And others prefer a fully immersive experience in their native language. So this is where dubbing comes in handy. But let’s explore those viewer preferences with Erik.

ERIK DUCKER: I find it really fascinating, because I’m also in the subtitle crew. But I think it’s going to change. I think we’re at a really, really interesting point in time where– and we’ll show you some data from our friends at YouTube– dubbing is just new. And so it’s weird to us, especially in America. It’s a weird concept for us.

But dubbing is very popular in certain parts of the world– Russia, Germany, France, just to name a few. Eastern Europe, in general, is home to cultures that typically embrace dubbing. And that’s partly to do with how they were brought up around media.

A lot of media came from the West and was dubbed into their native tongue for nationalistic purposes, for cultural-retention purposes, or simply because that’s the only way it was delivered to them. So there are a lot of ingrained cultural preferences toward dubbing.

But I think what we’re learning, especially in our conversations, is that dubbing is an amazing tool for audiences where reading is not a strength. For probably many of us on this webinar, we’re all very comfortable reading English captions. But for a child, reading might not be their best skill set today.

And so we’re seeing a lot of opportunity in using dubbing as a mechanism to take learning content from other cultures and translate it for kids who aren’t yet equipped to read subtitles. So there are a lot of reasons to consider dubs beyond, say, a cultural preference. There are real applications for dubbing.

For example, I think the reason we sometimes have a visceral reaction to dubbing is when it’s not great. And especially in content that’s meant for entertainment, the dubbing quality can take us out of the moment. It can take us out of the experience. And that’s usually what our current viewpoint of dubbing is.

Once again, I turn on subtitles. I don’t really like watching dubbing for my entertainment purposes. But that is the challenge that we are faced with. And we’re seeing this in real time as dubbing becomes more popularized, especially through the social platforms.

So, Sofia, I think you had a couple of really cool stats that will help move this conversation from where we are today to where we expect to go going forward.

SOFIA LEIVA: Yeah, this is really interesting. New data is showing a shift in viewer behavior. According to insights from a YouTube poll with over 120,000 respondents, 61% of people claimed they prefer subtitles. However, actual viewing analytics told a completely different story.

When videos were dubbed, viewers were 40% to 50% more likely to watch until the end compared to the subtitled version. So why this contradiction? As we talked about, many viewers may say they prefer subtitles for authenticity. But in practice, dubbing leads to higher engagement. And this is because reading while watching can be cognitively demanding. And many people prefer a more immersive experience.

What’s interesting is platforms like YouTube and Netflix are now investing heavily in multi-language audio features, as we saw with the example that Erik shared. And this is making dubbing more accessible. AI technology is evolving rapidly and reducing those costs, which is allowing more people to localize their content with dubs. And for creators, the shift towards dubbing is not just about accessibility and immersion. It’s also a massive monetization opportunity.

So that same study found that dubbing actually led to an increase in watch time, expanded reach, and additional ad revenue. And a huge creator on YouTube, specifically, who’s seeing this is MrBeast. So if you go to his YouTube channel, you can see all of his dubbed content in different languages.

ERIK DUCKER: Fun fact about MrBeast, he actually owns his own localization company underneath the MrBeast umbrella. He’s taking this so seriously that he’s invested in an entire operation to make sure his content reaches as many eyeballs as possible through YouTube and other platforms.

SOFIA LEIVA: Yeah, very cool. And it’s very interesting to watch his content with the dub. He does use humans in the process because, as we’ve talked about, they’re still essential for that quality.

As Erik is going to talk about next, dubbed content is likely going to become an even more prevalent part of localization with continued advancements in AI technology. So I’ll preface it this way: the real question soon won’t simply be dub versus sub. It’s going to be how quickly content can be localized to reach global audiences.

ERIK DUCKER: Yeah, thank you, Sofia. And I think this is really the turning point in our time today, which is we really want to make sure that we give you something to think through and work on after this. And the call to action here is, this is where the future is headed. What are you doing today to get ready for the future?

If you’re in a localization decision-making role, or you’re curious about what your company can do to become a more global company, or your content needs to be monetized, these are the steps you need to start thinking about taking seriously in order to be ready for all of this technology innovation that’s happening around us, whether we like it or not.

And so, where we’re seeing the state of video localization moving: first, we’re going to see more improvement. What you’re seeing today in samples and demos is two years old. The biggest breakthroughs in some of this dubbing and localization AI technology are just the beginning of where we’re headed.

We’re expecting voice quality to continue to get better and the emotionality of those voices to get better. We’re going to see dubs, as they become more popular, change how we approach subtitling. Today, the conversation might be which one do I invest in next.

But really, once the costs come down, your menu for a video might look like Netflix’s. Netflix spends a lot of money on localization today because they have the ROI for it. And so as the costs come down for your content, the ROI is going to start making sense. And your user preference menu is going to start exploding in terms of optionality.

And most importantly, as we move to more technology-focused solutions– where instead of trying to help humans be more efficient, we’re using technology and inserting humans in a review-like process– we really need to make sure that we’re investing in solutions that streamline our operations.

So whether your video is hosted in Brightcove or an Amazon S3 bucket, how you’re integrating your localization workflows with where your video is hosted is going to be really important.
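As a minimal sketch of what that integration could look like, assuming your video sits in S3: the S3 calls below use the real boto3 library, but LocalizationClient is a hypothetical placeholder, not 3Play’s or any vendor’s actual SDK:

```python
# Hedged sketch: pull a video from S3, hand it to a localization
# service, and push the outputs back with no manual file shuffling.
# boto3 is real; LocalizationClient is a hypothetical placeholder.

import boto3

class LocalizationClient:
    """Hypothetical vendor client; swap in your provider's real SDK."""
    def submit(self, path: str, target_lang: str) -> "LocalizationClient":
        return self
    def wait_for_outputs(self) -> dict[str, str]:
        return {}  # e.g., {"subtitles.vtt": "/tmp/out.vtt"}

def localize_from_s3(bucket: str, key: str, target_lang: str) -> None:
    s3 = boto3.client("s3")
    local_path = "/tmp/source.mp4"
    s3.download_file(bucket, key, local_path)

    job = LocalizationClient().submit(local_path, target_lang=target_lang)
    for name, path in job.wait_for_outputs().items():
        # Outputs land next to the source, with no manual hand-off.
        s3.upload_file(path, bucket, f"localized/{target_lang}/{name}")
```

So let’s talk about building that strategy.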

So we’re going to build our strategy together. And the first thing is– ultimately, it’s February. It’s a bold ask, but you should start thinking about next year already. What are you doing this year to prove an ROI and a model that’s going to help you defend and build a budget for next year?

Ultimately, your budget’s probably pretty fixed right now. So it’s really important to think strategically about what are the little things, what are the small experiments of what you can do to identify your path to growth with localization in 2026.

So we’re going to go step by step as quickly as we can. But we’re going to really focus on, ultimately, building a business case. And this business case is just like any other product– I’ve been in product for 10 years. It’s really about, hey, our goal may be two years out, but how do we build checkpoints along the way to confirm we’re on the right track?

So what can we do in the next three months, the next six months, the next nine months to get us in a spot where we can make a really, really smart business strategic decision to invest or to not invest in further localization?

So first things first with building our business case is going to be defining these business objectives. Is it increasing global audience engagement? Well, let’s define what that is. We need to have something that’s measurable.

Is it number of eyeballs? Is it eyeballs in a very specific country, part of the world, or a group of countries? Is it a language demographic that we’re really focused on, a region? You have to really think deeply about that. Is it to improve onboarding outcomes? So are you doing a training?

Are you self-onboarding people from multiple countries? How are you using localization to engage with those customers? Are we trying to drive revenue with localized content? What do we need to do to prepare to deliver to Deutsche Telekom in Germany? What are their rules? We need to do that research up front.

So then, once we’ve identified our business objectives, the next step is gathering some of the data points. What are our costs? What’s going to be the cost of subtitling our content? What’s going to be the cost of dubbing our content?

What do we think the outcomes are? What does success look like? What is the market demand? Are there enough eyeballs available if we’re an ad-based revenue company? Can we raise CPMs enough by introducing Spanish or German as an output of our dubbed content?

This can start really small scale. This can start with customer interviews. This can start with market research. This can start with a test. Go do five videos. See how they perform next to the other five videos that you just published.

So it’s really important that you’re tracking this along the way. Because at the end of the day, you’re going to need to build a business case for the finance team that says, hey, I have a path for us to scale and continue to grow our monetization of our video content across the globe.

I can assure you, MrBeast did the same thing as he tested each market one by one. And now he has a really, really strong global strategy that he’s expanding over and over and over again.

So the next thing is, how do I prioritize? You have all this data in mind. How do I prioritize my time? I can’t do everything at once. So what are the highest ROI markets that I can consider? Maybe I should test subs in one market and dubs in another market. Maybe I should do an A/B test in both.

Really pay attention to how you’re prioritizing. Cost-sensitive markets– not all CPMs are created equal in the ad business. A CPM in Southeast Asia is very low relative to a CPM in Scandinavia. And so it’s really important for your monetization strategy to match what’s realistic in terms of your audience on the other side.
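To make that math concrete, here’s a toy break-even calculation– every number is invented for illustration:

```python
# Toy break-even sketch for dubbing one video into one market.
# Every number below is invented for illustration only.

dub_cost = 500.00          # hypothetical cost to dub one video (USD)
expected_views = 200_000   # hypothetical incremental views in the market
cpm = 1.20                 # hypothetical ad revenue per 1,000 views (USD)

incremental_revenue = expected_views / 1_000 * cpm   # $240.00
roi = (incremental_revenue - dub_cost) / dub_cost    # -52%: not worth it

# The same video in a higher-CPM market flips the decision:
cpm_high = 6.00
revenue_high = expected_views / 1_000 * cpm_high     # $1,200.00

print(f"Low-CPM market ROI:  {roi:+.0%}")
print(f"High-CPM market ROI: {(revenue_high - dub_cost) / dub_cost:+.0%}")
```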

Test and measure– I think the real point here is that just throwing a video out there without a goal, without a benchmark, without measuring is not going to help you build that case. So you have your goals. You have your skeleton of what you want to do. Next is how you’re going to do it.

So this is where it comes down to understanding the market in front of you. There are hundreds, hundreds of solutions that are going to help you with video localization. 3Play is one of them. So we want to be very honest and up front: we are just one of those solutions. And it’s really important to understand what type of solution you’re looking for in your business.

So we’ve built this little map out– and it’s more of a spectrum– where on one end you have tons of technology, which we call building blocks. These are your core, foundational AI models. That could be an automatic speech-recognition engine. That could be a machine-translation provider. That could be a synthesized-voice provider.

These building blocks are the foundation of implementing a localization solution that’s grounded in AI capabilities. So your engineering team could go build this tomorrow– there are plenty of tools off the shelf. But you probably don’t want to build that for 10 videos. You probably don’t want to build that for one video.
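In sketch form, that build-it-yourself chain is just three building blocks wired together. Every function here is a hypothetical stand-in for an off-the-shelf ASR, machine-translation, or synthesized-voice service:

```python
# Hedged sketch of the building-blocks approach to an AI dub:
# speech recognition -> machine translation -> synthesized voice.
# Each function is a stub standing in for a real off-the-shelf service.

def transcribe(audio_path: str) -> list[dict]:
    """ASR block (stub): timed segments of the source speech."""
    return [{"start": 0.0, "end": 5.0, "text": "Hello, everyone."}]

def translate(segments: list[dict], target_lang: str) -> list[dict]:
    """MT block (stub): translate each segment's text."""
    return [{**seg, "text": f"[{target_lang}] {seg['text']}"} for seg in segments]

def synthesize(segments: list[dict], voice: str) -> bytes:
    """TTS block (stub): render the translated segments as audio."""
    return b""  # a real service returns an audio track to mix in

def build_dub(audio_path: str, target_lang: str, voice: str) -> bytes:
    segments = transcribe(audio_path)
    return synthesize(translate(segments, target_lang), voice)
```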

That is a really good option for certain types of businesses in which you’re producing thousands and thousands of videos. And maybe the shelf life of those videos is really, really short. So speed and control are 100% on your side.

But at the same time, if you’re looking for high, high quality that has human review, you’re going to have to think about, OK, if I stand up this service with all these building blocks, who’s going to manage it? How do I make sure the quality is good? Who’s going to help me do that? So that’s one segment.

So if you have people who want to go build this, totally acceptable. Go build it. There are all these tools out there. The next set of solutions that you’re going to run into in this market are bring-your-own-workforce applications. They’re going to be the end-to-end, hey, you give us a video, we’re going to give you a dub and a subtitle file back. Lots of solutions like that. It’s all AI.

Great. You’re going to lose a little control over the quality of the AI, because you’re going to be beholden to whatever models that company chooses. And they might give you some editing capabilities. But ultimately, you’re going to be on the hook for bringing your own human workforce behind the scenes.

So once again, great solution for 1, maybe 5 videos, 10 videos once a month, whatever it might be. If you’re needing to publish 10, 15, 20 videos every single week, this might get out of hand pretty quickly. Or if you’re trying to do a backlog of video content, this might get a little bit out of hand.

So then there’s going to be solutions in the market that are what we consider end-to-end solutions. So they’re going to have all the building blocks. They’re going to build the outputs for you. But they’re also going to have humans in the process. And you can work with them to decide how much human intervention you want in the process.

So that end-to-end solution, you may say, hey, I just want like a light QA of the audio. You may say, hey, every single step of the way, I need humans to review the transcript, the translation, the audio quality. Oh, and I want the human to make sure that we’re fixing all the pronunciation that the synthesized voices may have messed up.

So there’s a lot of optionality that exists out there. And as a prospect in this space, you have to make sure you identify which store you’re trying to go into, because there are plenty of options in each. But it’s important that you go into the right store. Otherwise, you’re going to be talking about and solving the wrong issue in front of you.

So once you’ve decided how you’re going to solve this problem, you’re going to go optimize your strategy. You’re going to start testing. You’re going to start learning how this is performing. And you’re going to continue to test and deliver data. So start small. Do a few videos. See how it performs. Do a larger set of videos. See how that performs.

Do I want to try adding higher quality to this dub? Do I want to take away quality from this dubbed output? Play around with that strategy, because ultimately, at the end of the day, the next step is really presenting this back to the business and saying, here is our strategic vision of how we want to do localization.

It’s going to be a mixture of AI, infused with a little bit of human review on this type of content. We can use AI only for all these other pieces of content. Whatever it might be, you have, basically, a formula for how you’re going to grow your business through localization. And it’s all backed by your data and built on your testing. And you can continue to scale that over and over.

So that’s a lot. Sofia is going to offer a few actual applications of what we’ve heard in the market– what customers in general are doing to localize for these markets.

SOFIA LEIVA: Yes, definitely. And we’ve created a handout that covers all of these steps. So you’ll receive the link to that tomorrow as well. So don’t feel like you have to write it down super quickly.

So when it comes to video localization, we’re seeing a lot of diverse strategies emerging across different markets and industries. Today, I’m just going to cover three. In the corporate and technology space, there’s a focused approach centered on the user experience.

We see companies using dubbing for high-visibility videos, such as a welcome video or an onboarding video, to ensure accessibility and a really seamless global launch experience.

For other types of content, subtitling is used if it’s really time sensitive or if it’s a different use case. As these organizations create more evergreen assets, we’re witnessing them take a more proactive approach towards localization planning and incorporating both subtitling and dubbing into their strategies.

In the education and e-learning sectors, they’re facing a unique challenge because they need their audience to understand and retain the complex and technical terminology present in their content. So to tackle this, we’re seeing these organizations offer both subtitles and dubbing– or they’ve been offering subtitles for a while, and now they’re testing dubbing to see if it will improve engagement and retention.

AI dubbing is really being embraced by this industry because it’s cost effective. And when complemented with that human review, it maintains the accuracy of that learning content. We’re also witnessing a shift towards a more centralized approach, where both subtitles and dubbing can be created in one unified platform.

And as one localization manager put it, what we’re looking for is a more reliable solution, a single platform I can depend on for all 14 languages, because they have big expansion plans, and they want to do that efficiently. And that highlights the future of localization that Erik alluded to.

In the entertainment and cinematic realm, there’s a growing interest around the speed and cost effectiveness promised by automated solutions, particularly AI dubbing. However, the artistic integrity remains the utmost priority. So they’ve traditionally used voice actors and subtitling.

As we saw with the Netflix example, both are offered. But we are observing some openness towards AI-enabled solutions. Customers who’ve done synthesized audio description, in particular, may be more receptive to trying AI dubbing with their audience.

However, the key here is that the technology must meet certain quality standards, and it needs to align with cost considerations and be accepted or demanded by the market.

Another thing to consider in M&E is that there are guilds and regulations in place to prevent an AI takeover. So as a result, companies are exploring markets that are more receptive to AI and are carefully evaluating whether there is a commercial opportunity.

So as you can see, there are different approaches to localization within these verticals. So it’s really important to tailor your strategies to what you’re seeing and to stay informed on the AI capabilities that are coming.

We’re coming to a close. Please enter in the Q&A or chat any questions that you may have, any burning questions. Anything we didn’t answer, we’ll do our best to provide some guidance. But if you’d like to dive deeper into subs and dubs, Erik and I are happy to support any follow-up questions. You can send us an email that we’ve included in the chat and on this slide.

There’s a QR code. And that goes to a resource page for you all to keep learning about the localization space. All right, I’m going to get started with some of the questions coming in here.

The first one is, with AI-powered dubbing gaining traction, how do you ensure that the emotional nuance and cultural authenticity of performances are preserved? Any insights, Erik?

ERIK DUCKER: Yeah, I think that’s the question on all of our minds. And there are a few really interesting things happening. The first is that large language models, LLMs, are getting better and better at providing, we’ll call it, post-machine-translation edits to guide further optimization of cultural context in the translation.

So there’s a lot of testing and research happening in that space from an automation perspective. The current best practice is really introducing humans into that post-machine-translation-edit workflow. And those humans are paying attention to glossaries provided by the customer. They’re traditionally going to be in-country resources for that particular culture.
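A hedged sketch of what that post-machine-translation edit step can look like with an LLM in the loop– call_llm is a hypothetical stand-in for whatever model client you use, and the prompt and glossary handling are illustrative:

```python
# Hedged sketch of an LLM post-edit pass over a machine translation.
# call_llm is a hypothetical stand-in for your model client.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; wire up your provider's client here."""
    raise NotImplementedError

def post_edit(source: str, draft: str, locale: str,
              glossary: dict[str, str]) -> str:
    """Ask the model to fix the draft while honoring a customer glossary."""
    terms = "\n".join(f"- {src} -> {tgt}" for src, tgt in glossary.items())
    prompt = (
        f"You are a translation post-editor for {locale}.\n"
        "Fix mistranslations and awkward phrasing; preserve meaning "
        "and approximate length.\n"
        f"Honor this customer glossary:\n{terms}\n\n"
        f"Source text: {source}\n"
        f"Draft translation: {draft}\n"
        "Edited translation:"
    )
    return call_llm(prompt)
```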

So there’s a lot of opportunity for humans to really focus on just that piece. So that’s from the translation side. The other thing that’s happening that we’re seeing in the audio side, because I think that’s really important, is the audio experience has to also capture the emotion.

One, voices generally are just getting better and a little bit more dynamic in terms of reading punctuation within a sentence. But also, there’s more tooling involved. There’s what we call speech-to-speech technology, where anyone, whether you’re a voice actor or myself, can re-speak a line of synthesized-voice content and change the actual output.

So we can change the emotion. We can provide excitement for that sentence or a little bit of melancholy for that sentence. And the speech-to-speech technology is going to mimic that emotion and apply it to the actual voice that’s being synthesized.

So it’s not going to take my voice. It’s just going to take the emotion from my voice, or the pronunciation from my voice, and apply it to the voice that is meant for that particular piece of content. So we’re seeing a lot of innovation there– and from 3Play Media specifically, we’re investing a lot in that for high-emotion content.

SOFIA LEIVA: Yes, definitely. A lot of great demos that we can also share so you can see how cool the technology is. The next question is really interesting. Given the rapid advancements in AI-generated subs and dubs, do you see a future where automation fully replaces human oversight in media localization?

ERIK DUCKER: I think that’s everyone’s dream. We get a lot of individuals in companies who have this expectation that AI is supposed to be this magical solution for everything. And the reality is, it’s really important for us to understand the limitations of what AI has to offer.

Just like we see in the transcription space on the accessibility side of our business, 95% accuracy is not perfect. And for accessibility applications, where you need 99% to 100% accuracy for content to be fully accessible, computers are plateauing. There’s a lot of plateauing in those capabilities. So I would expect us to see some plateau in terms of performance.

I think with the translation steps, there’s not a binary “yes, this is accurate” or not. There’s a lot of opinion in terms of the quality review. So depending on the content– for high-touch media content– I don’t anticipate that anytime soon. It could be decades from now before AI replaces it.

I think there will be other reasons why AI takes more of the pie. But it won’t necessarily be because AI is better. I think there’s just going to be a very big emphasis on “we’re OK with that” because it’s so much more affordable than existing processes. So the quality isn’t necessarily ever going to be better than humans, but it’s potentially going to be more accepted over time in the years to come.

SOFIA LEIVA: Also, with localized content, you have to think of where you’re delivering it. Even in a simple translation, there’s certain terminology that you may use in Latin America that’s different from what you would use in Spain. And so you want to make sure that it’s localized to that specific audience. And that’s something that AI really struggles with.

We have time for one more question. And this is, media companies face tight deadlines. What are the best workflow optimizations to ensure fast turnarounds without sacrificing accuracy and quality?

ERIK DUCKER: Yeah. Most of the conversations we’re having, Sofia, have typically been about the start and end of the file process of the service line. And where we see the most gaps is when you introduce manual processes where you could easily automate.

So when we think about file transfer– if your video is hosted in Brightcove, why should you have to download that file ever again? Just press a button, have it populate in 3Play Media or whatever service you want, and have the final service automatically pass it back. That reduces any kind of human error.

And when I say human error, I mean, hey, someone went on vacation. You could have your file back whether someone is working online or not. So those pieces are really, really important. And they’re not necessarily always super easy because there’s a lot of coordination from a technical perspective to make sure that the workflows make sense.

The users have to understand what an order means and what a delivered service means. So it’s not automatically easy from a workflow-management standpoint. But technically, it’s super easy. The API world just makes everything so much simpler.

And then, really, within that process, it’s similar. We’re focused on making sure that once we have that file, the only time a human is really touching that file is an activity of editing and improving the file outcome. There’s no human involved in moving files around inside our process.

So once again, all of those aspects really make sure that the human touch is focused on extremely, highly important discernment activities to make sure that the output is really awesome.

SOFIA LEIVA: Thanks so much. All right, that’s all the time we have for today. But thank you to our audience for joining us. And I hope everyone has a wonderful rest of your day.