Overview
The lip sync system in Neverness to Everness controls how character mouth movements align with spoken dialogue during cutscenes, story sequences, and character interactions. Lip syncing was one of the most discussed aspects of the Co-Ex Test beta, with players identifying it as a major area for improvement. Hotta Studio acknowledged the feedback and confirmed lip sync improvements as a priority in the official launch roadmap.
Beta Feedback
During the Co-Ex Test beta in February 2026, players noticed that character lip movements frequently did not match the spoken audio. The desynchronization was particularly noticeable during story cutscenes and character dialogue sequences. Community feedback across forums and social media highlighted lip syncing as one of the most complained-about aspects of the beta experience.
The issue affected multiple languages, suggesting it was a systemic problem with the lip sync technology rather than a localization-specific bug. Reviewers noted that while the voice acting quality was strong, the visual disconnect between audio and mouth movement broke immersion during narrative moments.
Planned Improvements
The launch roadmap published in February 2026 lists lip sync improvements alongside several other character presentation upgrades:
| Improvement Area | Details |
|---|---|
| Lip Syncing | Improved synchronization between mouth animations and spoken dialogue across all languages |
| Hair and Facial Models | Enhanced detail in character hair rendering and facial geometry |
| Base Animations | Improved idle, walking, and general character animations |
| Combat Animations | Refined attack, skill, and reaction animations during battle |
| Localizations | Translation quality improvements for non-English text |
These improvements are grouped under character presentation as a collective priority. The developers have committed to addressing all of these areas before the April 29, 2026 global launch.
Technical Challenges
Lip syncing in a multi-language game presents unique challenges. Neverness to Everness supports multiple voice-over languages, and each language produces different mouth shapes (visemes) at different timings. A robust lip sync system must either generate per-language mouth animations or use a procedural system that adapts to audio input in real time.
Modern approaches to lip sync in games typically fall into one of several categories:
| Approach | Details |
|---|---|
| Pre-baked animations | Mouth movements are hand-animated or motion-captured for each line of dialogue in each language. High quality but extremely time-intensive for large scripts. |
| Procedural / phoneme-based | The engine analyzes audio waveforms or text input to generate mouth shapes in real time. More scalable but can appear generic. |
| AI-driven | Machine learning models predict appropriate visemes from audio input. Increasingly common in 2025-2026, offering a balance of quality and scalability. |
Hotta Studio has not disclosed which approach NTE uses, but the scope of the improvements suggests a significant rework of the existing system rather than minor tweaks.
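As an illustration of the procedural / phoneme-based category described above, the core of such a driver is a timed phoneme stream mapped onto viseme keyframes. The sketch below is hypothetical: the phoneme labels, viseme names, and data layout are illustrative assumptions, not Hotta Studio's actual pipeline.

```python
# Hypothetical sketch of a phoneme-based lip sync driver.
# Phoneme and viseme names are invented for illustration; real systems
# use much finer-grained sets (often 40+ phonemes, 10-15 visemes).

PHONEME_TO_VISEME = {
    "AA": "open",    # as in "father"
    "IY": "wide",    # as in "see"
    "UW": "round",   # as in "you"
    "M":  "closed",  # as in "mum"
    "F":  "teeth",   # as in "fee"
}

def build_viseme_track(phonemes):
    """Convert timed phonemes [(phoneme, start_sec, end_sec), ...]
    into viseme keyframes the animation system can blend between."""
    track = []
    for phoneme, start, end in phonemes:
        # Unknown phonemes fall back to a neutral mouth shape.
        viseme = PHONEME_TO_VISEME.get(phoneme, "neutral")
        track.append({"viseme": viseme, "start": start, "end": end})
    return track

# Example: timings for a short spoken syllable ("ma-m"), in seconds.
line = [("M", 0.00, 0.08), ("AA", 0.08, 0.25), ("M", 0.25, 0.33)]
print(build_viseme_track(line))
```

Because the phoneme timings come straight from the loaded audio (or its transcript), the same driver works for any voice language, which is exactly the scalability advantage the table above describes.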
Impact on Story Experience
Neverness to Everness places heavy emphasis on its narrative. The main story follows the player as an Appraiser working out of the antique shop Eibon, and character relationships are developed through extensive dialogue sequences. Accurate lip sync is critical for these moments, as players spend significant time watching characters speak during story chapters, companion quests, and daily life interactions.
The quality of lip sync also affects the perceived production value of the game. As a free-to-play title competing with other high-budget anime RPGs, strong facial animation helps NTE stand out in character presentation.
Current Status
Lip sync improvements are actively being worked on as of the latest developer communications. The launch roadmap positions this as a pre-launch priority, meaning the improvements should be visible in the April 29, 2026 release build. Players who experienced the Co-Ex Test beta should expect noticeably improved lip synchronization at launch.
Supported Voice Languages
Neverness to Everness ships with three fully recorded voice tracks at global launch: English, Japanese, and Chinese (Mandarin). Each language is handled by a dedicated cast of voice actors and is bundled with its own dialogue audio files, meaning the lip sync engine has to drive mouth animations from three different phonetic streams. Players can swap between tracks at any time through the in-game settings without restarting the session, and the voice language is independent from the subtitle language, so mixing a Japanese voice track with English subtitles is fully supported.
Because the same scene can be replayed in any of the three languages, the system cannot rely on a single set of pre-baked mouth animations. The lip sync pipeline must either generate a separate animation track per language during production or react to the currently loaded audio clip at runtime. This is one of the main reasons early beta builds struggled: a shared animation track works for only one language at a time, and the other two will drift.
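The per-language track problem described above can be made concrete with a small lookup sketch. This is a hypothetical illustration, assuming pre-generated per-language viseme tracks and an invented file-naming scheme; nothing here reflects NTE's actual asset layout.

```python
# Hypothetical sketch: resolving a per-language viseme track at runtime.
# Scene IDs, language codes, and file names are illustrative assumptions.

VISEME_TRACKS = {
    # One pre-generated mouth-animation track per recorded voice language.
    ("scene_01", "en"): "scene_01_en.viseme",
    ("scene_01", "ja"): "scene_01_ja.viseme",
    ("scene_01", "zh"): "scene_01_zh.viseme",
}

def resolve_track(scene_id, voice_language):
    """Pick the viseme track that matches the currently loaded audio
    clip, so swapping voice language mid-session also swaps the
    mouth animation instead of leaving a mismatched shared track."""
    key = (scene_id, voice_language)
    if key in VISEME_TRACKS:
        return VISEME_TRACKS[key]
    # Falling back to a single shared track is the beta failure mode:
    # it stays in sync for one language and drifts for the others.
    return VISEME_TRACKS[(scene_id, "zh")]  # original production language

print(resolve_track("scene_01", "ja"))  # prints scene_01_ja.viseme
```

The alternative to this lookup, as the paragraph above notes, is a runtime system that reacts to the loaded audio clip directly and needs no per-language assets at all.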
| Language | Track | Availability | Notes |
|---|---|---|---|
| English | Full dub | Available at launch | Covers main story, character stories, and the majority of side content. Localized script is paced to match mouth movement where possible. |
| Japanese | Full dub | Available at launch | Recorded with a Japanese cast sourced through the same industry circles as other anime-style RPGs. Popular default for players who prefer seiyuu performances. |
| Chinese (Mandarin) | Full dub | Available at launch | Original production language for Hotta Studio. Often the most tightly synced track because the script, storyboards, and animations start from this version. |
| Korean | Text only | Not confirmed for launch | Korean localization covers the UI and subtitles, but a Korean voice dub has not been confirmed by Hotta Studio or Perfect World as of the launch roadmap. |
Technical Implementation
Neverness to Everness is built on Unreal Engine 5, which gives Hotta Studio access to the engine's character rigging, morph target, and animation blueprint systems. Unreal's standard workflow exposes a set of mouth blend shapes on each character head mesh, and the lip sync component chooses which shapes to activate based on the audio that is currently playing. This is the same general pipeline used by most modern Unreal titles, though the specific driver (hand-authored animation, phoneme detection, or an audio-to-viseme neural network) is a choice each studio makes.
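To show what "choosing which shapes to activate" means in practice, here is a minimal sketch of blending mouth blend-shape weights, analogous to the morph target workflow described above. The shape names, the easing curve, and the function shape are all invented for illustration; this is not Unreal API code.

```python
# Hypothetical sketch of driving mouth blend-shape weights toward a
# target viseme. Shape names and easing are illustrative assumptions.
import math

MOUTH_SHAPES = ["jaw_open", "lips_wide", "lips_round", "lips_closed"]

def blend_visemes(target_weights, blend):
    """Ease each mouth shape toward its target weight (0..1).
    `blend` runs 0 -> 1 across the transition into the next viseme,
    so the mouth interpolates instead of snapping between phonemes."""
    weights = {}
    for shape in MOUTH_SHAPES:
        target = target_weights.get(shape, 0.0)
        # Cosine ease-in/ease-out so the motion reads as muscle movement.
        eased = target * (1 - math.cos(blend * math.pi)) / 2
        weights[shape] = round(eased, 3)
    return weights

# Fully blended into an "open mouth" viseme:
print(blend_visemes({"jaw_open": 1.0}, blend=1.0))
```

In an actual Unreal project these weights would be fed to the head mesh's morph targets each frame (for example from an animation blueprint); the sketch only captures the weight-computation step.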
Hotta Studio has not publicly confirmed which approach NTE uses at launch, but the scope of the improvements listed in the launch roadmap suggests a meaningful rework rather than a pass of minor tuning. The roadmap groups lip syncing with hair models, facial models, base animations, and combat animations, which points to a coordinated overhaul of the entire character presentation layer rather than isolated fixes.
Viseme accuracy also depends on the underlying facial rig. A head model with very few mouth blend shapes simply cannot form the range of shapes that real speech requires, no matter how smart the driver is. The hair and facial model upgrades mentioned in the roadmap are therefore directly connected to the lip sync improvements: better topology and more blend shapes give the lip sync system more to work with.
Application Areas
The lip sync system runs in every context where a character speaks on screen. The quality bar differs between these contexts, with cinematic cutscenes receiving the most attention and ambient chatter relying on a lighter pass.
| Context | Description | Sync Priority |
|---|---|---|
| Main story cutscenes | Scripted narrative beats where characters deliver critical plot information. These use the most polished facial animation and the tightest lip sync. | Highest |
| Character stories and side quests | Personal story arcs for each playable character and the longer side quest chains found in the open world. Fully voiced in all three languages. | High |
| Character introductions | Short introductory sequences that explain a character's kit and background. Short enough that reviewers tend to notice lip sync issues immediately. | High |
| Companion dialogues | Bonding dialogues unlocked as the player builds relationships with specific characters. Often delivered in quiet close-up shots where mouth movement is very visible. | High |
| Social interactions and daily life | NPC conversations in shops, on the street, and in apartments. A large volume of short lines that need to be synced without exhausting the animation budget. | Medium |
| Commissions and open world barks | Short spoken lines triggered during combat, exploration, or mini events. Usually a few words long, so minor drift is less noticeable. | Medium |
| Stories From Eibon shorts | In-engine short films released by Hotta Studio on YouTube. These mix cinematic cutscenes with stylized sequences and are held to a high lip sync standard because they function as marketing. | Highest |
Launch Roadmap Improvements
On February 26, 2026, Hotta Studio published a launch roadmap summarizing the changes planned for the April 29, 2026 global release. The roadmap directly addresses lip syncing as one of the most complained-about aspects of the Co-Ex Test beta and groups it together with several other character presentation upgrades. The developers committed to landing these fixes before launch rather than pushing them into a post-release patch.
| Area | Planned Improvement |
|---|---|
| Lip syncing | Tighter alignment between mouth animations and spoken dialogue, targeted at all three voice tracks rather than any single language. |
| Hair and facial models | Higher-quality hair rendering and improved facial geometry, which gives the lip sync driver more blend shapes to work with. |
| Base animations | Refined idle, walking, and general body movement. Helps scenes feel less stiff during dialogue. |
| Combat animations | Reworked attack, skill, and reaction animations. Improves the combat feel commented on by beta players. |
| Localization | Quality improvements to translated text, which indirectly helps lip sync because better pacing of localized lines reduces the amount of audio the mouth has to chase. |
| Ray tracing and Lumen | Fixes for ghosting artifacts in the rendering pipeline, which were especially distracting during close-up conversations. |
| Mobile performance | Dedicated animation optimization for phones, so mobile players are not forced to disable the new facial detail to keep a stable frame rate. |
The collective goal is to lift character presentation to a level that matches the voice acting, rather than shipping strong performances alongside weak mouth animation.
Connection to Stories From Eibon
Stories From Eibon is a series of in-engine short films that Hotta Studio has been releasing on the official NTE YouTube channel ahead of launch. The first entry dropped in November 2024 and mixes cinematic cutscene footage with stylized cartoon-like sequences, introducing the antique shop Eibon, its regular customers, and the supernatural elements lurking behind everyday city life.
These shorts matter to the lip sync discussion for two reasons. First, they are produced with the same character rigs and animation pipeline as the in-game cutscenes, so their quality is an honest preview of what the main story should look like. Second, they are edited by Hotta Studio as marketing pieces, which means they are polished past the point where any obvious lip sync desync would be allowed to ship. Comparing the Stories From Eibon shorts with the Co-Ex Test beta build gave players a clear picture of the gap the launch roadmap was trying to close.
By launch, the expectation is that routine story cutscenes will approach the fidelity seen in the shorts, with the soundtrack and voice performances sitting on top of mouth animation that holds up in close shots rather than breaking the illusion.
Tips for Choosing a Voice Language
Players who want the cleanest lip sync experience have a few practical options. None of these are enforced by the game, but they help reduce the chance of noticing a sync issue during the most important story moments.
| Tip | Why It Helps |
|---|---|
| Try the Chinese (Mandarin) track during key story beats | The original production language is usually animated and storyboarded first, so mouth shapes tend to line up most tightly with Mandarin audio. |
| Switch voice tracks mid-session if something feels off | The language can be changed at any time from the settings menu without losing progress, so there is no penalty for experimenting across chapters. |
| Raise the character detail preset | Higher character detail presets expose more facial blend shapes, which gives the lip sync system more precision to work with. |
| Disable aggressive frame generation during cutscenes | Frame interpolation can smear mouth shapes between keyframes and create the impression of lip sync drift even when the underlying animation is correct. |
| Compare the same scene across languages | If one track feels loose, another may be tighter for that specific scene. Players who are sensitive to lip sync often keep two tracks downloaded and switch per chapter. |
| Rewatch Stories From Eibon for reference | The short films show the target fidelity Hotta Studio is aiming for. If in-game cutscenes fall noticeably short, the gap is a known issue rather than a hardware or settings problem. |
Lip syncing will almost certainly continue to improve across post-launch patches. The February 2026 roadmap treats it as a visible priority rather than a nice-to-have, and ongoing updates are expected as Hotta Studio collects telemetry from the full global player base rather than the limited Co-Ex Test audience.