# Lip Sync System
## Overview

The lip sync system in Neverness to Everness controls how character mouth movements align with spoken dialogue during cutscenes, story sequences, and character interactions. Lip syncing was one of the most discussed aspects of the Co-Ex Test beta, with players identifying it as a major area for improvement. Hotta Studio acknowledged the feedback and confirmed lip sync improvements as a priority in the official launch roadmap.

## Beta Feedback

During the Co-Ex Test beta in February 2026, players noticed that character lip movements frequently did not match the spoken audio. The desynchronization was particularly noticeable during story cutscenes and character dialogue sequences. Community feedback across forums and social media highlighted lip syncing as one of the most complained-about aspects of the beta experience.

The issue affected multiple languages, suggesting a systemic problem with the lip sync technology rather than a localization-specific bug. Reviewers noted that while the voice acting quality was strong, the visual disconnect between audio and mouth movement broke immersion during narrative moments.

## Planned Improvements

The launch roadmap published in February 2026 lists lip sync improvements alongside several other character presentation upgrades:

| Improvement Area | Details |
| --- | --- |
| Lip Syncing | Improved synchronization between mouth animations and spoken dialogue across all languages |
| Hair and Facial Models | Enhanced detail in character hair rendering and facial geometry |
| Base Animations | Improved idle, walking, and general character animations |
| Combat Animations | Refined attack, skill, and reaction animations during battle |
| Localizations | Translation quality improvements for non-English text |

These improvements are grouped under character presentation as a collective priority.
The developers have committed to addressing all of these areas before the April 29, 2026 global launch.

## Technical Challenges

Lip syncing in a multi-language game presents unique challenges. Neverness to Everness supports multiple voice-over languages, and each language produces different mouth shapes (visemes) at different timings. A robust lip sync system must either generate per-language mouth animations or use a procedural system that adapts to audio input in real time.

Modern approaches to lip sync in games typically fall into one of several categories:

| Approach | Details |
| --- | --- |
| Pre-baked animations | Mouth movements are hand-animated or motion-captured for each line of dialogue in each language. High quality but extremely time-intensive for large scripts. |
| Procedural / phoneme-based | The engine analyzes audio waveforms or text input to generate mouth shapes in real time. More scalable but can appear generic. |
| AI-driven | Machine learning models predict appropriate visemes from audio input. Increasingly common in 2025-2026, offering a balance of quality and scalability. |

Hotta Studio has not disclosed which approach NTE uses, but the scope of the improvements suggests a significant rework of the existing system rather than minor tweaks.

## Impact on Story Experience

Neverness to Everness places heavy emphasis on its narrative. The main story follows the player as an Appraiser working out of the antique shop Eibon, and character relationships are developed through extensive dialogue sequences. Accurate lip sync is critical for these moments, as players spend significant time watching characters speak during story chapters, companion quests, and daily life interactions.

The quality of lip sync also affects the perceived production value of the game.
As a free-to-play title competing with other high-budget anime RPGs, NTE relies on strong facial animation to stand out in character presentation.

## Current Status

Lip sync improvements are actively being worked on as of the latest developer communications. The launch roadmap positions this as a pre-launch priority, meaning the improvements should be visible in the April 29, 2026 release build. Players who experienced the Co-Ex Test beta should expect noticeably improved lip synchronization at launch.

## Supported Voice Languages

Neverness to Everness ships with three fully recorded voice tracks at global launch: English, Japanese, and Chinese (Mandarin). Each language is handled by a dedicated cast of voice actors and is bundled with its own dialogue audio files, meaning the lip sync engine has to drive mouth animations from three different phonetic streams. Players can swap between tracks at any time through the in-game settings without restarting the session, and the voice language is independent from the subtitle language, so mixing a Japanese voice track with English subtitles is fully supported.

Because the same scene can be replayed in any of the three languages, the system cannot rely on a single set of pre-baked mouth animations. The lip sync pipeline must either generate a separate animation track per language during production or react to the currently loaded audio clip at runtime. This is one of the main reasons early beta builds struggled: a shared animation track works for only one language at a time, and the other two will drift.

| Language | Track | Availability | Notes |
| --- | --- | --- | --- |
| English | Full dub | Available at launch | Covers main story, character stories, and the majority of side content. Localized script is paced to match mouth movement where possible. |
| Japanese | Full dub | Available at launch | Recorded with a Japanese cast sourced through the same industry circles as other anime-style RPGs. Popular default for players who prefer seiyuu performances. |
| Chinese (Mandarin) | Full dub | Available at launch | Original production language for Hotta Studio. Often the most tightly synced track because the script, storyboards, and animations start from this version. |
| Korean | Text only | Not confirmed for launch | Korean localization covers the UI and subtitles, but a Korean voice dub has not been confirmed by Hotta Studio or Perfect World as of the launch roadmap. |

## Technical Implementation

Neverness to Everness is built on Unreal Engine 5, which gives Hotta Studio access to the engine's character rigging, morph target, and animation blueprint systems. Unreal's standard workflow exposes a set of mouth blend shapes on each character head mesh, and the lip sync component chooses which shapes to activate based on the audio that is currently playing. This is the same general pipeline used by most modern Unreal titles, though the specific driver (hand-authored animation, phoneme detection, or an audio-to-viseme neural network) is a choice each studio makes.

Hotta Studio has not publicly confirmed which approach NTE uses at launch, but the scope of the improvements listed in the launch roadmap suggests a meaningful rework rather than a pass of minor tuning. The roadmap groups lip syncing with hair models, facial models, base animations, and combat animations, which points to a coordinated overhaul of the entire character presentation layer rather than isolated fixes.

Viseme accuracy also depends on the underlying facial rig. A head model with very few mouth blend shapes simply cannot form the range of shapes that real speech requires, no matter how smart the driver is.
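The relationship between per-language phoneme timing and blend-shape weights can be illustrated with a toy driver. This is a sketch under stated assumptions, not NTE's implementation: Hotta Studio has not disclosed its pipeline, and the phoneme table, timing values, and blend-shape names below are invented for the example. It shows why each voice track needs its own timing list and how a driver turns those timings into morph-target weights.

```python
# Illustrative lip sync driver sketch. All names and values are
# hypothetical; a real pipeline would get phoneme timings from an
# audio-analysis tool, not hand-written lists.
from dataclasses import dataclass

# Tiny phoneme-to-viseme table. Real tables cover dozens of phonemes
# and differ between English, Japanese, and Mandarin.
PHONEME_TO_VISEME = {
    "AA": "jaw_open",     # as in "father"
    "IY": "wide",         # as in "see"
    "UW": "pucker",       # as in "you"
    "M":  "lips_closed",
    "F":  "lip_bite",
}

@dataclass
class TimedPhoneme:
    phoneme: str
    start: float  # seconds into the audio clip
    end: float

def viseme_weights(phonemes, t, fade=0.04):
    """Return blend-shape weights (0..1) at time t, with a short
    linear fade in/out so mouth shapes do not pop between frames."""
    weights = {}
    for p in phonemes:
        shape = PHONEME_TO_VISEME.get(p.phoneme)
        if shape is None or not (p.start - fade) <= t <= (p.end + fade):
            continue
        # Ramp up over `fade` seconds, hold at 1.0, then ramp down.
        w = min(1.0,
                (t - (p.start - fade)) / fade,
                ((p.end + fade) - t) / fade)
        weights[shape] = max(weights.get(shape, 0.0), w)
    return weights

# One language's timing track for a short line ("mom"). A
# multi-language game keeps one such track per voice language.
track_en = [TimedPhoneme("M", 0.00, 0.08),
            TimedPhoneme("AA", 0.08, 0.30),
            TimedPhoneme("M", 0.30, 0.40)]

print(viseme_weights(track_en, 0.20))  # {'jaw_open': 1.0}
```

A shared timing track reused across languages is exactly the failure mode described above: the driver would animate English mouth shapes while Japanese or Mandarin audio plays, and the two would visibly drift apart.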
The hair and facial model upgrades mentioned in the roadmap are therefore directly connected to the lip sync improvements: better topology and more blend shapes give the lip sync system more to work with.

## Application Areas

The lip sync system runs in every context where a character speaks on screen. The quality bar differs between these contexts, with cinematic cutscenes receiving the most attention and ambient chatter relying on a lighter pass.

| Context | Description | Sync Priority |
| --- | --- | --- |
| Main story cutscenes | Scripted narrative beats where characters deliver critical plot information. These use the most polished facial animation and the tightest lip sync. | Highest |
| Character stories and side quests | Personal story arcs for each playable character and the longer side quest chains found in the open world. Fully voiced in all three languages. | High |
| Character tutorials | Short introductory sequences that explain a character's kit and background. Short enough that reviewers tend to notice lip sync issues immediately. | High |
| Affinity system interactions | Bonding dialogues unlocked as the player builds relationships with specific characters. Often delivered in quiet close-up shots where mouth movement is very visible. | High |
| Social interactions and daily life | NPC conversations in shops, on the street, and in apartments. A large volume of short lines that need to be synced without exhausting the animation budget. | Medium |
| Commissions and open-world barks | Short spoken lines triggered during combat, exploration, or mini-events. Usually a few words long, so minor drift is less noticeable. | Medium |
| Stories From Eibon | In-engine short films released by Hotta Studio on YouTube. These mix cinematic cutscenes with stylized sequences and are held to a high lip sync standard because they function as marketing. | Highest |

## Launch Roadmap Improvements

On February 26, 2026, Hotta Studio published a launch roadmap summarizing the changes planned for the April 29, 2026 global release.
The roadmap directly addresses lip syncing as one of the most complained-about aspects of the Co-Ex Test beta and groups it with several other character presentation upgrades. The developers committed to landing these fixes before launch rather than pushing them into a post-release patch.

| Area | Planned Improvement |
| --- | --- |
| Lip syncing | Tighter alignment between mouth animations and spoken dialogue, targeted at all three voice tracks rather than any single language. |
| Hair and facial models | Higher-quality hair rendering and improved facial geometry, which gives the lip sync driver more blend shapes to work with. |
| Base animations | Refined idle, walking, and general body movement. Helps scenes feel less stiff during dialogue. |
| Combat animations | Reworked attack, skill, and reaction animations. Improves the combat feel commented on by beta players. |
| Localization | Quality improvements to translated text, which indirectly helps lip sync because better pacing of localized lines reduces the amount of audio the mouth has to chase. |
| Ray tracing and Lumen | Fixes for ghosting artifacts in the rendering pipeline, which were especially distracting during close-up conversations. |
| Mobile performance | Dedicated animation optimization for phones, so mobile players are not forced to disable the new facial detail to keep a stable frame rate. |

The collective goal is to lift character presentation to a level that matches the voice acting, rather than shipping strong performances alongside weak mouth animation.

## Connection to Stories From Eibon

Stories From Eibon is a series of in-engine short films that Hotta Studio has been releasing on the official NTE YouTube channel ahead of launch. The first entry dropped in November 2024 and mixes cinematic cutscene footage with stylized, cartoon-like sequences, introducing the antique shop Eibon, its regular customers, and the supernatural elements lurking behind everyday city life.

These shorts matter to the lip sync discussion for two reasons.
First, they are produced with the same character rigs and animation pipeline as the in-game cutscenes, so their quality is an honest preview of what the main story should look like. Second, they are edited by Hotta Studio as marketing pieces, which means they are polished past the point where any obvious lip sync desync would be allowed to ship. Comparing the Stories From Eibon shorts with the Co-Ex Test beta build gave players a clear picture of the gap the launch roadmap was trying to close.

By launch, the expectation is that routine story cutscenes will approach the fidelity seen in the shorts, with the soundtrack and voice performances sitting on top of mouth animation that holds up in close shots rather than breaking the illusion.

## Tips for Choosing a Voice Language

Players who want the cleanest lip sync experience have a few practical options. None of these are enforced by the game, but they help reduce the chance of noticing a sync issue during the most important story moments.

| Tip | Why It Helps |
| --- | --- |
| Try the Chinese (Mandarin) track during key story beats | The original production language is usually animated and storyboarded first, so mouth shapes tend to line up most tightly with Mandarin audio. |
| Switch voice tracks mid-session if something feels off | The language can be changed at any time from the settings menu without losing progress, so there is no penalty for experimenting across chapters. |
| Raise graphics settings on PC and PS5 | Higher character detail presets expose more facial blend shapes, which gives the lip sync system more precision to work with. |
| Disable aggressive frame generation during cutscenes | Frame interpolation can smear mouth shapes between keyframes and create the impression of lip sync drift even when the underlying animation is correct. |
| Compare the same scene across languages | If one track feels loose, another may be tighter for that specific scene. Players who are sensitive to lip sync often keep two tracks downloaded and switch per chapter. |
| Rewatch Stories From Eibon for reference | The short films show the target fidelity Hotta Studio is aiming for. If in-game cutscenes fall noticeably short, the gap is a known issue rather than a hardware or settings problem. |

Lip syncing will almost certainly continue to improve across post-launch patches. The February 2026 roadmap treats it as a visible priority rather than a nice-to-have, and ongoing updates are expected as Hotta Studio collects telemetry from the full global player base rather than the limited Co-Ex Test audience.