Florida-based AI startup NPCx recently made the news after securing the lead investor for its $3 million seed round. That investor is none other than Kakao Investment, a branch of Kakao Games, known to gamers as the former Western publisher of Black Desert Online and the current publisher of XLGames’ ArcheAge, including its hotly anticipated MMO sequel.

NPCx is composed of computer engineers, data scientists, artists, and animators. The startup has a suite of products dedicated to animating non-player characters (NPCs) at a lower cost, though its key product, BehaviorX, will go one step further, allowing the company to create Drivatar-like player clones for a variety of uses.

To discuss NPCx and its AI-based products, I had a nice, long chat with co-founder and CEO Cameron Madani. You can find the full conversation (edited for clarity) below.

Cameron, can you talk about yourself and the company?

Sure. Let me give you a real quick background. There are three founders: me, Michael Puscar, and Alberto Menache.

I’ve been in video game development since 2000, working on a number of titles in corporate development and business development. In 2011, I made my first attempt at starting my own game studio.

We worked for Microsoft on a title called Torchlight for the Xbox. It did not do very well, so I decided to focus more on the service side of the business.

In 2014, I started a motion capture and animation studio called Motion Burner. Then I realized there are a lot of repetitive processes, and it’s very, very time- and cost-intensive. Some of it is also very manual, so it’s not very fun or creative, especially when you talk about motion capture.

I met Michael through a friend. Michael is a serial entrepreneur. He’s a computer engineer, but he became a data scientist. I asked him if there was anything in machine learning or artificial intelligence that could help this very costly process of creating animation with motion capture.

We came up with a hypothesis, we tested it, and we launched the company in February 2020, right before the pandemic. We came up with a product roadmap, and then somebody came to us asking us to build a prototype of a motion-matching product. Ubisoft is one of the pioneers in the area. As it happens, a lot of game companies hold onto their own tech; they don’t share it. They want to use it for their own purposes. So, we more or less created a system similar to Ubisoft’s as an MVP, which helped provide our initial seed capital. Then, we launched a fundraising campaign on Republic.com, which was great. We raised half a million dollars and were able to start developing the product roadmap.

The reason we took that first client was to secure the initial capital, but that project wasn’t part of our roadmap, which is about building tools to automate the animation pipeline for the game industry, movie industry, XR, and the metaverse.

As for the name, NPCs are one of the most important components of a game. We wanted to focus primarily on making non-player characters realistic, not only in the way they move but in the way they act, too.

Would you classify NPCx technology as belonging to the generative AI category?

This is a good question. There is a lot of attention on basic generative AI, the ability to create these language models, and generative animation systems. There’s a company called Inworld AI. I think they’re very, very interesting and compelling, but they are basically pairing generative AI with animation. It’s a bit trivial, in my opinion. Our system uses unique neural networks. We’re more focused on the physics and the body.

Two of our developers are biomechanical engineers with experience in robotics. All of our data scientists are physicists. Some NPCx tech uses generative AI techniques, but some do not; some are biomechanical models. The generative aspect is not as generic as an LLM.

But it’s still based on a neural network model, right?

Yes. We actually have several types of neural networks.

Okay. There are four branches in the NPCx tech stack. Why don’t you go through them?

Sure. AimX is actually the MVP that we originally created for that first client. It’s the motion-matching system. We’re keeping it because that stack is going to be used for BehaviorX.

But the reason Michael and I started the company initially was TrackerX. What we do with that is take motion capture right off the stage. The best systems in the world can clean up about 90% of the problems. Usually, the markers get blocked: if two people are standing close together, the chest markers can disappear; the feet markers can disappear if they’re jumping on a mat, et cetera. It takes a team of 20 or 30 people to clean up the motion capture for an average triple-A title.

What we said is we can train neural networks to see what the data looks like right off the stage, and then we show them what a human did after that, the cleaning. Our hypothesis was we could teach neural networks to clean the data with a physics-based engine. It took about two years and $750K in capital to finally do that.
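NPCx has not published TrackerX’s architecture, but the setup Madani describes is classic supervised learning: show a network raw takes with occluded markers alongside the human-cleaned versions, and let it learn the correction. A minimal PyTorch-style sketch of that idea, with the marker count, window size, and network shape all being assumptions for the example, might look like this:

```python
# Illustrative sketch only: NPCx has not published TrackerX's design.
# The idea: learn to map raw mocap frames (with dropped/occluded markers)
# to the human-cleaned version of the same frames.
import torch
import torch.nn as nn

N_MARKERS = 53               # typical optical mocap marker count (assumption)
FRAME_DIM = N_MARKERS * 3    # x, y, z per marker

class MarkerCleanupNet(nn.Module):
    """Maps a window of raw marker frames to the corrected center frame."""
    def __init__(self, window: int = 9, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(window * FRAME_DIM, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, FRAME_DIM),
        )

    def forward(self, raw_window: torch.Tensor) -> torch.Tensor:
        # raw_window: (batch, window, FRAME_DIM); occluded markers zeroed out
        return self.net(raw_window.flatten(1))

model = MarkerCleanupNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# raw: capture straight off the stage (with marker dropouts);
# clean: the same frames after a human artist fixed them.
raw = torch.randn(32, 9, FRAME_DIM)    # stand-in batch of training pairs
clean = torch.randn(32, FRAME_DIM)

pred = model(raw)
loss = loss_fn(pred, clean)    # a production system would also add the
opt.zero_grad()                # physics constraints Madani mentions, e.g.
loss.backward()                # bone lengths and joint limits
opt.step()
```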

We launched the first part of this in March, but there’s definitely resistance in the industry, and we didn’t foresee this. If we go to the head of motion capture, they’ll say, “I would have to let go of three-quarters of my team with this.” It’s so cost-competitive that we have to jump past that decision-maker to the executive level.

It is an interesting conundrum, for sure. The main benefit of this NPCx technology would be to allow for much cheaper motion-captured animations, right?

You can create more animation with the same budget, or you can create the same amount of animation as before at a significantly reduced cost.

Can you talk about implementation? Do you have a plugin for Unreal Engine or Unity? How easy is it for developers to implement NPCx tech?

Basically, they give us the files in their format. We take whatever format they give us, we convert it into ours, we process it, and then we return it in the same format they gave us.
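In other words, the service works as a format round trip rather than an engine plugin. A hypothetical Python sketch of that workflow, with every function name invented for illustration (NPCx has not published its API), might look like this:

```python
# Hypothetical sketch of the round-trip workflow described above.
from pathlib import Path

def convert_to_internal(path: Path) -> dict:
    # Stub parser: a real one would read BVH/FBX/etc. into marker frames.
    return {"format": path.suffix, "frames": []}

def run_cleanup_model(take: dict) -> dict:
    # Stub for the neural cleanup pass described earlier in the interview.
    return take

def convert_back(take: dict, out_path: Path) -> Path:
    # Stub writer: a real one would serialize to the client's original format.
    out_path.write_text(str(take))
    return out_path

def process_take(client_file: Path, out_dir: Path) -> Path:
    take = convert_to_internal(client_file)       # their format -> ours
    cleaned = run_cleanup_model(take)             # process
    return convert_back(cleaned, out_dir / client_file.name)  # ours -> theirs
```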

The one I’m most interested in is definitely BehaviorX. How does it work?

Well, BehaviorX does require some integration into a game. At its core, it’s about observing players in games and creating behavioral clones of them. There has to be an API tie-in because we have to take over the animation systems.

There’s also an opt-in on the player level because we’re observing players. We don’t want to just go in and observe players without their knowledge. There’s the trust factor and all the legal implications of cloning players and so forth.

But the way it works, simply, is that we first map the entire available universe of options on a level. I always use the Zelda example. Let’s say you’re in a Zelda dungeon, right?

There’s a certain number of environmental tiles, a certain amount of space within the environment, and there are certain objectives, enemies, doors, and traps. We want to understand all the things that are in that level, and then we observe how a player plays.

Let’s say you’re a level one player. You have a certain weapon, armor, potions, magic, or whatever. You have your health. How much do you have in your backpack? What have you achieved?

Based on what you do, what you have, and your condition, we can recreate that in the form of a clone. Let’s say there’s a boss on a level and you’re clearly underpowered: your weapon is not very strong, you’re in poor health, and you charge that boss. We can determine that you’re a risk-taker.

Likewise, if you have strong weapons and armor, full health, and you come up against a medium or a low boss and you choose to avoid them to focus on exploration instead, then we can determine that’s your play style. With that, we can build your behavioral clone and emulate what you would do on another level. Now, the implications there are pretty big.
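To make the observation step concrete, here is a toy sketch of turning an encounter like the ones Madani describes into a play-style label. Every name, feature, and threshold below is an assumption made for the example; NPCx has not disclosed how BehaviorX actually represents behavior.

```python
# Toy sketch: summarize one observed encounter into a play-style label.
from dataclasses import dataclass

@dataclass
class PlayerState:
    level: int
    weapon_power: float   # 0..1, relative to the player's level (assumption)
    health: float         # 0..1 fraction of max health
    potions: int

@dataclass
class Encounter:
    boss_power: float     # 0..1, relative to the player (assumption)
    engaged: bool         # did the player fight, or avoid the fight?

def classify_play_style(state: PlayerState, enc: Encounter) -> str:
    underpowered = state.weapon_power < 0.4 or state.health < 0.3
    overpowered = state.weapon_power > 0.7 and state.health > 0.8
    if enc.engaged and underpowered:
        return "risk_taker"   # charged a boss while weak
    if not enc.engaged and overpowered and enc.boss_power < 0.5:
        return "explorer"     # skipped an easy fight to keep exploring
    return "balanced"

# Example: a level-one player with a weak weapon charging a strong boss.
print(classify_play_style(
    PlayerState(level=1, weapon_power=0.2, health=0.25, potions=0),
    Encounter(boss_power=0.9, engaged=True),
))  # -> "risk_taker"
```

A real system would aggregate many such observations into a model that can then act in the player’s place on a new level, which is the cloning step Madani describes next.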

Yeah. It would have to be opt-in, as I’m sure some gamers wouldn’t like being cloned.

Absolutely. Part of our brand is to be open and ethical because the potential implications are pretty sinister. A lot of people are being observed in what they do online, on social media and things like that, and that’s being exploited. We have an opt-in system because we think it’s the right thing to do. But we’re also doing it from a risk-management point of view, to avoid lawsuits in the future.

Here’s an interesting thing that happened to us. At the Game Developers Conference trade show last March, people were starting rumors that we’re using our clients’ motion capture data to create our models. People are trying to sabotage us because they’re scared of us. We purposely tell each client within our contract that if they give us motion capture data for TrackerX so we can clean it up, we will not use that data to train our generalized models for other customers.

Specifically in regard to BehaviorX, I believe you said you worked with Microsoft. As you might already know, they’ve been using Drivatars in Forza Motorsport games for a long time (though the new game released last month has dropped the concept). I was wondering how similar that is to the technology you’ve developed at NPCx.

It is a similar approach. But that’s the thing: there are a lot of interesting things in the industry. The big companies build something and they hold onto it. The small companies don’t have the budgets to build something like this. We fill that gap.

Our technology can certainly serve AAA studios other than Microsoft that are interested in this kind of tech, so they don’t have to hire a team to build it. And then you have the small companies that certainly don’t have the budget or resources to create it. So we fit that unique position where we’re building tech similar to what the largest companies have.

You could have a clone of yourself out there. Eventually, it would get better and better, because we would get a base clone, gather more data, prune it, and keep observing. One application is co-op. If you’re used to playing a four-person format with three of your friends, but only two are available, you could grab the missing friend’s clone to play with you.

You could also pay to get the clone of a top-ranked esports player to play with you. We could even create a composite of clones: we might take a little bit of this person’s clone and a little bit of that person’s clone to create an archetype, like a sniper or a thief.
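One way to picture the compositing idea is as a weighted blend of behavioral feature vectors. The sketch below is purely illustrative; the features and weights are assumptions for the example, not NPCx’s actual representation.

```python
# Illustrative sketch: blend two players' behavioral features into an archetype.
import numpy as np

# Hypothetical behavioral features: [aggression, caution, exploration]
player_a = np.array([0.2, 0.9, 0.3])   # patient, careful player
player_b = np.array([0.3, 0.4, 0.9])   # curious, roaming player

# A "sniper" archetype weighted mostly toward player A's caution.
archetype = 0.7 * player_a + 0.3 * player_b
print(archetype)  # blended feature vector that would drive the composite clone
```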

Our technology can also go well beyond video games. We can put sensors on humans and see what they do in real-world situations. It could be applied to workers, the military, education, and so forth.

BehaviorX is still under development, correct?

Yes. We’re starting with 2D games, like puzzle games, and then we’ll learn from those and see if we can create the model.

Do you have an internal estimate at NPCx for when this technology will be ready?

We’re targeting early 2025. That’s when we plan to have an early version of BehaviorX in one of the titles made by our investors. From there, we’ll move on to 3D.

Thank you for your time.


