I don’t know, some of these latest AI developments are starting to freak me out a little bit.

Among the various visual AI generators, which can create entirely new artworks from simple text prompts, and the advancing text AI generators, which can write (sometimes) credible articles based on a range of web-sourced inputs, there are some concerning trends emerging, from both a legal and an ethical standpoint, which our current laws and structures simply aren't built to deal with.

It feels like AI development is accelerating faster than we can feasibly manage it – and then Meta shares its latest update, an AI system that can use strategic reasoning and natural language to solve problems put before it.

As explained by Meta:

“CICERO is the first artificial intelligence agent to achieve human-level performance in the popular strategy game Diplomacy. Diplomacy has been viewed as a nearly impossible challenge in AI because it requires players to understand people’s motivations and perspectives, make complex plans and adjust strategies, and use language to convince people to form alliances.”

But now, they’ve solved this. So there’s that.

Also:

“While CICERO is only capable of playing Diplomacy, the technology behind it is relevant to many other applications. For example, current AI assistants can complete simple question-answer tasks, like telling you the weather — but what if they could hold a long-term conversation with the goal of teaching you a new skill?”

Nah, that’s good, that’s what we want, AI systems that can think independently, and influence real people’s behavior. Sounds good, no concerns. No problems here.

And then @nearcyan posts a prediction about ‘DeepCloning’, which could, in the future, see people creating AI-powered clones of real people they want to build a relationship with.

DeepCloning, the practice of creating virtual AI clones of humans to replace them socially, has been surging in popularity

Does this new AI trend go too far by replicating partners and friends without consent?

This court case may help to clarify the legality (2024, NYT) pic.twitter.com/7OvtzSbLLl

— nearcyan (@nearcyan) November 20, 2022

Yeah, there’s some freaky stuff going on, and it’s gaining momentum, which could push us into very challenging territory in a range of ways.

But it’s happening, and Meta is at the forefront – and if Meta’s able to make its Metaverse vision come to life as it expects, we could all be confronted with a lot more AI-generated elements in the very near future.

So much so that you won’t know what’s real and what isn’t. Which should be fine, should be all good.

Not really concerned at all.
