‘Things are moving really, really fast’: Inside a lab researching AI

Boston, Massachusetts: Listen up y’all, I got a story to tell/ About a technology that’s advancing like hell/ It’s called AI, and it’s changing the game/ But there’s danger in its rise, and it’s not just fame.

The lyrical introduction you’ve just read wasn’t written by me, despite being under my byline.

ChatGPT is a language-processing AI model capable of generating human-like text, such as essays. Credit: iStock

It was generated by ChatGPT, the latest artificial intelligence program, which can help you write essays, answer questions from a basic prompt, or even produce complex code.

To test the technology, I simply signed up to the service online and entered the following prompt: “Write an introduction for an article on the dangers of artificial intelligence, in the style of a rapper.”

The fact this answer was produced in a matter of seconds is emblematic of both the risk and reward of AI, which is evolving so quickly even its biggest proponents admit they’re surprised.

MIT-IBM AI Lab director David Cox. Credit: IBM

“As a field, things are moving really, really fast,” said David Cox, the IBM director of the MIT-IBM Watson AI Lab in Boston, during a tour of the facility this week.

“We’d love to harness these technologies and we’re working on that for our customers, but there’s an important step, which is: these things have to be trusted.”

Almost seven decades after artificial intelligence was defined as a field of research in computer science at a Dartmouth College conference in the summer of 1956, AI is everywhere.

It can help predict cancer or fight crime. It can personalise our shopping or assist us in getting from one place to another. It can even tackle global challenges. Last week, for instance, the US and the European Union signed an administrative arrangement to bring together experts from across both regions to drive advancements in five key areas: extreme weather and climate forecasting; emergency response management; health and medicine; electric grids; and agriculture optimisation.

But with the rise of AI comes inevitable risks.

Deepfake software – which allows people to swap faces, voices and other characteristics – has been used to create everything from phoney pornographic films involving real-life celebrities to fake speeches by important politicians.

Elon Musk’s Tesla vehicles featuring Autopilot functions were involved in 273 known crashes in the 12 months to June last year, according to data from the US National Highway Traffic Safety Administration.

And then there’s ChatGPT, which has polarised consumers ever since it was launched by San Francisco company OpenAI in late November.

Two months after it took the world by storm, numerous schools and colleges have already banned the speedy text generator amid concerns that students will use it to cheat or plagiarise.

In Australia, award-winning musician Nick Cave lamented the technology after someone used it to write a song in his style, dismissing the result as “bullshit” and “a grotesque mockery of what it is to be human”.

Nick Cave performing at the Concert Hall, Sydney Opera House, in December. Credit: James Brickwood

And across the globe, many workers now face serious disruption as machine learning promises to do their work – faster and cheaper.

Indeed, in an ominous sign for journalists, digital media outlet BuzzFeed – which is in the process of trimming its workforce by 12 per cent as part of a cost-cutting strategy – last week announced it would use technology from ChatGPT’s creator, OpenAI, to generate content for its website. Its shares more than doubled after the AI-centric plans were reported.

“I think all of us are looking at our jobs and saying: hmm, this is interesting,” Cox said of the potential for job displacement.

“It’s unlikely that AI is wholesale going to take over jobs… but what it will do is start picking off some of the skills. This isn’t new – but it’s changing the balance.”

The MIT-IBM Watson AI Lab is an industry-academic collaboration between IBM and the Massachusetts Institute of Technology focused on artificial intelligence research.

It began in 2017 with a 10-year, $US240 million investment, six years after IBM’s supercomputer Watson – named after the company’s long-serving chief Thomas Watson – made its debut on the quiz show Jeopardy!, beating two of the show’s most successful players.

Among its many projects, the Lab is now working on making AI fairer and on advancing what are known as foundation models, which are trained on broad sets of unlabelled data and can be adapted to different tasks. In a collaboration announced this week, IBM will partner with NASA, using its AI technology to draw more information from the agency’s large datasets. The aim is to make it easier for researchers to advance their scientific understanding of Earth and respond to climate change.

Yet, despite the advances, challenges remain, particularly as AI technologies evolve.

Cox cites the example of Galactica, the language model Meta was forced to take down within days because it was generating “authoritative sounding” but inaccurate literature. Another concern is the potential for scientific misinformation, as seen during the global pandemic.

“People are putting out articles that aren’t peer reviewed yet, that have inflammatory and anti-vaccine claims. Now, potentially, we have a tool that can generate those automatically, or you can have an interactive conversation with someone where you try to convince them something that isn’t true,” Cox says.

“In some way that’s a failing of the technology, which isn’t perfect and doesn’t always produce correct answers, but I would say the bigger problem is more on human conceptualisation and the use of technology.”

The US Congress has been slow to react when it comes to AI, and politicians acknowledge that it would be near impossible to regulate each specific use of the technology.

However, in a sign that some things may be shifting, Republican Speaker Kevin McCarthy told reporters last week that all members of the House Intelligence Committee would take courses in AI and quantum computing – the same training military generals receive.

“We want to be able to speak of making sure our country and the national security is protected,” he said.

Meanwhile, House Democrat Ted Lieu – who is one of a handful of members with a computer science background – has called for Congress to establish a nonpartisan federal commission that would provide recommendations about how to oversee AI.

The California-based politician also recently introduced a bill that, if passed, would direct the House of Representatives to examine the technology. And true to the times, it was the first piece of federal legislation ever to be written by ChatGPT.

  • The author travelled to the MIT-IBM Watson AI Lab courtesy of IBM.
