Artificial intelligence: “Time is running out to subdue AI’s overwhelming power”
Bleich’s whimsy sharply illustrates the education system’s dilemma. There is no longer any way for a teacher or lecturer to be confident that students’ work is their own – existing plagiarism checkers have no defence against ChatGPT.
So far, three Australian states have blocked its use on school internet networks – NSW, Queensland and Tasmania – and so have many other school authorities internationally. But they cannot block it everywhere. As Elon Musk remarked in response to its arrival: “Goodbye homework.”
Jeremy Weinstein, a professor at Stanford University in the heart of Silicon Valley and co-author of System Error: Where Big Tech Went Wrong and How We Can Reboot, points out that the maker of ChatGPT – a San Francisco firm called OpenAI – “is only one company and there are dozens of companies developing these large language models”.
Weinstein says that “it’s obviously a revolution” and that “like many of the technological advances before it, the world is going to be completely different” as a result.
In an anonymous survey of about 4,500 Stanford students conducted this month by a campus newspaper, The Stanford Daily, 17 percent said they had used ChatGPT on their final exams and assignments, even where it violated ethics codes.
“One of the costs that this brings is on teachers and the education system — we’re in the moment where teachers and school districts are overwhelmed,” Weinstein tells me. “Are we approaching this new moment with concern for possible harm? We are absolutely not.”
It should be possible to integrate a program like ChatGPT into teaching, just as the humble calculator was eventually integrated into the teaching of mathematics. But schools, companies and regulators are unprepared, Weinstein says: “Do any companies or governments have the infrastructure to allow the benefits of this technology and mitigate its potential harm? We don’t have standards or codes in companies, and we have a race between disruption and democracy – and democracy always loses.”
The world is in a “seat belt moment” with machine learning, as the auto industry was when that basic safety device was forced on it in the 1960s and ’70s – but so far no one is installing the seat belts. “Government is largely absent from the regulatory landscape across the board in technology,” Weinstein says. “In AI, we’re reliant on self-regulation. It puts things like platform moderation in the hands of a single individual, which is very uncomfortable for a lot of people” – a reference to Musk’s control of Twitter.
It also creates perverse outcomes, such as the hiring system Amazon built. The machine-learning program ingested all of Amazon’s existing data on its hiring practices and applied it to new job applicants. The result was a bot that systematically discriminated against women. Unable to be fixed, the bot had to be scrapped.
This is one of the limitations of machine learning: it learns only from the data it is trained on. So ChatGPT can produce impressively broad and fast answers drawn from the internet, but it is only as accurate as what’s on the internet. And we all know how accurate that is. Caveat emptor.
The dilemma posed by ChatGPT extends far beyond education. “There will be a lot of anxiety over the fact [that artificial intelligence is] targeting white-collar jobs,” Bill Gates predicted during a visit to Sydney last week.
It was already targeting blue-collar jobs, as Bleich well knows. Barack Obama’s ambassador to Australia from 2009 to 2013 is now the chief legal officer for Cruise, a company that already has a hundred driverless taxis on the streets of San Francisco offering rides to the public.
Driverless vehicles have yet to be perfected, but they already have a better safety record than humans behind the wheel. The implications are clear for the millions of people who make a living as delivery drivers, couriers, truckers, taxi drivers and Uber drivers.
The release of ChatGPT now sends a chill through the complacent set. Lawyers, doctors, journalists, academics all face the prospect of serious disruption as machine learning promises to do some of their work faster and at almost zero cost. Millions more jobs face disruption.
One member of the US Congress, Ted Lieu, a Democrat from California with a degree in computer science, says he is “freaked out” by AI. He proposes a federal commission to consider how to regulate it, which he hopes will eventually lead to something like a US Food and Drug Administration for AI.
Weinstein agrees that this is the kind of ambition that is needed. He says Australia’s regulators can play an important role: “I think we are in a moment of regulatory experimentation. For this reason, even though small markets cannot influence the extraterritorial behaviour of large technology companies, they can experiment with new policy and regulatory approaches. That’s of great value.”
As for Oscar, the sonnet failed to change his ways, says Bleich. “But the bell we were finally able to get around his neck seemed to do the trick.” A constant reminder that there are some things machine learning can’t do. Still.