AI research is a trash fire and Google is holding the matches

The world of AI research is in shambles. From academics prioritizing easy-to-monetize programs over innovation, to the Silicon Valley elite using the threat of job loss to encourage business-friendly assumptions, the system is a broken mess.

And Google deserves the lion’s share of the blame.

How it started

There were approximately 85,000 research papers published worldwide on the topic of AI/ML in the year 2000. Fast forward to 2021, and nearly twice as many were published in the US alone.

To say the field has exploded would be a huge understatement. This influx of researchers and new ideas has made deep learning one of the most important technologies in the world.

Between 2014 and 2021, big tech all but abandoned its “web first” and “mobile first” principles in favor of “AI first” strategies.

Now, in 2022, AI developers and researchers are more in demand (and paid more) than almost any other tech job outside of the C suite.

But this kind of unfettered growth also has a dark side. In the race to meet market demand for deep learning-based products and services, the field has become as ruthless and capricious as professional sports.

Over the past few years, we’ve seen ‘GANfather’ Ian Goodfellow go from Google to Apple, Timnit Gebru and others get pushed out of Google over dissenting opinions about its research, and a virtual torrent of dubious AI papers somehow pass peer review.

The flood of talent that has arrived as a result of the explosion of deep learning has also resulted in a landslide of bad research, fraud, and corporate greed.

How it’s going

Google, more than any other company, bears responsibility for the modern AI paradigm. That means we need to give the big G full marks for bringing natural language processing and image recognition to the masses.

It also means we can credit Google with creating the researcher-eat-researcher environment in which some students and their big-tech partner professors treat research papers as little more than bait for venture capitalists and corporate headhunters.

At the top, Google has shown a willingness to hire the most talented researchers in the world. And it has also shown, time and time again, that it will fire them in the blink of an eye if they don’t toe the company line.

The company made headlines around the world when it fired Timnit Gebru, a researcher it had hired to help lead its AI ethics division, in December 2020. A few months later, it fired another team member, Margaret Mitchell.

Google argues the researchers’ work fell short of its standards, but the women and many of their supporters say the firings came only after they raised ethical concerns about research that the company’s head of AI, Jeff Dean, had signed off on.

Now, just over a year later, history is repeating itself. Google fired another world-renowned artificial intelligence researcher, Satrajit Chatterjee, after he led a team of scientists in challenging another paper Dean had signed off on.

The landslide effect

At the top, competition for high-paying jobs is fierce. And the hunt for the next talented researcher or developer begins earlier than ever.

Students pursuing graduate degrees in machine learning and AI who eventually want to work outside academia are expected to write or co-author research papers that demonstrate their talent.

Unfortunately, the pipeline from academia to big tech or the VC-led startup world is littered with shitty papers written by students whose sole focus is writing algorithms that can be monetized.

A quick Google Scholar search for “natural language processing,” for example, brings up nearly a million results. Most of the listed papers have hundreds or thousands of citations.

At first glance, this would indicate that NLP is a thriving subset of machine learning research that has captured the attention of researchers around the world.

In fact, searches for “artificial neural network,” “computer vision,” and “reinforcement learning” yield a similar glut of results.

Unfortunately, a significant portion of AI and ML research is either intentionally fraudulent or full of bad science.

The research paper, a format that may have served science well in the past, is rapidly becoming an obsolete mode of communicating research.

Stuart Ritchie, writing in the Guardian, recently asked whether we should do away with research papers altogether. According to him, the problems with science run quite deep:

This system comes with some big issues. Chief among them is the issue of publication bias: reviewers and editors are more likely to give a scientific paper a good write-up and publish it in their journal if it reports positive or exciting results. Scientists therefore go to great lengths to boost their studies, lean on their analyses to produce “better” results, and sometimes even commit fraud in order to impress these all-important gatekeepers. This radically distorts our view of what really happened.

The problem is that the gatekeepers everyone is trying to impress tend to hold the keys to students’ future employment and academics’ admission to prestigious journals or conferences – researchers fail to win their approval at their own peril.

And even if a paper manages to pass peer review, there’s no guarantee that the people in charge aren’t asleep at the switch.

This is why Guillaume Cabanac, lecturer in computer science at the University of Toulouse, created a project called Problematic Paper Screener (PPS).

The PPS uses automation to flag papers that contain potentially problematic code, math, or verbiage. In the spirit of science and fairness, Cabanac ensures that every flagged paper undergoes manual human review. But the job is likely too big for a handful of humans to handle in their spare time.

According to a report by Spectrum News, there are plenty of problematic papers. And the majority have to do with machine learning and AI:

The screener estimated about 7,650 studies to be problematic, including more than 6,000 flagged for tortured phrases. Most of the papers with tortured phrases appear to come from the fields of machine learning, artificial intelligence, and engineering.

Tortured phrases are odd turns of phrase that catch researchers’ attention because they describe an already well-established process or concept in unfamiliar words.

For example, the use of a term such as “fake neuron” in place of the standard “artificial neuron” could indicate a thesaurus plugin used by bad actors trying to get away with plagiarizing previous work.
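The core of this kind of automated flagging can be sketched as a simple dictionary lookup. The phrase list and scoring below are illustrative assumptions, not the Problematic Paper Screener’s actual dictionary or method:

```python
# Hypothetical sketch of tortured-phrase flagging, loosely inspired by the
# Problematic Paper Screener. The substitution list here is illustrative;
# the real screener uses a much larger, curated dictionary.

# Map each "tortured" phrase to the established term it likely displaces.
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "fake neuron": "artificial neuron",
    "huge information": "big data",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, standard term) pairs found in `text`."""
    lowered = text.lower()
    return [(odd, std) for odd, std in TORTURED_PHRASES.items() if odd in lowered]

abstract = (
    "We apply profound learning and counterfeit consciousness "
    "to classify huge information streams."
)
for odd, std in flag_tortured_phrases(abstract):
    print(f"'{odd}' may stand in for '{std}'")
```

A hit doesn’t prove plagiarism on its own, which is why every flagged paper still goes through manual human review.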

The solution

While Google can’t be blamed for everything untoward in the fields of machine learning and AI, it has played an outsized role in devaluing peer-reviewed research.

This is not to say that Google doesn’t support the scientific community through open source, funding, and research support. And we’re certainly not trying to imply that everyone studying AI is just there to make a quick buck.

But the system is set up to encourage the monetization of algorithms first, and to advance the field second. For this to change, big tech and universities must commit to fundamentally reforming the way research is presented and reviewed.

Currently, there is no widely recognized third-party verification authority for research papers. The peer-review system is more like an honor code than an agreed-upon set of principles enforced across institutions.

However, there is a precedent for the creation and operation of an oversight board with the reach, influence, and expertise to govern beyond academic boundaries: the NCAA.

If we can unify a system of fair competition across thousands of amateur athletic programs, it’s a safe bet we could form a governing body to establish guidelines for academic research and review.

And, when it comes to Google, there’s a better than zero chance that CEO Sundar Pichai will find himself called before Congress again if the company continues to fire the researchers it hires to oversee its ethical AI programs.

American capitalism means that a company is generally free to hire and fire whoever it wants, but shareholders and workers also have rights.

Eventually, Google will have to commit to ethical research, or it will find itself unable to compete with the companies and organizations that do.
