The progress of AI is not too fast, but too slow

Original source: Pondering things

Altman's ouster led to talk of the Q* algorithm, which in turn seemed to lead to a conclusion: strong artificial intelligence is coming. The reality may be the opposite: artificial intelligence has indeed made progress, and it carries great potential and disruptive power, but its overall progress is not too fast; it is too slow.

Turing Test 2.0: ditch it, then return to it

The 1950 Turing test says that when a person converses with an unseen human and an unseen machine and cannot tell which is which, the machine passes the test.

In some scenarios, large models can now indeed pass this test, so this version of the Turing test is an outdated method of little significance.

But the underlying kernel of the Turing test remains valuable. Its approach of delineating a scenario, letting the AI act in it, and judging whether the intelligence suffices by whether an outside observer can perceive the difference is not outdated; it actually becomes more critical.

Extending the Turing test, we can delineate a position or scenario within an economic activity and examine whether an AI can fill it, with the party receiving the service unable to tell whether it is provided by a human or by a machine. If the AI can do it, it passes Turing Test 2.0; otherwise it does not.
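
To make the protocol concrete, here is a minimal sketch of Turing Test 2.0 as a blinded service evaluation. This is my own illustration, not anything from the original article; every name in it (`turing_test_2`, `human_serve`, `ai_serve`, `judge`) is hypothetical, and the 5% tolerance around chance is an arbitrary choice.

```python
import random

def turing_test_2(human_serve, ai_serve, judge, tasks):
    """Hypothetical sketch: for each task, the service is provided by a
    human or an AI at random, and the customer-side judge guesses which.
    The AI 'passes' if the guesses stay close to chance."""
    correct = 0
    for task in tasks:
        is_ai = random.random() < 0.5                  # blind assignment
        deliverable = ai_serve(task) if is_ai else human_serve(task)
        guess = judge(deliverable)                     # 'ai' or 'human'
        correct += guess == ('ai' if is_ai else 'human')
    accuracy = correct / len(tasks)
    return abs(accuracy - 0.5) < 0.05                  # near chance => pass
```

The design point worth noting is that the judge here is the party receiving the service, not a technical evaluator in a chat window, which is exactly the upgrade the article argues for.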

Why does this make sense?

Because the original Turing test is more like testing an agent living in a virtual space: it does not need to distinguish true from false. As long as it maintains logical self-consistency, it can achieve the goal of passing, and it does not matter if it talks nonsense along the way. That is a purely technical perspective.

There is a science-fiction film, "The Man from Earth", in which a man claims to be a caveman who has lived for 14,000 years, witnessing the changes of human history and civilization and even crossing paths with the Buddha and Jesus. The scientists sitting in the room with him try to verify his claim with logic, but it turns out that truth cannot be settled from inside the room: as long as someone is knowledgeable enough and logically self-consistent, you simply cannot tell. Step out of the house, though, and everything changes at once; other facts and feedback quickly determine whether the claim is true or false.

Similarly, whether artificial intelligence is truly intelligent is not only an academic and technical question but also a commercial one, so it must step outside and be tested in larger scenarios; it cannot remain a language model that can only chat. At this point we need to follow the same idea, return to the Turing test's kernel of comparing intelligence, and upgrade it.

I have written about this before (Pondering things: "Can AI make money?"), calling it the full-scenario coverage method, and as attention to artificial intelligence grows, it seems increasingly necessary to emphasize this perspective. Because our entire civilization is built on intelligence, there will always be countless perspectives on artificial intelligence. One is fantasy without anchors, in which AI can do anything, like an imagined superman; that is good for writing novels. Another is a purely technical perspective that swings between joy and gloom: either doubting the thing can be useful at all (never mind the current popularity; most AI researchers over the past decade were actually pessimistic), or watching the daily stream of new developments and feeling the world is about to be overwhelmed.

Swinging back and forth like this is easy when there are no anchors and no scale, yet the scale itself is precisely the essence.

Why is the progress of artificial intelligence actually slow?

Compared against its own past, the technical field has indeed made a lot of progress: whether the recognition rates of earlier years or the content generation of this wave, large models have advanced considerably. But switch to the Turing Test 2.0 perspective above and you will find that even today they still cannot pass. It is like a curve that approaches the line ever more closely yet never breaks through.

The division of labor inside an enterprise can be enumerated further. Typical positions are:

  • Functions: HR, Finance, IT, Administration, Intellectual Property
  • Production and research: product, R&D, supply chain, testing, operation and maintenance
  • Business: marketing, sales, pre-sales, after-sales

Each position is subdivided further, both vertically and horizontally: vertically by hierarchy, what we usually call the reporting line, and horizontally by responsibility, such as front end, back end, and app.

A product company of 100 to 200 people has almost all of these positions. Returning now to the Turing Test 2.0 perspective: in which of them can current artificial intelligence pass?

I am afraid it cannot pass in any of them, not even in programming, where it is most advanced.

In programming, current artificial intelligence cannot complete the mapping from the requirements model to the development model (though it is true that one person can now do the work of two). That is to say, someone still needs to abstract the requirements model and turn it into a development model.

Therefore, artificial intelligence based on large models cannot pass Turing Test 2.0, and if it cannot pass, the realization of commercial value will run into problems (while passing does not guarantee the absence of problems).

So, combined with the scenario perspective, we can say that although the field has been busy for more than ten years, progress is far from as fast as we thought. OpenAI has by now mobilized pretty much all the money it could mobilize to sprint at this threshold, and we should sincerely hope it breaks through, not the opposite.

Passing or not passing are two entirely different situations: if it cannot pass, it is like a reservoir that occasionally serves as a water supply pool; if it passes, all the accumulated potential energy comes rushing down.

The first step of this kind of industrial restructuring is more likely to be a great folding, a folding that compresses things to the extreme, followed only then by new life.

The folding may be hard to grasp at first, so take e-commerce as an example:

E-commerce certainly knocked out traditional department stores, and it stimulated a series of new industries such as food delivery and live streaming. But first the traditional department stores were folded away; only afterward did the situation gradually emerge in which everyone collectively sells goods through live streams.

If AI passes Turing Test 2.0, something similar will happen. For example, if everyday copywriting is done entirely by AI, the business value captured at the API layer is probably only a few thousandths of what the job was worth, but the job itself will be consigned to history, and only then can new roles and positions be created.

In fact, this folding process contains a second challenge: you may collapse many existing positions yet fail to become a virtuous model that keeps developing. (If things really stop there, it harms others and oneself alike.)

During the few days when Altman was forced out, a piece of news surfaced in passing: every call to OpenAI loses money. In other words, OpenAI operates in a fragile balance: global attention attracts massive capital, and this dynamic of attention driving excitement is structurally the same as a Ponzi scheme (not that AI is a scam, but the pattern is very similar, digital currencies included). Within this dynamic, the key is whether real business value can ultimately be realized so that the next cycle can be driven. Ponzi schemes do not lack gains along the way; the problem is that the final accounting never arrives, the expected value turns out to be completely empty, and everything falls rapidly to nothing and collapses.

From this perspective, AI needs to build a virtuous cycle: step one is passing the Turing test; step two is igniting a new positive-feedback pattern, the way the internet did after 2000, through the rise of AI-native applications. Otherwise everything so far is just a prelude. Seen this way, claiming at this point in time that AI is moving too fast seems comical.

Of course, this is not a problem only for OpenAI and a handful of companies like it; it also involves a large number of entrepreneurial projects.

If the general-purpose model playing the role of the engine does not pass Turing Test 2.0, the various attempts built on top of it will not turn out well.

Potential Victims

Recently I have happened to see introductions to many entrepreneurial projects in this wave, and after reading them I feel that if the eventual peak intelligence of large models does not pass Turing Test 2.0, these projects will slowly die, like fish in a drying lake.

It is not appropriate to single out specific projects, so let me describe a composite example: a team finds that an enterprise uses multiple platforms and repeatedly reconciles data across them, and then combines RPA with models to improve the process. Is there value in this? Yes, but if the share of intelligence is too low, the value created will not cover its own consumption, and the business does not hold up commercially.

Another example: there is plenty of work at home and people are unwilling to do housework, so is a robot valuable? It is, but if the robot is not intelligent enough, no truly useful product will come of it.

As an aside, I went to an event over the weekend to meet a few old friends and saw several robots in use there, which left me wanting to cry without tears. These so-called embodied robots have made no essential progress in more than ten years: still a chassis plus a pad, and the only part that has genuinely been polished is the smart-speaker component; even in noisy surroundings, its speech recognition was still reasonably accurate.

There are many, many projects like the above, including the supply chain that provides ammunition to AI companies by making chips and preparing data. Everyone wants to be NVIDIA, but if AI cannot pass Turing Test 2.0, perhaps one more NVIDIA will emerge, and no more than that.

If the peak of intelligence cannot be raised further, these products will be stuck below a certain line: the money that has to be spent will not be one cent less, it simply will not create new value.

From this perspective, it is easier to see what it really means that AI is developing not too fast but too slowly: it becomes a question of whose health bar is longer, and how much health bar there is in total!

A little metaphysical

In the existing economic system, people largely function as tools, and the time spent playing this role squeezes the proportion of time left for family and life. Only a very small number of people find joy in this tool role; the vast majority do not, yet all must work. This is the alienation discussed in the past, differing from the modern era only in degree.

People, tools, and organizational models together constitute a kind of ceiling on capability, and the higher that ceiling is pushed, the faster the conveyor belt turns under the feet of the people inside, which shows up as some people growing busier and busier.

Yet when people contemplate losing this role they dislike, they panic even more, because it feels as though the economic umbilical cord has been cut.

That is the most interesting part: how do you find joy after losing something you never liked?

Artificial intelligence is one element of civilization, supplying the power to reconstruct past social structures, but it is not everything. Building on its progress, we may be able to solve at lower cost problems that could not be solved before, such as poverty and hunger; it will expand the freedom of the whole society, give people more room to solve problems, and enable integration at a higher level.

On this point I agree with Kevin Kelly: technology always brings both good and bad, but on balance slightly more good. At the very least it widens the scope of the possible.

From this point of view as well, the development of artificial intelligence is slow.

Summary

In artificial intelligence today, technology, social imagination, and commercial judgment are piled together, so viewpoints proliferate. At the current stage, a purely technical or a purely social interpretation of artificial intelligence may not mean much; only from a commercial perspective can we clearly see its contradictory superposition, at once on the verge of death and brimming with vitality. That is why returning to Turing Test 2.0 should be meaningful.
