Is the AI Revolution Losing Steam?



Explore whether the AI revolution is losing steam or if it’s just a media narrative. Dive into the Wall Street Journal’s recent piece …


41 thoughts on “Is the AI Revolution Losing Steam?”

  1. We do see a slowdown of AI adoption in the industry I work in. It has clearly plateaued. I think there's a lot of "want" in the AI industry but not a lot of real "need".

  2. Right now, very few startups make money with LLMs, and for enterprises they’re too unreliable for mission-critical applications. The realization of these facts is sinking in. Nvidia probably scores 90% of the profits generated by the LLM gold rush right now.

  3. We will use AI to do what? As a statistician and computer programmer, I use these skills once a week in my job and never outside of it. Why people like my mom would ever choose to use AI is beyond me. Or even me: why do I need it? Why would I believe it? Dummies think AI will be something.

  4. We are in serious trouble as long as the first things that come to mind for AI safety are: #1, controlled nepotism, meaning the majority can't be trusted with access; and #2, a Terminator or rogue AI, as if weak-minded people being bullied or getting lazy will be the chink in the armor that lets it launch nukes, or some other ignorant plausible-deniability excuse.
    They're forced to automate sectors that have been essentially automated for 150 years and that require tough hardware, in areas where we don't even have that much demand.
    Meanwhile they deflect and lobby away from easy-to-replace sectors; look at how many cut-and-dry programs have been refused over the past 30 years because they don't have built-in mechanisms to enforce them. Most of these have set the excuse bar for "controlled nepotism" AI so high that it's creating an avalanche, and sociopolitical intervention is going to have to burst that bubble.

    These cut-and-dry, qualified-or-not systems, with court judges as arbiters for the anomalies, would never have been connected to other AI had we properly innovated along the way.

  5. For sure, there appear to be some exaggerated boasts from leading AI camps. For example: OpenAI announced ChatGPT 4o ("omni") on May 13th in a livestream event showcasing the dazzling capabilities of this new model, which it says was built from the ground up as a fully integrated, brand-new model in which text, image (input/output), speech (input/output), and vision (using the user's camera) are all interwoven into a single model using the same neural network. The prior ChatGPT 4, by contrast, used multiple models for text, image, and voice that had to communicate with one another, causing latency and loss of information.

    However, despite OpenAI saying this new 4o "omni" model was rolling out as of May 13th for "Pro" subscribers (except for the new speech features, which would roll out a bit later), and although the model appears in the menu for ChatGPT Pro / Team / Enterprise subscribers, it is missing pretty much all of the advertised features, and when comparing results with the prior ChatGPT 4 model, the results are often equal or worse … other than outputting a bit faster.

    On OpenAI's website, its "Hello ChatGPT 4o" page showcasing all the glitzy things this brand-new model is supposed to be capable of doing includes, if you scroll down below the video examples, a section entitled "Explorations of Capabilities" with a drop-down menu of 17 amazing examples, each including the prompt that was used and the amazing output. However, when I use OpenAI's own prompts in my ChatGPT 4o … it fails terribly, giving repeatedly wonky results.

    For example, the web page demos an example of 4o outputting an image with long-form text: a 3-verse poem, 12 lines in total, in elegant, meticulous, legible, perfectly spelled handwriting. OpenAI's example is perfect … but when I copy and paste OpenAI's same prompt, it outputs a wall of gibberish text with malformed letters, not even remotely matching the number of lines in the poem, sometimes outputting 23 lines of unformatted gibberish. In some outputs the gibberish text is written sideways.

    When I afterwards ask it to analyze the image and tell me what it says, it is able to identify that the text is illegible … but if I ask it how many lines of text are in the image, it fails each time to provide an accurate count.

    Same thing with another example, a first-person perspective of a robot typing a message on its phone with just two message bubbles containing a small amount of text. In OpenAI's example, the text is perfect, including the correct letter placement on the onscreen keyboard in the image. But when I try replicating the example, I get complete gibberish text messages, with more messages of illegible text than the prompt requests.

    But the most interesting thing is that in my ChatGPT account, when I'm only using this ChatGPT 4o model, my ChatGPT 4 (the older model) suddenly becomes greyed out, stating it's "timed out" for reaching the limit … but I wasn't using ChatGPT 4, I was using the new model. Also, there are many times when I'm certain the model has been swapped out for ChatGPT 3.5, because it will suddenly begin making errors and hallucinating in every response, which continues for most of the day and may be resolved the next day if I'm lucky.

    So there's obviously some funny business between what's being "launched" and what paid subscribers are actually receiving. Another hint that something's fishy: OpenAI is supposedly using DALL-E 3, which came out in October 2023 and was highly touted for its ability to render properly spelled text. I used that model a lot, and it could accurately spell small blocks of text, e.g. a road sign, the name of a shop, words on poster art, etc. But suddenly the DALL-E in OpenAI's products is no longer able to correctly spell even a small number of words in an image. I asked it to make an image of characters holding up the letters of the word "Apple", with an arched banner saying "Learn To Spell". It took me at least 20 re-rolls before it finally output a correct result! So why is a less powerful version of DALL-E being served to paid subscribers … months after DALL-E 3 was launched?

    The same goes for Google's AI event that took place the following day, where it also made glitzy announcements that have failed to live up to the advertised features. And why are we even talking about Sora and Google's fancy animation when, so far, they aren't available for subscribers to test to see whether they live up to the glitzy examples?

    When the prior version of ChatGPT 4 was launched and incrementally rolled out, subscribers from various countries immediately began uploading videos showing what they were doing with the (prior) voice and image features … so even if some users didn't receive those features until several weeks later, at least we all knew the new features had indeed begun to roll out. However, with the new ChatGPT 4o features and functionality, in discussions on Reddit, so far not a single person has been able to replicate any of the 17 examples demoed on OpenAI's "Hello ChatGPT 4o" page … and it's now more than 3 weeks since OpenAI said these features had begun rolling out to Pro subscribers. So there's definitely a lack of transparency going on, where AI companies try to outdo one another by making dazzling announcements of half-baked, premature AI products that they pretend to launch … but that seem to be a bunch of bits and pieces of older models cobbled together and shoved into a new package masquerading as a brand-new model.

  6. Nope. Not in the slightest. But the skeptics will certainly use the typical summer slowdown experienced by every industry as an excuse to claim it is. lol

  7. AI is advancing steadily. New chips and hardware are in the works. Intelligent general-purpose robots are a real possibility. If the conversation is about an economic hype bubble, who cares? The financial sector can whinge about asset speculation all it wants; the rest of us can focus on the tangibles.

  8. Didn't watch the video yet, but just wanted to say this quickly: yes, there are bad-actor companies that take advantage of the AI hype and give AI a bad name, but if anyone thinks AI is losing steam, they are delusional. Every big company keynote has amazing updates; look at NVIDIA, for example. Again, I didn't watch the video yet, but you'll probably have a more nuanced take.

  9. 😮 Translation: the same media companies that are replacing people with AI want to soften the blow, so they want to convince everyone AI isn't dangerous because AI is just hype, right up until the last minute when they fire all of you, which they are literally in the process of doing. Remember, these media companies are controlled by the WEF (World Economic Forum) and the UN. There seems to be an angle everyone's missing about AI: AI is a planned event, like the pandemic. Don't you think it's weird that people are getting H1 H1 and AI is a thing now? They know people don't want to get drafted or work anymore, so they're going to replace everyone with robots and potentially, as the Georgia Guidestones say, eliminate most of the human race and maintain a human population of 500 million. 😮

  10. Just looked into the stats of the major AI channels. Most of them have pretty bad view drops. I think people have AI fatigue, but the technology is still going full steam.

  11. Yes, I agree. It appears a little bit like a plateau from inside the curve, because everything feels like it's moving slowly whenever you're inside any change; it looks fast when you look back with more perspective. OpenAI have been holding back on releasing large improvements because they just don't need to in order to remain in the lead. And it's not just OpenAI saying this is not asymptoting. There's plenty of steam.

  12. I would say the problem is that all this AI stuff is mostly limited to generating something. As a developer in college, I am not sitting here 24/7 generating images/videos/text/code again and again. We need more domains, and I would say that slowly and steadily we are going to return to blockchain with heavily integrated AI capabilities, because people and devs need genuine, long-lasting applicability.

  13. Jesus Christ, have you all forgotten Altman is a recently proven top-to-bottom liar? Or do you all think it's OK for one of the main shapers/developers of AI to tell us lies about a thing that could kill us all? (He got sacked from OpenAI for lying to board members and teams and withholding information from other senior members.) What the fuck is wrong with you people?

  14. I've been watching AI art for the last year or so, and the developments have been truly amazing, but I am now getting the sense that the technology has reached some kind of plateau: not in terms of image quality, which does continue to improve incrementally, but in terms of narrative. It's easy to get AI art generators to output images with simple narratives like 'picture of a kitten dressed as a cowboy', but how do I create a scene where the late evening sun just touches a temple gateway in exactly the right place to illuminate the arrival of the hero, while throwing the rest of the scene into atmospheric shadow that conceals the hero's nemesis, who appears in outline, revealing by his posture his intent to waylay the hero as he enters the building?

    A human artist could take this brief and from it create an image that told this story, using their understanding of light and colour and composition and body language etc., but it's not clear to me that AIs in their current form will ever be able to do this, because they lack any significant understanding of the real world.

    If you show the average person a photograph of a dog and ask them what they are looking at, they will say 'It's a dog!', which is not true: what they are looking at is a collection of pixels printed on a flat surface. This process of projecting complex realities onto abstract patterns of coloured squares is so effortless and instinctive that most people could not see anything BUT a dog; they could not see that photograph for the abstraction it actually is.

    AIs cannot perform this feat. What they see when shown a collection of pixels is a collection of pixels; their apparent ability to interpret those pixels as a dog is a fake, created by the fact that they have been trained to associate the word 'dog' with certain patterns of pixels. While this trick looks the same as the way humans project meaning onto pixel patterns, it is not the same, because humans are drawing upon their understanding of what a dog actually is, a living, breathing reality, while the AI is simply correlating pixel patterns and word patterns.

    I think this is the scenario that current AIs in most domains are running into: they are reaching the limits of what can be done by correlating data sets to synthesize new data, in part because sources of new data are drying up, but mostly because in order to progress further AIs will need to gain a genuine understanding of how the abstractions they deal in reference a multidimensional reality that is not itself an abstraction.

  15. I don't think there's any evidence in the market that AI improvements are accelerating. It's important to keep in mind that the breadth of application is still being explored, and that can always be refined. The depth has not changed; accessibility has.

  16. I'm having a déjà vu moment here. I swear we had the same thing pop up last summer, just before something big came out. I still think that OAI is going to announce a new model this month around WWDC.

  17. I don't understand how this channel only has 20k subs. This is by far the most level-headed, up-to-date source for AI news. Keep up the great work!

  18. I would like AI to slow down for a few reasons. Job loss is inevitable when new technologies can perform tasks faster and cheaper. While this isn't inherently bad, the consequences are concerning. Countries simply aren't prepared for such a shift. We require currency for housing and food, and without a way to earn enough, some people might end up with only one or, in rarer cases, neither. I hope AI development slows down so humanity can figure out some form of universal basic resources—such as food and housing—before we let the cat out of the bag.

  19. They're focused only on the USA market, and it's overcrowded. As a businessman myself, I can't understand why they don't expand their business into the vast markets of other languages and cultures; in terms of competition it's almost a desert there. Their strategy of getting the ultimate best AI in the USA first and then conquering other countries afterwards isn't working. Once people in other countries get a local player, they will not use American companies' services, and local politicians predictably only help that along.

