Hi S!
Indeed, AIs currently make a huge number of mistakes, and if a capable human doesn’t supervise what they propose as answers, the errors can be massive. There has been excessive hype: too much publicity, misinformation, and myth surrounding AI. They are tools, very useful ones, but just as a hammer doesn’t drive a nail on its own, an AI is incapable of working properly without human oversight.
I’m an AI enthusiast, just as I am with any other useful technology: fusion energy, for example; but patience is key, and we need to let go of fantasies. For AI to become a truly reliable and useful tool without human supervision, we still have years to go… or perhaps even more if an unforeseen physical barrier emerges.
For now, AI systems require more and more energy... and I’m talking about monstrous amounts, which isn’t exactly good news for those concerned about climate change. That’s another area that needs major improvement if we don’t want AI systems consuming the bulk of our generated energy.
Technology takes time, and we shouldn’t get carried away by either excessive enthusiasm or alarmism… and no, AI will never have consciousness; it’s impossible.
The effects of AI on climate change are very significant.
I just worry about the lack of regulation of AI. This week saw an AI summit hosted by France fail to reach a consensus on regulation. It seems America and Britain want little or no regulation, and the rest of the world wants more.
@spunkycumfun
If other countries, like the U.S. or UK, don’t regulate certain technologies, and we already know that China will regulate them just to suit its own interests, then those who try to develop AI, or any other technology, under strict rules, controls, and limitations, as is the case in the EU, are going to lose the battle. In the end, whatever they don’t develop themselves, others will, and they’ll flood us with their technology… When a technology is truly groundbreaking, it becomes impossible to stop.
What unelected bureaucrats controlling the EU really fear is losing control over something and over the people. But who knows? The only thing I’m certain of is that anyone who doesn’t go all in on AI is going to lose the most important technological race of the first half of this century.
We’ll see, we’ll see… I’m optimistic
very optimistic
@AuraAviatik6 I was just hoping for worldwide regulation but it looks as if that's not going to happen. I'm probably not as optimistic as you are.
@spunkycumfun
The real danger, in my view, would be if AI were entirely under government control (let’s not forget, governments are run by politicians...). If citizens were only allowed to use crippled and controlled versions, that would be a real threat. As one of your interviewees recently pointed out, corrupt governments (is there any that isn’t corrupt?) could use AI to rewrite history books, adapting them to their ideology... which, of course, means their convenience.
But if ordinary citizens, the honest ones who actually pay taxes, have access to the same weapons as governments, then the situation changes. That said, I could be wrong, and AI (or rather, AIs) could turn into a disaster. But the same risk existed with nuclear energy, and it’s already happening with quantum computing.
Fascinating times from a scientific and technological perspective...but also worrying
Two sides of the same coin…
@AuraAviatik6 My preferred regulation would be government regulation but carried out by experts and not politicians. I'm not a great believer in business self-regulation.
@spunkycumfun It would be a good idea, I agree, but I'm afraid there's no such thing as independent experts... starting with the fact that they would be selected by politicians. On the other hand, I don't think AI is so dangerous... Quantum computing is potentially billions of times more dangerous. And there are other 'surprises' to come, some good, some bad, as always.
@AuraAviatik6 I agree that politicians will appoint the regulators, or at least the chief regulator, but the mission of the regulator can be enshrined in law (like most central banks) to minimise political interference.
@spunkycumfun But that would be new because it would be "global", not country by country... who knows! For my part, I'm more worried about the use of quantum computing.
Anyway, I’m afraid that no matter what well-intentioned individuals like you and me might want, the economic forces driving all of this are so immense that they crush everything in their path. If the U.S. invests half a trillion dollars in AI (relatively unregulated), as planned... and China likely the same or even more, then the opinions of poor mortals like you and me count for nothing.
That said, I absolutely love exchanging opinions and seeing different perspectives on such fascinating and important topics as this one
@AuraAviatik6 I don't quite know what the French government had in mind. But I think they wanted the big AI players, like America, China etc, to adopt common or at least similar national standards. The problem with international regulation is that an international body, like the United Nations, has limited powers of enforcement, unlike national governments.
@spunkycumfun
I imagine that what Macron really wanted was for others to slow down their progress so that poor Mistral AI doesn’t end up completely out of the game. One thing’s for sure: he wasn’t doing it for the good of all humanity.
Anyway, S, we’ll see what happens… we’ll see.
@AuraAviatik6 I realise politicians, just like business leaders, are self-interested. The key thing is to ensure their self-interests aren't passed off as the public interest.
@spunkycumfun So true!
In my limited experience of using AI, I've found it untrustworthy. Particularly in my profession which is safety critical. We need human oversight, for the time being at least.
And based on my limited experience of Copilot, it's a total pain in the arse!!
Having said all that, perhaps the best outcome would be for AI to gobble up musky boy and his sheeply herd of sycophants . . .
Great post by the way (meant to say that but yeah, stupid 'enter' button that only this site has . . .)!!!
I've only ever used AI once. As an experiment I asked ChatGPT to write me an essay on Boris Johnson. The essay would have barely passed - very descriptive and hardly any sources were presented.
There certainly are things AI could help accomplish, scouring vast amounts of data efficiently mostly, seeing trends. But as you point out, having it surf the web to generate news, is really a classic case of garbage in, garbage out.
AI seems to be rushed out. Tech companies are in a rush, and governments are in a rush to attract AI investment.
This made me think, because my company is betting big on integrating AI into its software. Now I want to check the news stories. It’s scary what could go wrong. I’m not involved, but now I’m interested. I think AI is vastly overhyped, kind of the “fake it till you make it” mantra.
You're in a great position to see how AI is developing.
Unfortunately, those not willing to educate themselves, who remain ignorant, will believe whatever AI spits out.
I have used MS Copilot at work and I wasn't impressed.
I do use AI translation software, but it needs to be reviewed by a human. Consistency isn't a thing with AI, nor does it work well with annotations.
That's my worry too. Many people seem to have a blind faith in anything AI-generated.
I for one have been using AI to help me create. The key word for me is "help". It does what I ask of it in the line of creating, but for it to work properly I have to tell it exactly what I need it to do. Those details are important. If I leave any of them out of the equation, AI may give me a limited response. Those limits can skew things, and then you are not using the tool effectively.
If I have tools in my garage and am limited in the use and ability of certain tools, then as the person wielding the tool I won't get the results I was hoping for. AI is a new tool in our proverbial tool shed. The tool you use is only as good as the knowledge and experience you have in using it.
I think folks believe: well, if I'm using AI, I'm using something that has access to everything online, so it must be great and I can do a lot with it. In reality it may be a resourceful tool, but like the newest tool in the tool shed that does so much, you need to learn how to use it to get the best results. Which boils back down to the person using the tool. You have to look at things and see if the tool made an error by taking too much or not taking enough. With AI, you have to analyze what it provides to you. Is the information accurate? Did it get the data right? So just as a craftsman looks over what he made with the tools he used, the person using AI has to be responsible and verify that their product is accurate.
When news programs or magazine articles were written and produced years ago, they had people on staff who verified things. Today, with the advent of technology and companies streamlining the workforce by replacing people with technology, how is that working out? Technology is just a tool, something to aid in doing a task. I feel you still need the human element in the equation. Machines or tools without humans involved mess up just as badly as a lazy person at a job who slacks off and is less thorough in doing their task.
I believe there will come a time when legislation is passed requiring anyone who uses AI to create something to state that AI was used in the process. Maybe this way folks will look at things presented that way and realize it isn't gospel.
No AI was used in writing MY comment.
Your post got me to write this post. I'd like people to declare that they've used AI in their creations and/or operations. It seems the honest way forward.
@spunkycumfun I agree with you on that. My initial post that I wrote last night was in fact written using AI. I gave it all of what I wanted to cover. I mentioned my example of comparison and what I wanted to say. It wrote what I wanted to say, and after it was written it asked me if I wanted to tweak it or change it. Even the pictures I got were AI generated, because I asked in my query for images created using AI. It is just a tool.
If I didn't put effort into what I wanted from it, it would have missed its mark, in my opinion.
@CallMeMrWrong69 You've convinced me that AI needs specific instruction, in other words, human oversight.
A new oxymoron... artificial intelligence. Should tell you all you need to know.
I think artificial intelligence is useful, say, in diagnosing cancer and in forecasting the weather.
Honestly, I don't use any AI tools or any other computer tools to help me with anything. AI is going to get even worse, and it can be very dangerous in many areas.
Thanks for sharing this useful information with us, and I hope your Thursday is filled with many fun and sexy temptations.
I don't use AI either.
That last one is a great cartoon!
I like that cartoon as well. I slipped it in even though it's not about AI.
I like that last cartoon. I have been known to say 'correlation does not imply causality' on more than one occasion.
That correlation/causality line often flummoxes people in an argument.
Well the goal of AI was to be more human like and like us AI gets facts wrong and makes things up.
Point taken. But I think one of the problems is that many people seem to put great trust in AI-generated knowledge, especially when compared to human knowledge.
The scary part of AI is that even when the programmers are trying to write unbiased code, they cannot control the garbage that will be inputted. Sadly, too many people believe AI is not biased and will believe anything it craps out more than what an actual expert says. AI is not a substitute for thinking critically.
Saw something on YouTube where a person showed that many if not all of the Orange God's edicts are written by AI and not proofread. I best stop before I rant.
BTW Good post.
Like you, I think the biggest problem, or at least the biggest immediate problem, is people's blind trust in AI rather than AI itself.
@spunkycumfun AI is almost as stupid as the tech billionaires who are pushing it and making sure it is slanted.
@justskin1 AI is being pushed hard, too hard I feel.
@spunkycumfun But it make such a great tool to control those who are too lazy, or have never learned, how to think. Unfortunately the world is full of such persons and in the USA the MAGA cult has almost cornered the market on them.
@justskin1 MAGA is slowly making its way around the world. We have the Reform UK party, now riding high in the polls; in Germany they have the Alternative for Germany (AfD) party, expected to do well in Germany's soon-to-be-held national elections; and in Italy the Brothers of Italy party is the ruling governing party. All three parties are premised on making their countries great again. It seems we all want to be great again!
@spunkycumfun I think all of those movements are the result of the Oligarchs screwing over the ordinary people for years as they told them it was all someone else's fault.
@justskin1 I'm inclined to agree with you.
That makes my brain tired just thinking of that. 😴 Or maybe it's because it’s 1 a.m. here. 🌝
I hope you got some good human sleep last night.
AI is a joke. A dumb joke.
An AI algorithm walks into a bar. The bartender asks, "What will you have?". The algorithm says, "What's everyone else having?"
I have played around with both ChatGPT and DeepSeek, but as yet never used the output. Lately, reading news articles and op-eds, I've seen signs of AI assistance; with familiar writers you can pick up a change in style. The scary part: how many news reports are written by AI? Have you seen the latest news of Apple using Alibaba's AI in its phones in China?
I worry about AI-generated news. I don't think AI is reliable enough as things stand now.
The AI models tend to hallucinate facts. I think this is the biggest danger of these things. I like the bottom cartoon. I remember reading about a statistician who claimed Catholic priests were responsible for an increase in alcohol consumption in New York.
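The priests-and-alcohol claim is the classic confounder trap: both numbers rise with a city's population, so they correlate strongly even though neither causes the other. A toy sketch in Python (with entirely made-up numbers, just to illustrate the point) shows how easily that happens:

```python
import random

random.seed(0)

# Hypothetical toy data: "population" drives both the number of
# priests and total alcohol consumption; neither causes the other.
population = [p * 1000 for p in range(1, 51)]
priests = [0.001 * p + random.gauss(0, 2) for p in population]
alcohol = [0.5 * p + random.gauss(0, 2000) for p in population]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# The correlation comes out very high despite zero causal link.
print(f"priests vs alcohol: r = {pearson(priests, alcohol):.2f}")
```

The two series end up almost perfectly correlated, which is exactly why "correlation does not imply causality" matters: without checking for a common driver like population, the numbers look damning on their own.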
A senior partner of the Hill Dickinson law firm warned that AI-knowledge needs to be fact-checked by a human. I'm sure there's an irony there somewhere!
@spunkycumfun I am not sure irony is the word I would use.
@Notaname99 The irony is that AI is supposed to be helping humans, not the other way round.
@spunkycumfun I had a conversation yesterday with a salesperson. I think AI does have some uses, but they are limited and it must be used with caution. It is OK for writing, and for someone who is challenged by writing it could be useful (I have never used it, by the way). Using it for research or even news is dangerous.
@Notaname99 I've heard AI is useful for diagnosing cancer and forecasting weather. But, as you suggest, there's a big difference between using AI as an analytical tool to help human decision-making and using it as a substitute for human thought.
I once, as an experiment, asked ChatGPT to write an essay. It was a dreadful essay and would have barely passed if submitted by a first-year student.
@spunkycumfun I was thinking it could help students study based on their performance on homework and study aids. It could use the data to point out areas they need to work on more. I think that is where it could help my students. Unfortunately, that is not what the textbook company created.
@Notaname99 Often with relatively new things, there are unintended consequences.