Are there generative AI tools I can use that are maybe a bit more ethical than others?
—Better Choices
No, I don’t think any one generative AI tool from the major players is more ethical than the others. Here’s why.
For me, the ethics of generative AI use can be divided into concerns about how the models are developed, in particular how the data used to train them was obtained, and ongoing concerns about their environmental impact. Powering a chatbot or image generator requires an obscene amount of data, and the decisions developers made in the past, and continue to make, to obtain that data are questionable and shrouded in secrecy. Even what people in Silicon Valley call “open source” models keep their training datasets hidden.
Despite complaints from authors, artists, filmmakers, YouTube creators, and even social media users who do not want their posts scraped and turned into chatbot slop, AI companies have typically behaved as if consent from those creators were not needed for their output to be used as training data. One familiar claim from AI proponents is that obtaining this vast amount of data with the consent of the humans who created it would be too unwieldy and would hamper innovation. Even for companies that have struck licensing deals with major publishers, that “clean” data makes up an infinitesimal part of the colossal machine.
Although some developers are working on approaches to compensate people when their work is used to train AI models, these projects remain fairly niche alternatives to the mainstream.
And then there are the ecological consequences. The current environmental impact of generative AI is similarly outsized across the major options. While generative AI still represents a small slice of humanity’s aggregate stress on the environment, gen-AI software tools require far more energy to create and run than their non-generative counterparts. Using a chatbot for research assistance contributes far more to the climate crisis than simply searching the web on Google.
It is possible that the amount of energy required to run these tools could be lowered: new approaches, such as DeepSeek’s latest model, sip precious energy resources rather than guzzle them. But the big AI companies seem more interested in accelerating development than in pausing to consider approaches that are less harmful to the planet.
How do we make AI wiser and more ethical rather than smarter and more powerful?
—Galaxy Brain
Thank you for your wise question, fellow human. This predicament may be a more common topic of discussion among those building generative AI tools than you might expect. For example, Anthropic’s “constitutional” approach to its Claude chatbot attempts to instill a sense of core values in the machine.
The confusion at the heart of your question comes down to how we talk about the software. Recently, many companies have released models focused on “reasoning” and “chain of thought” approaches to research. Describing what AI tools do in human terms and phrases makes the line between human and machine unnecessarily hazy. I mean, if the model can truly reason and have chains of thought, why couldn’t we send the software down a path of self-enlightenment?
Because it doesn’t think. Words like reasoning, deep thinking, and understanding are just ways to describe how the algorithm processes information. When I pause on the ethics of how these models are trained and their impact on the environment, my stance is not based on an amalgamation of predictive text patterns, but rather on the sum of my individual experiences and closely held beliefs.
The ethical aspects of AI outputs will always circle back to our human inputs. What are the intentions of a user’s prompts when interacting with a chatbot? What biases were embedded in the training data? How did developers teach the bot to respond to controversial queries? Rather than focusing on making the AI itself wiser, the real task is to cultivate more ethical development practices and user interactions.