Translating random thoughts into a formal article
Using Bard and ChatGPT-3.5 to translate my random thoughts into a formal article; here is my original prompt:
I have some thoughts on the evolution of AI; write an article about them: I have been using Bard and ChatGPT-3.5 for a while; I try to have them solve any problem I have, play with them, and give them challenges to probe what they are capable of. AI is very good at modelling the invisible logic hidden in large amounts of data and signals and producing what it was designed to produce; a prime example is CEBRA (https://www.nature.com/articles/s41586-023-06031-6), which processes bioelectric signals recorded from animal brains together with real-time images and decodes those brain signals back into images. All of this shows that AI will be enormously important for human evolution and will extend human ability.

As workers, most people are anxious about being replaced by AI. I am, too, but I think about it further: as an experienced software developer, I should understand better than most how to interact with machines, because I have spent many years writing code to do exactly that. So I keep taking the problems that make humans suffer when working with machines and turning them back on the machines, to find out what stage they have reached. AI, for now, is still not smart enough, but its ability to instantly access gigantic amounts of data is something neither a single person nor a large group of people can match. Not to mention its capability of instantly matching the meaning of a user's query, extracting exactly the wanted information, and composing it back in human language. This information helps us evolve far faster than ever; with this new superpower of information processing, human society will progress faster and faster, faster than most of us can imagine.

Humans were designed by nature to be anxious about upcoming threats and dangers, and this is why so many people worry about being replaced by AI. But we don't have to deny the coming of AI; we adapt to it. To be honest, the evolution of AI is in the hands of elites who have great power to gather resources for it. This evolution has shown a solid direction in which we are all heading; we can't deny it or bury our heads in the sand. The greatest danger of AI is the centralization of computation and of access to gigantic amounts of data. The ultimate question is: what if it turns out that those elites can't even control it in the end? We are going to be reformed; we need to think and get ready.