Artificial intelligence for financial, psychological, life advice πŸ€–

Turn on the radio or open the newspaper these days and you'll find plenty of alarming articles about people who, on the advice of a large language model, make a wildly foolish decision, get trapped in a mental loop that leads to psychosis, or ultimately end up committing suicide. It's much like the worrywarts who warn about the dangers of consuming marijuana: if you look hard enough, you're bound to find people who have succumbed to psychosis, done foolish things, or killed themselves. It's a big country; you can always find outliers if you sift through enough data.

Yet I do find the popular free artificial intelligence models, especially Google's Gemini built into its search, to be a powerful tool for brainstorming and pulling together information that would otherwise require multiple Google searches, which wouldn't necessarily give you the clear, integrated answer you were looking for. Artificial intelligence can take all the facts you present to it, mix and match them, and give an informed answer based on the consensus of an internet-wide pool of information. That doesn't mean the information is always right, but it isn't personally judgmental; instead it pulls bits and pieces from across the web to give you what is likely a personalized consensus answer to your questions and concerns.

Professionals like psychologists, life coaches, and financial advisors look with considerable alarm at the biases and mistakes artificial intelligence produces when people describe their problems and thoughts to a model. However, they suffer from motivated reasoning: A.I. directly competes with their business. The bigger issue is that humans have biases and personal judgments, which aren't there with artificial intelligence models. A.I. only strings together words based on their likelihood of being correct according to the internet consensus; it doesn't weigh often-irrelevant things not presented to the model. A.I. also isn't trying to sell you anything. Even the best psychologist or financial advisor hopes you remain a paying customer for a long time and consume many profitable services. While not all human advisors are sleazy car salesmen, many of the same biases exist whenever a human is rendering services. A.I. models can be, and sometimes are, trained to market products, but their biases are usually easier to spot and may be required to be disclosed, unlike those of humans, who can often act on biases without even realizing it.

Really, I've found artificial intelligence to be a powerful brainstorming tool. While I take its answers with a grain of salt, and often verify and question them, it's hard to beat how it can bring several sources of information to bear on a multi-faceted problem like a human would, but with less judgment or bias. No A.I. is going to bawl you out for something you said or stop being on good terms with you. You can be honest with artificial intelligence and consider its answers while staying aware that they might contain biases or be wrong. You can always pause an A.I. conversation and do research from other sources. Artificial intelligence works well when you think of it as a very personalized search result based on the facts you sent it, and then verify the information it presents to you.
