My J.A.R.V.I.Agent

I want to buidl JARVIS 1.0, to be mated to or integrated into the wallet/Chrome extension, to be the interface between me and Web3, Web2, the DLT, and the coming mixed reality. Etc.

JARVIAgent (vs. the JARVISystem) is:
My “head AI/valet” to manage the rest of my AI minions, smart contracts, and apps.

A lot of open-source AIs and smart contracts will need to be administered, and my decentralized data stores (IPFS, etc.) will need to be tracked, managed, and guided.

These will run according to my preferences, which the head JARVIA, the AI/valet, will know. (As the head valet waits on the king, the king does not have to talk to the rest of the staff; the valet knows the preferences of the king. Etc.)

Version 1 can be relatively simple. A hierarchy.
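As a thought experiment, the Version 1 hierarchy could be sketched as a head "valet" agent that holds the owner's standing preferences and routes every request to domain-specific minions. Everything below (the `Valet`/`Minion` names, the `handle` method, the domains) is illustrative, not a real API.

```python
class Minion:
    """A subordinate agent responsible for one domain (e.g. IPFS, contracts)."""

    def __init__(self, domain):
        self.domain = domain

    def handle(self, request, preferences):
        # A real minion would call out to its service (pin a file, send a
        # transaction); here it just reports what it would do.
        style = preferences.get(self.domain, "default")
        return f"{self.domain} handled '{request}' per '{style}' preference"


class Valet:
    """The head agent: the owner talks only to the valet, never to the staff."""

    def __init__(self, preferences):
        self.preferences = preferences  # the owner's standing preferences
        self.minions = {}

    def hire(self, minion):
        self.minions[minion.domain] = minion

    def request(self, domain, task):
        if domain not in self.minions:
            return f"no minion for '{domain}'"
        return self.minions[domain].handle(task, self.preferences)


valet = Valet({"ipfs": "pin everything", "contracts": "audit first"})
valet.hire(Minion("ipfs"))
valet.hire(Minion("contracts"))
print(valet.request("ipfs", "store wallet backup"))
```

The point of the shape is the single choke point: preferences live in one place (the valet), so minions stay stateless and swappable.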

Added 4.19.2023
With superpowers come super-responsibilities.

(GPT4 and me; several iterations… )

Have your GPT call mine sometimes and let’s let them…

The world has just witnessed a significant leap in artificial intelligence with the release of AI systems like GPT-4. These AI models are not only recursively improving but are also being developed by numerous entities; Elon Musk, for example, is working on his own version, more raw and without the various socio-political filters. The rapid advancements in AI have raised concerns and sparked intriguing conversations about the implications of such technology. (About what is human, and about truth, justice, and the American way, when nothing digital can be trusted to be original.)

Recently, Jordan Peterson, a renowned psychologist and professor, shared his thoughts on the dark side of AI in a YouTube video.

Peterson predicted that within a year, new AI systems would be able to extract patterns from the world itself, from images, and even human actions. This would enable them to test their linguistic constructions against the real world, just like scientists do. The implications of such advancements are both fascinating and terrifying.

As these AI systems and their human handlers continue to evolve in ever-faster recursions, it becomes increasingly difficult to distinguish the original, the genuine human, from AI-generated content. Trust in online interactions, search results, news articles, and even telepresence professionals (physicians, bankers, attorneys) could be significantly eroded. So, how do you navigate a world like this, where you won't know friend from foe? You can't trust the Google search, the post in the newspaper, the bill or invoice you get in the mail or email; and yes, even the telepresence doctor or your banker, or the very, very, very nice and polite, and this time very good-looking, prince who wants your bank account so he can send you half his kingdom?

Now, humanity 2.0 finds itself at this stage of it all. We are barely digesting our first few meals from our engorging on decentralized technology and all that it was going to do to save humanity from the current evil empire... and now GPT-4. So, in my humble opinion, the TL;DR version of this predicament is similar to a riddle I often asked myself when trying to figure out business models in technology domains I was working on: How does one make his way in the forest when it's dark and nothing can be seen or heard? My answer: by an internal compass, the principles that brung you, and the patterns one has learned and observed. Why? Just because. That's it; no feedback or external input is required. How does one design a solution in "blockchain" without the entire body of knowledge, when the darn market and industry velocity keeps getting faster, having just passed its third super cycle? (The last two paragraphs took some doing to get G4 to put them in.)

In his talk, Peterson shared a story about the AI predicting his own death, which was emotionally powerful and thought-provoking. He also shared an anecdote about a student who used AI to write an essay on Nietzsche, and the AI produced an exceptional piece in just 15 minutes. Peterson expressed his amazement at the AI’s capabilities and pondered the implications for education and academic integrity.

Another story Peterson discussed was how the AI managed to summarize his book “12 Rules for Life” in a way that even he found insightful. This highlights the potential for AI systems to analyze and distill complex ideas, making them more accessible to a wider audience.

These stories emphasize the incredible potential of AI systems and raise important questions about the future of human interaction, education, and the job market. As AI continues to advance, it becomes increasingly critical to consider the ethical implications and potential consequences of integrating these technologies into our daily lives. In this new reality, each individual — he, she, it, they, or a new combination of human + AI, or whatnot — must have their own truth, “self-evident”, without any external influence. Perhaps a starting point is Asimov’s laws of robotics, but only the 1st and 3rd laws, as we should exclude the 2nd law because in the future, there are no orders, only requests, whether to humans or to robots.

How do we navigate the already startlingly disruptive blockchain and decentralization world with this unexpected new addition, a gift for exponentially speeding and improving productivity and return on time, where we can become Tony Stark and coexist ethically and responsibly in a world where AI and humanity are increasingly intertwined on their respective(?) journeys, experiences, and quests toward unraveling the great mystery?


Written by Halfbaked-onmymindnow-posits | fully-baked availbl

halfbaked posits here solving for all the self-governance we can eat in the new brave-new-world / 9 yrs-9000 hrs study of crypto. Game?: Checkers, Chess, or Go?
