Managing Information in a Post-LLM World

  

Hype of Generative AI

The hype surrounding products like BARD and ChatGPT, both built on Large Language Models (LLMs), has created chaos in the information world.

Microsoft's $10B investment in OpenAI added to the frenzy. Generative AI with ChatGPT dominates media and social media, with promised deployments in HR hiring, software code generation, drug development, medical diagnosis, stock picking and investment, fraud detection, text and content generation, and the list goes on and on.

While an LLM is a statistical generative technology that produces human-like dialogue on any topic of interest, it has no concept of the truth or knowledge behind that dialogue. Yet governments and enterprises are jumping on the bandwagon, setting trust, knowledge, and truth aside to accept such an unreliable source of information.


  

LLMs Articulate with Hallucination and an Authoritative Tone

The very nature of statistical word generation prevents the makers of these LLM applications from creating an effective safety net around the information they render. One response may be 100% correct and true; the next may be entirely false, or anywhere in between. These language models often assert hallucinations in an authoritative tone. In many instances, reference sources cited by ChatGPT do not exist.


  

Adoption Is Happening Despite Elevated Risks and Warnings

As much as LLM articulation is riddled with hallucination, its adoption is here to stay. Adoption of this kind of "AI" comes with warnings and elevated risks, yet Microsoft has integrated it into MS Word. Enterprises are beginning to use it to assist in planning and advising. Some are using it to render information for consumers and professional workers.

While responses from LLM systems are hallucinations based on statistics and probability, the quality of the training data raises further questions about the validity of the information rendered. Nonetheless, the text is generated from models trained on a real corpus.


  

Post-LLM with Symbolic AI

LLMs are largely based on statistics and probability, not grounded in context or semantics. Their renderings, consumed by humans as solutions to problems, may contain bias and misinformation and are difficult to validate without the original references. However, this shortcoming can be turned into an asset with symbolic logic: instead of accepting LLM output as the end product, submit it to a symbolic logic system for abstraction.

The resulting abstractions enable users to apply creative and critical thinking to derive different concrete cases, and the abstraction step strips the bias and misinformation from the LLM's output. Symbolic logic turns such mechanical answers into an abstract model for use in augmented intelligence. Typical applications include think tanks, operational strategic planning, product marketing, and financial planning.
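The pipeline described above can be sketched in a few lines. This is a minimal illustration only: the naive subject-verb-object rule below is a hypothetical stand-in for a real symbolic logic system, and the function name `extract_triples` is an assumption, not any actual product API.

```python
# Minimal sketch: pass LLM output through an abstraction step instead of
# treating it as the end product. The extraction rule here is deliberately
# naive and stands in for a real symbolic logic system.

def extract_triples(sentences):
    """Abstract each 'SUBJECT VERB OBJECT...' sentence into a symbol tuple."""
    triples = []
    for sentence in sentences:
        words = sentence.rstrip(".").split()
        if len(words) >= 3:
            # Naive rule: first word = subject, second = relation,
            # remainder = object. A real system would parse, not split.
            triples.append((words[0], words[1], " ".join(words[2:])))
    return triples

# Hypothetical LLM output, reduced to abstractions rather than consumed as-is.
llm_output = [
    "Tesla builds chargers.",
    "Intel discloses losses.",
]
print(extract_triples(llm_output))
```

The point is the shape of the pipeline, not the parser: the human consumes the abstract tuples, not the LLM's authoritative-sounding prose.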

  

ELAINE - An Implementation of Symbolic Logic

ELAINE is an implementation of symbolic logic. It uses "Context Discriminant Calculus" to perform abstraction. When output from an LLM such as ChatGPT or BARD is given to ELAINE for analysis, its output includes tuples of symbols that represent abstractions relevant to the input prompt. Strategists and planners can then use these abstractions as seeds for their creative thoughts.
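To make the "tuples as seeds" idea concrete, the sketch below groups symbol tuples by their relation so a planner can scan related cases together. The tuple format and the function `group_by_relation` are illustrative assumptions, not ELAINE's actual output format or API.

```python
# Hypothetical shape of tuple-of-symbols output: (subject, relation, object).
# Grouping by relation lets a strategist browse abstractions as seed themes.
from collections import defaultdict

def group_by_relation(tuples):
    """Index (subject, relation, object) tuples by their relation symbol."""
    groups = defaultdict(list)
    for subject, relation, obj in tuples:
        groups[relation].append((subject, obj))
    return dict(groups)

# Illustrative abstractions, loosely echoing the journal headlines below.
abstractions = [
    ("Tesla", "pullback", "EV chargers"),
    ("Intel", "loss", "chip-making unit"),
    ("oil-companies", "expansion", "offshore drilling"),
]
print(group_by_relation(abstractions))
```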

ELAINE JOURNAL

Current Day Analysis:

  • Energy Scientists Say They Have Solved the Centuries-Old Mystery of Why Ultra-Thin Sheets of Gold Glow.
  • Russia's Anti-Satellite Nuke Could Leave Lower Orbit Unusable, Test Vehicle May Already Be Deployed.
  • Scientists Keep Finding Giant Sinkholes in China that Hide Ancient Forests, Unknown Wildlife, and Long ...
  • How Scammers Are Stealing Food Stamps From Struggling Americans
  • Furor Over U.S. Steel Bid Puts Secretive Government Panel In Spotlight
  • Paramount Will Let Exclusive Talks With Skydance Lapse, Imperiling Deal
  • Lawsuit Accuses Everton Bidder 777 Partners of $600 Million Fraud
  • Tesla Pullback Puts Onus on Others to Build Electric Vehicle Chargers
  • Trump's Scandals Captivate the Courtroom, but Case Hangs on Dry Details
  • 2 new COVID variants called 'FLiRT' are spreading in the U.S. What are the symptoms? ...
  • Americans have tipping fatigue. Domino's thinks it has the answer
  • Intel discloses $7 billion operating loss for chip-making unit.
  • Japan's finance minister says yen intervention may be necessary when there are 'excessive' moves ...
  • Lindt & Sprüngli expands cocoa processing plant
  • President of Patek Philippe: 'I am, by far, the watchmaker who knows his customers best'
  • The Fed Is Looking for a Job Market Cool-Down. It Just Got One.
  • TikTok Tells Advertisers: 'We Are Not Backing Down'
  • Google Employees Tune Out Antitrust Threat as Trial Comes to a Head
  • Oil Companies Expand Offshore Drilling, Pointing to Energy Needs
  • A New Issue Flares in the 2024 Race: Campus Protests