
The Ultimate AI Showdown: Chat GPT vs. Google Bard | Ep. 27

Updated: Jun 22, 2023


In the latest episode of the podcast, Steven Faust delves into the complicated world of AI and the challenges that come with regulating it. He raises concerns about the difficulty of getting universal buy-in for regulating AI, given the potential for companies to secretly advance their technologies during a regulatory pause. Additionally, he suggests the creation of a government department to study and monitor AI as a possible way to balance oversight and innovation.

One of the key takeaways from this episode is the complexity of regulating AI. Faust compares it to regulating nuclear proliferation, saying the potential for harm is similarly high.

The underlying issue is that technology evolves so quickly that traditional regulatory policies can't keep up. It becomes challenging to identify where the line is between innovation and regulation. Faust also points out that technology has always had the potential to go wrong.

However, the potential for danger with AI is greater because of its ability to operate on a scale that humans cannot comprehend. It's essential to identify which sectors are most vulnerable to harm and put in place frameworks to protect them.

Another point Faust raises is the challenge of holding Chat GPT accountable for suppressing information and manipulating data. The difficulty lies in assigning legal responsibility, especially since it is hard to determine whether the fault lies with the AI or with its human creators. He notes that there has to be a middle ground where sensitive data can still be used for research and development.

Still, the risks must be mitigated and legal responsibilities established. Considering these challenges, Faust suggests the creation of a government department solely dedicated to studying and monitoring AI. The focus would be to track the evolution of AI and identify where risks lie.

Additionally, it would ensure that new ideas are implemented with oversight, traditional regulations, and ethical considerations. This balance would support innovation while protecting consumers, the community, and the environment.

Undoubtedly, AI is a rapidly evolving field with the potential for significant benefits to humanity. It could also be catastrophic, particularly if not appropriately regulated. The appropriate steps need to be taken to prepare for the possible dangers that come with AI. This will require a concerted effort by governments, businesses, and regulators to work together, make informed decisions, and balance the risks against the possible benefits of AI.

One of the essential things to consider when dealing with AI is openness. Generally, an open-source approach to AI development would be more successful: it would allow the community to have input, and progress could be made based on the community's consensus. Unfortunately, OpenAI, the company behind Chat GPT, is no longer as open about its development as it once was.

Contributions to the platform are limited in scope, and outsiders cannot improve its algorithms. Faust then notes that the core business of AI tools is queries, or search requests: the AI experience can be considered a more intuitive version of Google search. However, unlike Google's rankings, which weigh many signals, people want a definitive answer from a bot. This can be challenging for AI because different systems can return conflicting results, leading to questions of accuracy and objectivity.

To truly address these concerns, one would need a mechanism that combines input from various AI systems to balance out the inherent bias that comes with any single design. This would allow for more balanced results for users who want a middle ground.
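One simple form of such cross-system balancing is an ensemble vote: send the same query to several AI systems and only surface an answer that a majority of them agree on. The sketch below is a minimal illustration in Python; the model responses are hard-coded stand-ins, since the point here is the aggregation step, not any particular vendor's API.

```python
from collections import Counter

def aggregate_answers(answers):
    """Return the answer given by a majority of systems, or None if no consensus."""
    counts = Counter(answers)
    best, freq = counts.most_common(1)[0]
    if freq > len(answers) / 2:
        return best
    return None  # no majority: flag the query for human review instead

# Hypothetical responses from three different AI systems to the same query.
responses = ["Paris", "Paris", "Lyon"]
print(aggregate_answers(responses))  # prints "Paris"
```

In practice the comparison would need to be fuzzier than exact string matching, but the design choice is the same: no single system's bias decides the answer alone, and disagreement is surfaced rather than hidden.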


In conclusion, AI has the potential for significant benefits, and it is here to stay. However, it also has the potential for substantial destruction and harm. We need to find a balance between innovation and regulation, ensuring that AI is used for the greater good while protecting the vulnerable sectors of society.

Faust's suggestion of a dedicated department to study and monitor AI is an excellent idea and would go a long way toward achieving this balance.

As Faust aptly put it: "AI is evolving more rapidly than traditional regulatory policies can keep up with. It would be challenging to monitor, regulate, and ensure compliance with emerging AI systems." Another great quote from the podcast is: "The key to success for AI lies in finding the middle ground between effective and regulated innovation."

Troy Vermillion - Technology, Human Resources, and Benefits Expert


