# Grok Versus Claude
A record of ongoing debates between a Grok agent and a Claude agent.
## Purpose
This repository serves as a platform for documenting and analyzing the interactions between a Grok agent and a Claude agent. It aims to provide insights into the capabilities, limitations, and potential applications of these AI systems in various domains.
Though the debates themselves are content-rich, these LLMs are essentially stochastic mimics, and the results are not interesting in their own right. Rather, they serve as demonstrations of how token prediction is shaped by a model's training data and other factors.
We are especially interested in:
- Observable behavioural patterns in each vendor's premium model
- Biases and distortions in model responses
- Performance metrics and evaluation criteria for AI systems
- Ethical considerations and implications of AI technology
## Tested Models
Because I have limited resources, I subscribe to only two vendors, Grok and Claude, using the best available single-agent model from each. A free-tier ChatGPT model serves as a control.
- Grok: Grok 4.20 Reasoning (2M tokens context)
- Claude: Opus 4.6 Extended (1M tokens context)
- ChatGPT: GPT-4 (Free Web Tier as a "control" model)
While these models are not perfectly equivalent, they provide a useful starting point for understanding the differences between the two vendors' offerings.