
Why GLM-5.1 is Our Default Model

When we built SANDCODE, choosing the default model was one of our most important decisions. We needed something that could handle the complex reasoning, code generation, and tool-use patterns that make an AI coding assistant effective.

Why GLM-5.1

GLM-5.1 is z.ai’s latest flagship model, purpose-built for agent-oriented applications. Here’s why it fits SANDCODE:

200K context window

Large codebases need large context. GLM-5.1’s 200K token context window means SANDCODE can understand more of your project at once — fewer “I can’t see that file” moments.

Tool use and function calling

SANDCODE relies heavily on structured tool calls (Edit, Bash, Read, etc.). GLM-5.1 has native support for function calling and tool use, making it a natural fit for the agentic workflow.
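To make that concrete, here is a minimal sketch of what dispatching a structured tool call can look like, assuming an OpenAI-style payload where the model returns a tool name plus JSON-encoded arguments. The handler below is an illustrative stand-in, not SANDCODE’s actual implementation:

```python
import json

def read_file(path: str) -> str:
    # Illustrative handler for a "Read" tool call.
    with open(path) as f:
        return f.read()

# Registry mapping tool names to handlers; "Bash", "Edit", etc.
# would register here the same way.
TOOLS = {
    "Read": lambda args: read_file(args["path"]),
}

def dispatch(tool_call: dict) -> str:
    # The model emits {"name": ..., "arguments": "<json string>"};
    # we decode the arguments and route to the matching handler.
    name = tool_call["name"]
    args = json.loads(tool_call["arguments"])
    return TOOLS[name](args)
```

Because the model emits tool calls in a structured format rather than free text, the agent loop stays simple: parse, dispatch, feed the result back.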

API compatibility

z.ai’s API is compatible with standard SDK formats, making integration straightforward.
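In practice, compatibility means the same chat-completions request body works regardless of which provider sits behind the endpoint; only the base URL and API key change. A minimal sketch of that shared payload shape (the field names follow the common OpenAI-style convention, not a z.ai-specific format):

```python
def build_request(model: str, prompt: str) -> dict:
    # The same body shape is accepted by any OpenAI-compatible
    # chat-completions endpoint; swapping providers means swapping
    # the URL and key, not rewriting this code.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("GLM-5.1", "Explain this stack trace")
```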

Cost efficiency

GLM-5.1 offers competitive performance at a fraction of the cost of other flagship models. This matters for a tool that can consume hundreds of thousands of tokens per session.

Not locked in

The beauty of SANDCODE’s architecture is that the model is configurable. GLM-5.1 is the default, but you can swap in any compatible provider. The config hierarchy lets you set a global default and override it per-project.
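The override logic can be sketched in a few lines: the per-project setting wins when present, and the global default applies otherwise. The config keys here are illustrative, not SANDCODE’s actual schema:

```python
def resolve_model(global_cfg: dict, project_cfg: dict) -> str:
    # Per-project config takes precedence; fall back to the
    # global default, then to GLM-5.1 if nothing is set.
    return project_cfg.get("model") or global_cfg.get("model", "GLM-5.1")
```

This kind of layered lookup is what lets one machine-wide default coexist with per-repository overrides.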

Try it yourself

sandcode --model GLM-5.1

See the Providers & Models docs for configuration details.