# Numbers

GX uses explicit-width number types. There is no ambiguous `int` or `float`: you always know exactly how big your data is.
## Integer Types
| Type | Size | Range |
|---|---|---|
| `i8` | 1 byte | -128 to 127 |
| `i16` | 2 bytes | -32,768 to 32,767 |
| `i32` | 4 bytes | -2.1 billion to 2.1 billion |
| `i64` | 8 bytes | -9.2 quintillion to 9.2 quintillion |
| `u8` | 1 byte | 0 to 255 |
| `u16` | 2 bytes | 0 to 65,535 |
| `u32` | 4 bytes | 0 to 4.2 billion |
| `u64` | 8 bytes | 0 to 18.4 quintillion |
`i32` is the default: when you write `var x = 10`, that's an `i32`.
```
fn main() {
    var a = 42                // i32 (default)
    var b:i8 = 100            // fits in a byte
    var c:u8 = 255            // unsigned byte (max value)
    var big:i64 = 9999999999  // too big for i32
    print("a = {a}\n")
    print("b = {b}\n")
    print("c = {c}\n")
    print("big = {big}\n")
}
```
## Float Types
| Type | Size | Precision | Use for |
|---|---|---|---|
| `f32` | 4 bytes | ~7 decimal digits | Graphics, games, most math |
| `f64` | 8 bytes | ~15 decimal digits | Scientific computing, precision |
GX infers the float type from how many digits you write:
```
fn main() {
    var a = 3.14               // f32 (7 or fewer digits)
    var b = 3.14159265358979   // f64 (more than 7 digits)
    print("a (f32) = {a}\n")
    print("b (f64) = {b}\n")
}
```
The rule is simple: 7 or fewer decimal digits gives you `f32`; more than 7 gives you `f64`. No suffixes needed.
You can always be explicit:
```
fn main() {
    var x:f32 = 1.0
    var y:f64 = 1.0
    print("f32: {x}\n")
    print("f64: {y}\n")
}
```
## Hex Literals

Prefix with `0x` for hexadecimal:
```
fn main() {
    var color = 0xFF00FF
    var mask:u8 = 0x0F
    var addr = 0xDEADBEEF
    print("color = {color}\n")
    print("mask = {mask}\n")
    print("addr = {addr}\n")
}
```
## Arithmetic
All the operators you’d expect:
```
fn main() {
    var a = 10
    var b = 3
    print("a + b = {a + b}\n")   // 13
    print("a - b = {a - b}\n")   // 7
    print("a * b = {a * b}\n")   // 30
    print("a / b = {a / b}\n")   // 3 (integer division)
    print("a % b = {a % b}\n")   // 1 (remainder)
}
```
## Increment and Decrement
```
fn main() {
    var x = 10
    x++
    print("After x++: {x}\n")   // 11
    x--
    print("After x--: {x}\n")   // 10
}
```
## Compound Assignment
Shorthand for modify-and-assign:
```
fn main() {
    var x = 100
    x += 10   // x = x + 10
    x -= 5    // x = x - 5
    x *= 2    // x = x * 2
    x /= 3    // x = x / 3
    print("x = {x}\n")   // 70
}
```
## Casting

Convert between types explicitly with `(type)value`:
```
fn main() {
    var x:i32 = 42
    var y = (f64)x
    print("x (i32) = {x}\n")
    print("y (f64) = {y}\n")
    var big:f64 = 3.99
    var truncated = (i32)big
    print("truncated = {truncated}\n")   // 3 (truncates, doesn't round)
}
```
Try it: experiment with different number types in the Playground.
## Expert Corner
Why explicit-width types? When you write `int` in C, its size depends on the platform (16-bit on some embedded systems, 32-bit on most, potentially 64-bit). GX eliminates this: `i32` is always 32 bits, everywhere. This matters for serialization, network protocols, and cross-platform code.
The float inference rule (digit count) is a GX innovation. In C, `3.14` is always `double` and you need `3.14f` for `float`. In GX, the number of digits you write signals your precision intent. Writing `3.14` means "I only care about a few digits", so you get `f32`. Writing `3.14159265358979` means "I need precision", so you get `f64`. No suffixes to remember.
C ABI compatibility types exist for when you're calling C libraries: `c_int`, `c_uint`, `c_long`, `c_ulong`, `c_float`, `c_double`, `c_char`, `cstr`. These match whatever the C compiler uses on your platform. You'll rarely need them unless you're writing FFI bindings.
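A minimal sketch of what declaring these looks like, assuming the C ABI types use the same `var name:type` annotation syntax as the built-in types (the actual FFI surface for calling into C is not covered in this chapter):

```
fn main() {
    var n:c_int = 42       // whatever width C's int has on this platform
    var x:c_double = 2.5   // matches C's double
    print("n = {n}\n")
    print("x = {x}\n")
}
```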
Integer overflow wraps around (two's complement), the behavior C guarantees for its unsigned types. GX trusts you to pick the right size for your data. If you need 64-bit range, use `i64`. If a byte is enough, use `i8` or `u8` and save memory in arrays.
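For example, pushing a `u8` one past its maximum wraps it back around to zero. A small sketch of the wrapping behavior described above, using only syntax shown earlier in this chapter:

```
fn main() {
    var x:u8 = 255       // u8 maximum
    x++                  // wraps: 255 + 1 rolls over to 0
    print("x = {x}\n")   // 0
}
```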