SPL Lexer

A lexer (tokenizer) written in Go that breaks custom query-language input into a stream of tokens.

Overview

This project implements a lexical analyzer that converts input strings into tokens for a custom language. The lexer recognizes various token types including symbols, operators, keywords, literals, and punctuation.
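
For example, an input such as count >= 10 and not error might be tokenized roughly as follows (the token names are listed under Token Types below; the exact output representation depends on the implementation):

SYMBOL("count") SPACE MORE_EQUAL SPACE NUMBER("10") SPACE AND SPACE NOT SPACE SYMBOL("error")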

Features

  • Tokenizes input strings into meaningful components
  • Supports various token types:
    • Symbols (identifiers)
    • Operators (comparison, logical)
    • Literals (strings, numbers)
    • Punctuation (braces, colons, pipes)
    • Comments
    • Whitespace

Token Types

The lexer supports the following token types:

  • SYMBOL: Identifiers
  • COLON: :
  • EXCLAMATION: !
  • EQUAL: =
  • NOT_EQUAL: !=
  • AND: and
  • NOT: not
  • OR: or
  • MORE: >
  • LESS: <
  • MORE_EQUAL: >=
  • LESS_EQUAL: <=
  • OPEN_BRACE: (
  • CLOSED_BRACE: )
  • COMMENT: // comments
  • PIPE: |
  • NUMBER: Integer numbers
  • FLOAT_NUMBER: Floating-point numbers
  • STRING_LITERAL: String literals in quotes
  • SPACE: Whitespace
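
A minimal sketch of how these token types might be modeled in lexer/tokens.go (the names mirror the list above, but the concrete declarations and field names are assumptions, not the repository's actual code):

package lexer

// TokenType enumerates the kinds of tokens the lexer emits.
type TokenType int

const (
    SYMBOL TokenType = iota // identifiers
    COLON                   // :
    EXCLAMATION             // !
    EQUAL                   // =
    NOT_EQUAL               // !=
    AND                     // and
    NOT                     // not
    OR                      // or
    MORE                    // >
    LESS                    // <
    MORE_EQUAL              // >=
    LESS_EQUAL              // <=
    OPEN_BRACE              // (
    CLOSED_BRACE            // )
    COMMENT                 // // comments
    PIPE                    // |
    NUMBER                  // integer numbers
    FLOAT_NUMBER            // floating-point numbers
    STRING_LITERAL          // quoted strings
    SPACE                   // whitespace
)

// Token pairs a token type with the source text it matched.
type Token struct {
    Type  TokenType
    Value string
}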

Usage

To use the lexer, call the Parse function from the lexer package (the example below assumes Parse returns a slice of tokens that can be iterated over):

package main

import (
    "fmt"

    "github.com/e1lama/spl/lexer"
)

func main() {
    // Parse turns the input string into a sequence of tokens.
    tokens := lexer.Parse(`name = "value" and count >= 10`)

    // Process the tokens, for example by printing each one.
    for _, token := range tokens {
        fmt.Println(token)
    }
}
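
Because SPACE and COMMENT are emitted as ordinary tokens, a consumer that only cares about meaningful syntax will usually filter them out first. A small sketch, assuming the exported Token type and constants from the sketch above:

// significant drops whitespace and comment tokens, keeping the rest in order.
func significant(tokens []lexer.Token) []lexer.Token {
    out := make([]lexer.Token, 0, len(tokens))
    for _, token := range tokens {
        if token.Type == lexer.SPACE || token.Type == lexer.COMMENT {
            continue // lexical trivia; not needed by later stages
        }
        out = append(out, token)
    }
    return out
}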

Project Structure

  • main.go - Main entry point (currently empty)
  • lexer/lexer.go - Core lexer implementation
  • lexer/tokens.go - Token definitions and types

License

MIT
