LogQL Examples and Queries
LogQL: Mastering Log Queries in Loki
LogQL is the query language for Loki, Grafana's log aggregation system. It allows you to efficiently search, filter, and analyze your logs. This page provides a comprehensive collection of LogQL examples to help you master log querying.
LogQL Query Examples
Explore various LogQL query patterns for different use cases:
Basic Log Filtering
View all logs for a specific job:
{job="dev/logs"}
View log lines for a specific filename within a job:
{job="dev/logs", filename="/var/log/app.log"}
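Label matchers also accept regular expressions with =~ and !~. A sketch matching every job under a hypothetical dev/ prefix while excluding one filename (both values are assumptions, not labels from the examples above):

```logql
{job=~"dev/.*", filename!="/var/log/other.log"}
```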
Content-Based Filtering
Search for logs with an exact match for a string:
{job="dev/logs"} |= "This is a test"
Perform a case-insensitive search (note the |~ regex filter operator — the (?i) flag only works with regular-expression matchers, not the exact-match |= operator):
{job="dev/logs"} |~ "(?i)this is a test"
Exclude logs with a specific string:
{job="dev/logs"} |= "This is a test" != "testerId=123"
Include logs matching a pattern and exclude others:
{job="dev/logs"} |= "This is a test" != "testerId=123" |~ "accountId=000"
Combine multiple inclusion and exclusion filters:
{job="dev/logs"} |= "This is a test" |= "accountId=000" !~ "deviceId=(001|209)"
Metric Queries
Calculate log events per container name:
sum by(container_name) (rate({job="prod/dockerlogs"}[1m]))
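Where rate() gives log lines per second, count_over_time() returns the raw count of matched lines in the range. A sketch counting error lines per container (the "error" line filter is an assumption about the log content):

```logql
sum by(container_name) (count_over_time({job="prod/dockerlogs"} |= "error" [5m]))
```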
LogQL Parser Examples
Parse JSON logs and filter on an extracted numeric field:
{job="adsb"} | json | gs > 500
Calculating throughput with logfmt parsing:
sum by (query) (avg_over_time({job="dev/app"} |= "caller=metrics.go" | logfmt | duration > 100ms | unwrap throughput_mb[1m]))
Filtering and formatting logs with logfmt:
{job="dev/app"} |= "caller=metrics.go" | logfmt | throughput_mb < 100 and duration >= 200ms | line_format "{{.duration}}{{.query}}"
Accessing specific fields with logfmt:
{compose_service="loki", job="dockerlogs"} | logfmt | read >= 0
Extracting a specific field with logfmt:
{compose_service="loki",job="dockerlogs"} | logfmt | read >= 0 | line_format "{{.level}}"
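Numeric fields extracted by logfmt can also be unwrapped into sample values for range aggregations. A sketch computing the 99th percentile of the read field from the example above (assuming it is numeric):

```logql
quantile_over_time(0.99, {compose_service="loki", job="dockerlogs"} | logfmt | unwrap read [5m]) by (compose_service)
```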
Complex filtering with JSON parsing and line formatting:
{container_name=~"ecs-.*-nginx-.*"}
| json
| status=~"(200|4..)" and request_length>250 and request_method!="POST" and xff=~"(54.*|34.*)"
| line_format "ReqMethod: {{.request_method}}, Status: {{.status}}, UserAgent: {{.http_user_agent}} Args: {{.args}} , ResponseTime: {{.responsetime}}"
Regex in LogQL
Leverage regular expressions for advanced log parsing and extraction.
Basic Regex Extraction
Example log line:
1.2.3.4 - - [23/Nov/2020:17:31:00 +0200] "POST /foo/bar?token=x.x HTTP/1.1" 201 83 "http://localhost/" "Mozilla/5.0 (Linux; Android 10; Nokia 6.1 Build/x.x.x; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/x.0.x.110 Mobile Safari/537.36" "1.2.3.4"
Regex to extract fields like IP, date, verb, path, etc.:
{job="prod/logs"} |~ "foo" | regexp "(?P<ip>\\d+\\.\\d+\\.\\d+\\.\\d+) (.*) (.*) (?P<date>\\[(.*)\\]) (\")(?P<verb>(\\w+)) (?P<request_path>([^\"]*)) (?P<http_ver>([^\"]*))(\") (?P<status_code>\\d+) (?P<bytes>\\d+) (\")(?P<referrer>(([^\"]*)))(\") (\")(?P<user_agent>(([^\"]*)))(\")"
Filtering extracted fields:
{job="prod/logs"} |~ "doAPICall" | regexp "(?P<ip>\\d+\\.\\d+\\.\\d+\\.\\d+) (.*) (.*) (?P<date>\\[(.*)\\]) (\")(?P<verb>(\\w+)) (?P<request_path>([^\"]*)) (?P<http_ver>([^\"]*))(\") (?P<status_code>\\d+) (?P<bytes>\\d+) (\")(?P<referrer>(([^\"]*)))(\") (\")(?P<user_agent>(([^\"]*)))(\")" | status_code=201
Metric aggregation based on regex extraction:
sum by (ip) (count_over_time({job="dev/nginx", host="localhost"}
| regexp `(?P<ip>\S+) (?P<identd>\S+) (?P<user>\S+) \[(?P<timestamp>[\w:\/]+\s[+\-]\d{4})\] "(?P<action>\S+)\s?(?P<path>\S+)\s?(?P<protocol>\S+)?\" (?P<status>\d{3}|-) (?P<size>\d+|-)\s?"(?P<referrer>[^\"]*)"?\s?"(?P<useragent>[^\"]*)?"?`
| referrer=~"(http|https)://10.21.2.42:(80|443)/(.+)"[60s]))
More Regex Examples
Nginx access log parsing:
# logline
172.10.10.2 - - [10/Jun/2022:09:44:03 +0000] "GET /en/home HTTP/1.1" 200 5434 "https://example.com/en/direct/gohome" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.61 Safari/537.36"
# regex
{job="prod/nginx"}
| regexp `(?P<ip>\S+) (?P<identd>\S+) (?P<user>\S+) \[(?P<timestamp>[\w:\/]+\s[+\-]\d{4})\] "(?P<action>\S+)\s?(?P<path>\S+)\s?(?P<protocol>\S+)?\" (?P<status>\d{3}|-) (?P<size>\d+|-)\s?"(?P<referrer>[^\"]*)"?\s?"(?P<useragent>[^\"]*)?"?`
Structured log parsing with regex:
# logline
[2022-06-10 10:44:45] dev.INFO: {"logType":"Event","logTag":"SomeResponse","description":"SomeClientResponse: handleSomeRequest","attributes":{"foo":"bar"}}
# regex
{job="prod/logs"} | regexp `\[(?P<timestamps>(.*))\] (?P<environment>(prod|dev)).(?P<loglevel>(INFO|DEBUG|ERROR|WARN)): (?P<jsonstring>(.*))`
Combining regex, JSON parsing, and line formatting:
# message: EventID:12345678-1234-1234-1234-123456789abv - Some troubleshooting needed - runbookId:123456, categoryCode:DB, cpuValue:92.000000000000000000, serverName:Server01, versionId:1
{job="serverlogs"} |= "logTag:event" | json | line_format "{{.message}}" | regexp "EventID:(?P<event_id>[a-f0-9\\-]+) - Some troubleshooting needed - runbookId:(?P<runbook_id>\\d+), categoryCode:(?P<category_code>\\w+), cpuValue:(?P<cpu_value>[\\d\\.]+), serverName:(?P<server_name>[^,]+), versionId:(?P<version_id>\\d+)"
Accessing JSON Data
Parse and query structured JSON logs effectively.
Parsing and Accessing JSON Fields
If your log contains JSON data after a specific marker:
{job="dev/logs"} | regexp `\[(?P<timestamps>(.*))\] (?P<environment>(prod|dev)).(?P<loglevel>(INFO|DEBUG|ERROR|WARN)): (?P<jsonstring>(.*))` | line_format "{{.jsonstring}}" | json | __error__ != "JSONParserErr"
Filtering based on JSON keys and values:
{job="dev/logs"} | regexp `\[(?P<timestamps>(.*))\] (?P<environment>(prod|dev)).(?P<loglevel>(INFO|DEBUG|ERROR|WARN)): (?P<jsonstring>(.*))` | line_format "{{.jsonstring}}" | json | __error__ != "JSONParserErr" | logTag="BalanceCheck"
Querying numeric JSON fields:
{job="dev/logs"} | regexp `\[(?P<timestamps>(.*))\] (?P<environment>(prod|dev)).(?P<loglevel>(INFO|DEBUG|ERROR|WARN)): (?P<jsonstring>(.*))` | line_format "{{.jsonstring}}" | json | __error__ != "JSONParserErr" | logTag="BalanceCheck" | balance > 100
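An extracted numeric JSON field can likewise be unwrapped into a metric. A sketch averaging the balance field over five-minute windows (the regexp and line_format stages from the query above are omitted here for brevity; __error__="" drops lines where extraction failed):

```logql
avg_over_time({job="dev/logs"} | json | logTag="BalanceCheck" | unwrap balance | __error__="" [5m])
```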
Line Formatting with JSON
Format logs with extracted JSON fields:
{job="containerlogs"} | json | line_format "timestamp={{ .time }} source_ip={{ .req_headers_x_real_ip }} method={{ .req_method }} path={{ .req_url }} status_code={{ .res_statusCode }}"
Summing metrics based on JSON fields:
sum by (res_statusCode) (rate({job="containerlogs"} | json | line_format "timestamp={{ .time }} source_ip={{ .req_headers_x_real_ip }} method={{ .req_method }} path={{ .req_url }} status_code={{ .res_statusCode }}"[60s]))
Accessing Nested JSON
Querying deeply nested JSON structures:
{namespace=~"$namespace", app=~"$app", pod=~"$pod"} | json | line_format "{{.log}}" | json raw_body="message" | line_format "m: {{.raw_body}}" | __error__ != "JSONParserErr"
Filtering out JSON parsing errors:
{container="my-service"} |= "" | json | __error__ != "JSONParserErr" | line_format "{{.message}} {{.stacktrace}}"
Extracting nested JSON key-value pairs:
{container="my-service"}
| pattern "<_entry>"
| json
| line_format "{{ .message }}\n{{ range $k, $v := (fromJson ._entry)}}{{if ne $k \"message\"}}{{$k}}: {{$v}} {{ end }}{{ end }}"
Line Format Utility
Control the output format of your log lines.
Conditional Formatting
Include a stacktrace only if the key exists:
{container="my-service"} |= "" | json | line_format `{{.message}} {{if .stacktrace }} {{- .stacktrace -}} {{else}} {{- "-" -}} {{end}}`
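line_format templates also support text-manipulation functions such as ToUpper and ToLower. A sketch normalizing an assumed level field before printing the message:

```logql
{container="my-service"} | json | line_format "{{ .level | ToUpper }}: {{ .message }}"
```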
Advanced Metric Queries
Create powerful metrics from your log data.
Counting and Aggregating Logs
Count log events within a time range and group by application:
sum by (app)(count_over_time({app=~"my-service"} | json | line_format "{{.log}}" |~ "Unable to acquire"[$__interval]))
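Aggregations like the one above can be wrapped in topk to surface only the noisiest sources. A sketch showing the three apps emitting the most matching lines (the "Unable to acquire" filter reuses the pattern above):

```logql
topk(3, sum by (app) (count_over_time({app=~"my-service"} |~ "Unable to acquire" [5m])))
```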