Readers API Reference
Data source readers for various formats.
BaseReader
Base class for all data source readers
Readers are responsible for:

1. Reading data from a source (file, URL, database, etc.)
2. Yielding rows as dictionaries (lazy evaluation)
3. Optionally supporting predicate pushdown
4. Optionally supporting column pruning
Source code in sqlstream/readers/base.py
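As a sketch of the contract, a minimal reader might look like this (hedged: it assumes BaseReader imposes no constructor of its own, and `ListReader` is an illustrative name, not part of sqlstream):

```python
from typing import Any, Iterator

from sqlstream.readers.base import BaseReader


class ListReader(BaseReader):
    """Illustrative reader that serves rows from an in-memory list."""

    def __init__(self, rows: list[dict[str, Any]]):
        self.rows = rows

    def read_lazy(self) -> Iterator[dict[str, Any]]:
        # Yield one row at a time rather than materializing everything
        yield from self.rows
```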
read_lazy
Yield rows as dictionaries
This is the core method that all readers must implement. It should yield one row at a time (lazy evaluation) rather than loading all data into memory.
Yields:
| Type | Description |
|---|---|
| `dict[str, Any]` | Dictionary representing one row of data |
Example
{'name': 'Alice', 'age': 30, 'city': 'NYC'}
Source code in sqlstream/readers/base.py
supports_pushdown
Does this reader support predicate pushdown?
If True, the query optimizer can call set_filter() to push WHERE conditions down to the reader for more efficient execution.
Returns:
| Type | Description |
|---|---|
| `bool` | True if predicate pushdown is supported |
Source code in sqlstream/readers/base.py
set_filter
Set filter conditions for predicate pushdown
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `conditions` | `list[Condition]` | List of WHERE conditions to apply during read | *required* |
Note
Only called if supports_pushdown() returns True
Source code in sqlstream/readers/base.py
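The implementer's side of this hook is small; a hedged sketch extending the illustrative ListReader above (the attribute name is an assumption):

```python
class FilteringReader(ListReader):
    def supports_pushdown(self) -> bool:
        return True

    def set_filter(self, conditions) -> None:
        # Stash the conditions; read_lazy consults them while yielding
        self._conditions = conditions
```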
supports_column_selection
Does this reader support column pruning?
If True, the query optimizer can call set_columns() to specify which columns are needed, allowing the reader to skip reading unnecessary columns.
Returns:
| Type | Description |
|---|---|
| `bool` | True if column selection is supported |
Source code in sqlstream/readers/base.py
set_columns
Set which columns to read (column pruning)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `columns` | `list[str]` | List of column names to read | *required* |
Note
Only called if supports_column_selection() returns True
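Column pruning follows the same pattern; a hedged sketch, again building on the illustrative ListReader:

```python
class PruningReader(ListReader):
    _columns = None  # set by the optimizer via set_columns()

    def supports_column_selection(self) -> bool:
        return True

    def set_columns(self, columns: list[str]) -> None:
        self._columns = columns

    def read_lazy(self):
        for row in super().read_lazy():
            if self._columns is not None:
                # Drop columns the query does not need
                row = {k: v for k, v in row.items() if k in self._columns}
            yield row
```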
supports_limit
Does this reader support early termination with LIMIT?
If True, the query optimizer can call set_limit() to specify the maximum number of rows to read, allowing early termination.
Returns:
| Type | Description |
|---|---|
| `bool` | True if limit pushdown is supported |
Source code in sqlstream/readers/base.py
set_limit
Set maximum number of rows to read (limit pushdown)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `limit` | `int` | Maximum number of rows to yield | *required* |
Note
Only called if supports_limit() returns True. The reader should stop yielding rows after 'limit' rows.
Source code in sqlstream/readers/base.py
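Early termination is the point of this hook; a hedged sketch of a read_lazy that honors it (`_limit` and `_source_rows` are illustrative names, assuming set_limit stored the value in `_limit`):

```python
def read_lazy(self):
    count = 0
    for row in self._source_rows():
        if self._limit is not None and count >= self._limit:
            return  # stop touching the source entirely
        yield row
        count += 1
```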
supports_partition_pruning
Does this reader support partition pruning?
If True, the query optimizer can call set_partition_filters() to specify which partitions to read based on filter conditions.
Returns:
| Type | Description |
|---|---|
| `bool` | True if partition pruning is supported |
Source code in sqlstream/readers/base.py
get_partition_columns
Get partition column names for Hive-style partitioning
Returns:
| Type | Description |
|---|---|
| `set` | Set of partition column names (e.g., `{'year', 'month', 'day'}`) |
| `set` | Empty set if not partitioned |
Example
For the path s3://bucket/data/year=2024/month=01/data.parquet, returns {'year', 'month'}.
Source code in sqlstream/readers/base.py
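A free-standing sketch of how Hive-style partition columns can be recovered from a path (this is the idea only, not sqlstream's implementation):

```python
import re

def partition_columns_from_path(path: str) -> set[str]:
    # Hive-style layouts encode partitions as key=value path segments
    return {m.group(1) for m in re.finditer(r"([^/=]+)=[^/]+", path)}

partition_columns_from_path("s3://bucket/data/year=2024/month=01/data.parquet")
# -> {'year', 'month'}
```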
set_partition_filters
Set filter conditions for partition pruning
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `conditions` | `list[Condition]` | List of WHERE conditions on partition columns | *required* |
Note
Only called if supports_partition_pruning() returns True. The reader should skip partitions that don't match these conditions.
Source code in sqlstream/readers/base.py
get_schema
Get schema information (column names and types)
Returns:
| Type | Description |
|---|---|
| `Schema \| None` | Schema object with inferred types, or None if schema cannot be inferred |
Note
Optional method. Returns None by default. Readers should override this to provide schema inference.
Source code in sqlstream/readers/base.py
__iter__
to_dataframe
Convert reader content to pandas DataFrame
Returns:
| Type | Description |
|---|---|
| `pandas.DataFrame` | DataFrame containing all data |
Raises:
| Type | Description |
|---|---|
| `ImportError` | If pandas is not installed |
Note
Default implementation iterates over read_lazy() and creates DataFrame. Subclasses should override this for better performance (e.g. using read_csv/read_parquet).
Source code in sqlstream/readers/base.py
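A hedged sketch of what that default amounts to (the real code is in sqlstream/readers/base.py; this simply materializes read_lazy):

```python
def to_dataframe(self):
    try:
        import pandas as pd
    except ImportError as exc:
        raise ImportError("pandas is required for to_dataframe()") from exc
    # Materializes every row; subclasses can do better with native readers
    return pd.DataFrame(list(self.read_lazy()))
```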
CSVReader
Bases: BaseReader
Lazy CSV reader with basic type inference
Features:

- Lazy iteration (doesn't load entire file into memory)
- Automatic type inference (int, float, string)
- Predicate pushdown support
- Column pruning support
Source code in sqlstream/readers/csv_reader.py
__init__
Initialize CSV reader
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str` | Path to CSV file (local or s3://) | *required* |
| `encoding` | `str` | File encoding (default: utf-8) | `'utf-8'` |
| `delimiter` | `str` | CSV delimiter (default: comma) | `','` |
Source code in sqlstream/readers/csv_reader.py
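A minimal usage sketch (file name illustrative; import path assumed from the source location above):

```python
from sqlstream.readers.csv_reader import CSVReader

reader = CSVReader("users.csv")
for row in reader.read_lazy():
    print(row)  # e.g. {'name': 'Alice', 'age': 30}
```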
supports_pushdown
supports_column_selection
supports_limit
set_filter
set_columns
set_limit
read_lazy
Lazy iterator over CSV rows
Yields rows as dictionaries with type inference applied. If filters are set, applies them during iteration. If columns are set, only yields those columns. If limit is set, stops after yielding that many rows.
Source code in sqlstream/readers/csv_reader.py
get_schema
Infer schema by sampling rows from the CSV file
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `sample_size` | `int` | Number of rows to sample for type inference (default: 100) | `100` |
Returns:
| Type | Description |
|---|---|
| `Schema \| None` | Schema object with inferred types, or None if file is empty |
Source code in sqlstream/readers/csv_reader.py
to_dataframe
Convert to pandas DataFrame efficiently, respecting inferred types.
Source code in sqlstream/readers/csv_reader.py
HTMLReader
Bases: BaseReader
Read tables from HTML files or URLs
Extracts all tables from HTML and allows querying them. If multiple tables exist, you can select which one to query.
Example
```python
# Query first table in HTML
reader = HTMLReader("data.html")

# Query a specific table (0-indexed); the parameter is named `table`, per the signature below
reader = HTMLReader("data.html", table=1)

# Query table by matching text
reader = HTMLReader("data.html", match="Sales Data")
```
Source code in sqlstream/readers/html_reader.py
__init__
Initialize HTML reader
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `source` | `str` | Path to HTML file or URL | *required* |
| `table` | `int` | Which table to read (0-indexed, default: 0) | `0` |
| `match` | `str \| None` | Text to match in table (tries to find table containing this text) | `None` |
| `**kwargs` | | Additional arguments passed to pandas read_html | `{}` |
Source code in sqlstream/readers/html_reader.py
read_lazy
Read data lazily from the selected table
Source code in sqlstream/readers/html_reader.py
get_schema
Get schema from the selected table
Source code in sqlstream/readers/html_reader.py
supports_pushdown
supports_column_selection
set_filter
set_columns
list_tables
List all tables found in the HTML
Returns:
| Type | Description |
|---|---|
| `list[str]` | List of table descriptions (first few column names) |
Source code in sqlstream/readers/html_reader.py
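A sketch of inspecting a multi-table page before choosing one (file name and index illustrative):

```python
reader = HTMLReader("report.html")
for i, description in enumerate(reader.list_tables()):
    print(i, description)

# Re-open pointing at the table you picked
reader = HTMLReader("report.html", table=2)
```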
HTTPReader
Bases: BaseReader
Read data from HTTP/HTTPS URLs with intelligent caching
Automatically detects file format (CSV or Parquet) and delegates to appropriate reader. Caches downloaded files to avoid re-downloads.
Example
```python
reader = HTTPReader("https://example.com/data.csv")
for row in reader.read_lazy():
    print(row)
```
Source code in sqlstream/readers/http_reader.py
__init__
__init__(url: str, cache_dir: str | None = None, force_download: bool = False, format: str | None = None, **kwargs)
Initialize HTTP reader
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `url` | `str` | HTTP/HTTPS URL to data file | *required* |
| `cache_dir` | `str \| None` | Directory to cache downloaded files (default: system temp) | `None` |
| `force_download` | `bool` | If True, re-download even if cached | `False` |
| `format` | `str \| None` | Explicit format specification (csv, parquet, html, markdown). If not provided, will auto-detect from URL extension or content. | `None` |
| `**kwargs` | | Additional arguments passed to the delegate reader | `{}` |
Source code in sqlstream/readers/http_reader.py
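A hedged sketch of the caching knobs (directory name illustrative):

```python
reader = HTTPReader(
    "https://example.com/data.csv",
    cache_dir=".http_cache",   # keep downloads next to the project
    force_download=True,       # ignore any cached copy this time
)
```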
read_lazy
Read data lazily, delegating to underlying reader
Source code in sqlstream/readers/http_reader.py
get_schema
supports_pushdown
supports_column_selection
set_filter
set_columns
clear_cache
clear_all_cache
staticmethod
Clear all cached files
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `cache_dir` | `str \| None` | Cache directory to clear (default: system temp) | `None` |
Returns:
| Type | Description |
|---|---|
| `int` | Number of files deleted |
Source code in sqlstream/readers/http_reader.py
JSONReader
Bases: BaseReader
Reader for standard JSON files.
Supports:

- Array of objects: [{"a": 1}, {"a": 2}]
- Object with records key: {"data": [{"a": 1}, ...], "meta": ...}
- Automatic type inference
- Predicate pushdown (filtering in Python)
- Column pruning
Source code in sqlstream/readers/json_reader.py
__init__
Initialize JSON reader
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str` | Path to JSON file | *required* |
| `records_key` | `str \| None` | Key containing the list of records (e.g., "data", "records"). If None, attempts to auto-detect or expects root to be a list. | `None` |
| `encoding` | `str` | File encoding (default: utf-8) | `'utf-8'` |
Source code in sqlstream/readers/json_reader.py
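A usage sketch for a file whose records sit under a "data" key (file name illustrative):

```python
reader = JSONReader("api_response.json", records_key="data")
for row in reader.read_lazy():
    print(row)
```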
read_lazy
Read JSON file and yield records.
Note: Standard JSON parsing loads the whole file into memory. For large files, use JSONL format.
Source code in sqlstream/readers/json_reader.py
get_schema
Infer schema from data
Source code in sqlstream/readers/json_reader.py
JSONLReader
Bases: BaseReader
Reader for JSONL (JSON Lines) files.
Format: {"id": 1, "name": "Alice"}
Features: - True lazy loading (line-by-line) - Handle malformed lines - Predicate pushdown - Column pruning
Source code in sqlstream/readers/jsonl_reader.py
__init__
Initialize JSONL reader
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str` | Path to JSONL file | *required* |
| `encoding` | `str` | File encoding (default: utf-8) | `'utf-8'` |
Source code in sqlstream/readers/jsonl_reader.py
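A usage sketch (file name illustrative; malformed lines are handled per the feature list above):

```python
reader = JSONLReader("events.jsonl")
for row in reader.read_lazy():
    print(row)
```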
read_lazy
Yield rows from JSONL file line by line
Source code in sqlstream/readers/jsonl_reader.py
get_schema
Infer schema by sampling first N lines
Source code in sqlstream/readers/jsonl_reader.py
MarkdownReader
Bases: BaseReader
Read tables from Markdown files
Parses Markdown tables (GFM format) and allows querying them. Supports files with multiple tables.
Example Markdown table
| Name | Age | City |
|---|---|---|
| Alice | 30 | New York |
| Bob | 25 | San Francisco |
Example
```python
reader = MarkdownReader("data.md")
for row in reader.read_lazy():
    print(row)
```
Source code in sqlstream/readers/markdown_reader.py
__init__
Initialize Markdown reader
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `source` | `str` | Path to Markdown file | *required* |
| `table` | `int` | Which table to read if multiple tables exist (0-indexed) | `0` |
Source code in sqlstream/readers/markdown_reader.py
read_lazy
Read data lazily from the selected table
Source code in sqlstream/readers/markdown_reader.py
get_schema
Get schema by inferring types from first few rows
Source code in sqlstream/readers/markdown_reader.py
supports_pushdown
supports_column_selection
set_filter
set_columns
list_tables
List all tables found in the Markdown file
Returns:
| Type | Description |
|---|---|
| `list[str]` | List of table descriptions |
Source code in sqlstream/readers/markdown_reader.py
to_dataframe
Convert to pandas DataFrame
ParallelCSVReader
Parallel CSV reader using chunked reading
Note
This is a placeholder for true parallel CSV reading. Implementing this correctly requires:

- Chunk boundary detection (find newlines)
- Header parsing and schema inference
- Correct line splitting across chunks
- Order preservation
For now, this is just a wrapper around ParallelReader.
Source code in sqlstream/readers/parallel_reader.py
__init__
Initialize parallel CSV reader
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str` | Path to CSV file | *required* |
| `num_threads` | `int` | Number of worker threads | `4` |
| `chunk_size` | `int` | Chunk size in bytes | `1024 * 1024` |
Source code in sqlstream/readers/parallel_reader.py
read_lazy
ParallelParquetReader
Parallel Parquet reader using row group parallelism
Parquet files are naturally parallelizable because:

- Data is split into row groups
- Each row group can be read independently
- PyArrow supports parallel reading natively
Note
This is a placeholder. PyArrow already supports parallel reading via the use_threads parameter in read_table().
For true parallel execution in SQLStream, we would:

1. Read row groups in parallel
2. Apply filters in parallel
3. Merge results in order
Source code in sqlstream/readers/parallel_reader.py
__init__
Initialize parallel Parquet reader
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str` | Path to Parquet file | *required* |
| `num_threads` | `int` | Number of worker threads | `4` |
Source code in sqlstream/readers/parallel_reader.py
read_lazy
ParallelReader
Parallel wrapper for data readers
Wraps any BaseReader and reads data in parallel using a thread pool.
Usage
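A hedged sketch, wrapping the CSVReader documented above (file name and thread count illustrative):

```python
from sqlstream.readers.csv_reader import CSVReader
from sqlstream.readers.parallel_reader import ParallelReader

parallel = ParallelReader(CSVReader("big.csv"), num_threads=4)
for row in parallel.read_lazy():
    print(row)
```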
How it works:

- Producer threads read chunks of data
- Consumer (main thread) yields rows in order
- Queue-based coordination
- Graceful shutdown on completion or error
Source code in sqlstream/readers/parallel_reader.py
__init__
Initialize parallel reader
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `reader` | `BaseReader` | Underlying reader to wrap | *required* |
| `num_threads` | `int` | Number of worker threads | `4` |
| `queue_size` | `int` | Maximum items in queue (backpressure) | `100` |
Source code in sqlstream/readers/parallel_reader.py
read_lazy
Yield rows from parallel reader
Yields:
| Type | Description |
|---|---|
| `dict[str, Any]` | Dictionary representing one row |
Source code in sqlstream/readers/parallel_reader.py
enable_parallel_reading
Enable parallel reading for any reader
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `reader` | `BaseReader` | Reader to wrap | *required* |
| `num_threads` | `int` | Number of worker threads | `4` |
Returns:
| Type | Description |
|---|---|
| `ParallelReader` | Parallel reader wrapper |
Example
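A hedged sketch (file name and thread count illustrative):

```python
from sqlstream.readers.csv_reader import CSVReader
from sqlstream.readers.parallel_reader import enable_parallel_reading

parallel = enable_parallel_reading(CSVReader("big.csv"), num_threads=8)
for row in parallel.read_lazy():
    print(row)
```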
Source code in sqlstream/readers/parallel_reader.py
ParquetReader
Bases: BaseReader
Intelligent Parquet reader with statistics-based optimization
Features:

- Lazy iteration (doesn't load entire file)
- Row group statistics-based pruning (HUGE performance win)
- Column selection (only read needed columns)
- Predicate pushdown with statistics
The key insight: Parquet stores min/max for each column in each row group. We can skip entire row groups if their statistics don't match our filters!
Example
Row Group 1: age [18-30], city ['LA', 'NYC']
Row Group 2: age [31-45], city ['NYC', 'SF']
Row Group 3: age [46-90], city ['LA', 'SF']
Query: WHERE age > 60 → Skip RG1 (max=30), Skip RG2 (max=45), Read RG3 only!
Source code in sqlstream/readers/parquet_reader.py
__init__
Initialize Parquet reader
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str` | Path to Parquet file (local or s3://) | *required* |
Source code in sqlstream/readers/parquet_reader.py
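A usage sketch that surfaces the pruning statistics after a read (file name illustrative):

```python
reader = ParquetReader("events.parquet")
for row in reader.read_lazy():
    pass  # consume rows

print(reader.get_statistics())  # row group pruning counters
```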
supports_pushdown
supports_column_selection
supports_limit
set_filter
set_columns
set_limit
supports_partition_pruning
get_partition_columns
set_partition_filters
Set partition filters and check if this file should be skipped
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `conditions` | `list[Condition]` | List of WHERE conditions on partition columns | *required* |
Source code in sqlstream/readers/parquet_reader.py
read_lazy
Lazy iterator over Parquet rows with intelligent row group pruning
This is where the magic happens:

1. Check partition pruning (skip the entire file if needed!)
2. Select row groups using statistics (skip irrelevant ones!)
3. Read only selected row groups
4. Read only required columns
5. Yield rows as dictionaries
6. Terminate early if the limit is reached
Source code in sqlstream/readers/parquet_reader.py
get_schema
Get schema from Parquet metadata
Returns:
| Type | Description |
|---|---|
| `Schema` | Dictionary mapping column names to types |
Source code in sqlstream/readers/parquet_reader.py
get_statistics
Get statistics about row group pruning
Returns:
| Type | Description |
|---|---|
| `dict[str, Any]` | Dictionary with pruning statistics |
Source code in sqlstream/readers/parquet_reader.py
to_dataframe
Convert to pandas DataFrame efficiently
Source code in sqlstream/readers/parquet_reader.py
XMLReader
Bases: BaseReader
Read tabular data from XML files
Extracts tabular data from XML by finding repeating elements. Each repeating element becomes a row, and child elements/attributes become columns.
Example XML
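An illustrative document shape (hedged; element names match the examples below):

```xml
<data>
  <record><id>1</id><name>Alice</name></record>
  <record><id>2</id><name>Bob</name></record>
</data>
```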
Example
```python
# Query all <record> elements
reader = XMLReader("data.xml", element="record")

# Query with XPath-like syntax
reader = XMLReader("data.xml", element="data/record")
```
Source code in sqlstream/readers/xml_reader.py
__init__
Initialize XML reader
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `source` | `str` | Path to XML file | *required* |
| `element` | `str \| None` | Element tag name or path to extract (e.g., "record" or "data/record"). If not provided, will try to find the first repeating element. | `None` |
| `**kwargs` | | Additional arguments (reserved for future use) | `{}` |
Source code in sqlstream/readers/xml_reader.py
read_lazy
Read data lazily from parsed XML
Source code in sqlstream/readers/xml_reader.py
get_schema
Get schema by inferring types from all rows
Source code in sqlstream/readers/xml_reader.py
supports_pushdown
supports_column_selection
set_filter
set_columns
to_dataframe
Convert to pandas DataFrame