
Commit 0c82fe1

Make clippy and fmt happy (#2)
* Make clippy and fmt happy
* Add pre-commit conf
* Add cache for clippy job
Parent: deaef43 · Commit: 0c82fe1

22 files changed: 99 additions & 60 deletions


.github/workflows/check.yml

Lines changed: 1 addition & 0 deletions
```diff
@@ -64,6 +64,7 @@ jobs:
         with:
           toolchain: ${{ matrix.toolchain }}
           components: clippy
+      - uses: Swatinem/rust-cache@v2
       - name: Install Dependencies
         run: |
           sudo apt install -y llvm-18 libclang-18-dev
```
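The new step is `Swatinem/rust-cache@v2`, which caches the Cargo registry, git dependencies, and the `target/` directory between CI runs (keyed off the lockfile and toolchain), so repeated clippy runs skip recompiling unchanged dependencies. A sketch of how such a job typically fits together; the job name, checkout/toolchain actions, and clippy flags below are illustrative assumptions, not copied from this workflow:

```yaml
# Illustrative clippy job shape; step layout and flags are assumptions.
jobs:
  clippy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable
        with:
          components: clippy
      # Restores ~/.cargo and target/ from cache, saves them after the job
      - uses: Swatinem/rust-cache@v2
      - run: cargo clippy --workspace --all-targets -- -D warnings
```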

.pre-commit-config.yaml

Lines changed: 9 additions & 0 deletions
```diff
@@ -0,0 +1,9 @@
+repos:
+  - repo: https://github.com/doublify/pre-commit-rust
+    rev: v1.0
+    hooks:
+      - id: cargo-check
+        args: [ "--workspace" ]
+      - id: fmt
+        args: [ "--", "--check"]
+      - id: clippy
```
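With this file in place, contributors enable the hooks locally via the standard pre-commit flow; a minimal sketch, assuming Python and pip are available:

```bash
pip install pre-commit    # one-time tool install
pre-commit install        # wire the hooks from .pre-commit-config.yaml into .git/hooks

# Optionally run cargo-check, fmt, and clippy across the whole tree once
pre-commit run --all-files
```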

README.md

Lines changed: 33 additions & 20 deletions
````diff
@@ -3,8 +3,6 @@
 **Run Snowflake SQL dialect on your data lake in 30 seconds. Zero dependencies.**
 
 [![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
-[![SQL Logic Test Coverage](https://raw.githubusercontent.com/Embucket/embucket/assets/assets/badge.svg)](test/README.md)
-[![dbt Gitlab run results](https://raw.githubusercontent.com/Embucket/embucket/assets_dbt/assets_dbt/dbt_success_badge.svg)](test/dbt_integration_tests/dbt-gitlab/README.md)
 
 ## Quick start
 
@@ -17,26 +15,50 @@ docker run --name embucket --rm -p 3000:3000 embucket/embucket
 Run the Snowflake CLI against the local endpoint:
 
 ```bash
+pip install snowflake-cli
 snow sql -c local -a local -u embucket -p embucket -q "select 1;"
 ```
 
 **Done.** You just ran Snowflake SQL dialect against the local Embucket instance with zero configuration.
 
-### Bootstrap external volumes via config
+### Create external volumes via config
 
-You can pre-create volumes, databases, and schemas by pointing `embucketd` at a YAML config file. This
-is handy when you want to mount an S3 Tables bucket at startup without sending API calls after the
-process is online.
+**Important**: External volumes must be created via YAML configuration at startup. REST API-based volume creation is not supported.
+
+Pre-create volumes, databases, and schemas by pointing `embucketd` at a YAML config file:
 
 ```bash
 cargo run -p embucketd -- \
   --no-bootstrap \
-  --metastore-config config/metastore.s3tables.demo.yaml
+  --metastore-config config/metastore.yaml
+```
+
+**Sample configuration** (`config/metastore.yaml`):
+
+```yaml
+volumes:
+  # S3 Tables volume - connects to AWS S3 Table Bucket
+  - ident: demo
+    type: s3-tables
+    database: demo
+    credentials:
+      credential_type: access_key
+      aws-access-key-id: YOUR_ACCESS_KEY
+      aws-secret-access-key: YOUR_SECRET_KEY
+    arn: arn:aws:s3tables:us-east-2:123456789012:bucket/my-table-bucket
+
+  # S3 volume - connects to standard S3 bucket
+  # - ident: s3_volume
+  #   type: s3
+  #   bucket: my-data-bucket
+  #   endpoint: https://s3.amazonaws.com
+  #   credentials:
+  #     credential_type: access_key
+  #     aws-access-key-id: YOUR_ACCESS_KEY
+  #     aws-secret-access-key: YOUR_SECRET_KEY
 ```
 
-The sample config under `config/metastore.s3tables.demo.yaml` provisions a `demo` database backed by an
-S3 Tables bucket using the credentials provided in the file. Update the file with your own secrets
-for real deployments.
+Update the credentials and ARN/bucket details with your own values for real deployments.
 
 ## What just happened?
 
@@ -58,7 +80,6 @@ Perfect for teams who want Snowflake's simplicity with bring-your-own-cloud cont
 Built on proven open source:
 - [Apache DataFusion](https://datafusion.apache.org/) for SQL execution
 - [Apache Iceberg](https://iceberg.apache.org/) for ACID table metadata
-- A lightweight in-memory metastore purpose-built for Embucket
 
 ## Why Embucket?
 
@@ -70,16 +91,8 @@ Built on proven open source:
 - **Horizontal scaling** - Add nodes for more throughput
 - **Zero operations** - No external dependencies to manage
 
-## Next steps
-
-**Ready for more?** Check out the comprehensive documentation:
-
-[Quick start](https://docs.embucket.com/essentials/quick-start/) - Detailed setup and first queries
-[Architecture](https://docs.embucket.com/essentials/architecture/) - How the zero-disk lakehouse works
-[Configuration](https://docs.embucket.com/essentials/configuration/) - Production deployment options
-[dbt Integration](https://docs.embucket.com/guides/dbt-snowplow/) - Run existing dbt projects
+## Build from source
 
-**From source:**
 ```bash
 git clone https://github.com/Embucket/embucket.git
 cd embucket && cargo build
````
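One detail the README's quick start leans on: `snow sql -c local` resolves a connection profile named `local` from Snowflake CLI's `config.toml`. A hedged sketch of such a profile, mirroring the flags and the Docker port above; treat the exact key set and file location as assumptions to verify against your snowflake-cli version:

```toml
# Hypothetical ~/.snowflake/config.toml entry for the local Embucket endpoint;
# values mirror the `snow sql` flags used in the quick start.
[connections.local]
account = "local"
user = "embucket"
password = "embucket"
host = "localhost"
port = 3000
protocol = "http"
```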

crates/api-snowflake-rest/src/server/error.rs

Lines changed: 4 additions & 4 deletions
```diff
@@ -2,14 +2,14 @@ use crate::SqlState;
 use crate::models::JsonResponse;
 use crate::models::ResponseData;
 use axum::{Json, http, response::IntoResponse};
-use executor::QueryRecordId;
-use executor::error::OperationOn;
-use executor::error_code::ErrorCode;
-use executor::snowflake_error::Entity;
 use datafusion::arrow::error::ArrowError;
 use error_stack::ErrorChainExt;
 use error_stack::ErrorExt;
 use error_stack_trace;
+use executor::QueryRecordId;
+use executor::error::OperationOn;
+use executor::error_code::ErrorCode;
+use executor::snowflake_error::Entity;
 use snafu::Location;
 use snafu::prelude::*;
 
```

crates/api-snowflake-rest/src/server/helpers.rs

Lines changed: 2 additions & 2 deletions
```diff
@@ -5,11 +5,11 @@ use axum::Json;
 use base64;
 use base64::engine::general_purpose::STANDARD as engine_base64;
 use base64::prelude::*;
-use executor::models::QueryResult;
-use executor::utils::{DataSerializationFormat, convert_record_batches};
 use datafusion::arrow::ipc::MetadataVersion;
 use datafusion::arrow::ipc::writer::{IpcWriteOptions, StreamWriter};
 use datafusion::arrow::record_batch::RecordBatch;
+use executor::models::QueryResult;
+use executor::utils::{DataSerializationFormat, convert_record_batches};
 use snafu::ResultExt;
 use uuid::Uuid;
 
```

crates/api-snowflake-rest/src/server/router.rs

Lines changed: 1 addition & 1 deletion
```diff
@@ -7,9 +7,9 @@ use super::layer::require_auth;
 use super::server_models::Config;
 use super::state;
 use axum::middleware;
+use catalog_metastore::Metastore;
 use executor::service::CoreExecutionService;
 use executor::utils::Config as UtilsConfig;
-use catalog_metastore::Metastore;
 use std::sync::Arc;
 use tower::ServiceBuilder;
 use tower_http::compression::CompressionLayer;
```

crates/api-snowflake-rest/src/server/test_server.rs

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,7 +1,7 @@
 use super::server_models::Config;
 use crate::server::router::make_app;
-use executor::utils::Config as UtilsConfig;
 use catalog_metastore::{InMemoryMetastore, Metastore};
+use executor::utils::Config as UtilsConfig;
 use std::net::SocketAddr;
 use std::sync::Arc;
 use tracing_subscriber::fmt::format::FmtSpan;
```

crates/catalog/src/catalog_list.rs

Lines changed: 3 additions & 1 deletion
```diff
@@ -11,7 +11,9 @@ use crate::table::CachingTable;
 use aws_config::{BehaviorVersion, Region, SdkConfig};
 use aws_credential_types::Credentials;
 use aws_credential_types::provider::SharedCredentialsProvider;
-use catalog_metastore::{AwsCredentials, Database, Metastore, RwObject, S3TablesVolume, VolumeType};
+use catalog_metastore::{
+    AwsCredentials, Database, Metastore, RwObject, S3TablesVolume, VolumeType,
+};
 use catalog_metastore::{SchemaIdent, TableIdent};
 use dashmap::DashMap;
 use datafusion::{
```

crates/catalog/src/catalogs/embucket/catalog.rs

Lines changed: 5 additions & 6 deletions
```diff
@@ -51,16 +51,15 @@ impl CatalogProvider for EmbucketCatalog {
         let database = self.database.clone();
 
         block_in_new_runtime(async move {
-            metastore
-                .list_schemas(&database)
-                .await
-                .map(|schemas| {
+            metastore.list_schemas(&database).await.map_or_else(
+                |_| vec![],
+                |schemas| {
                     schemas
                         .into_iter()
                         .map(|s| s.ident.schema.clone())
                         .collect()
-                })
-                .unwrap_or_else(|_| vec![])
+                },
+            )
         })
         .unwrap_or_else(|_| vec![])
     }
```

crates/catalog/src/catalogs/embucket/schema.rs

Lines changed: 4 additions & 2 deletions
```diff
@@ -50,8 +50,10 @@ impl SchemaProvider for EmbucketSchema {
             metastore
                 .list_tables(&SchemaIdent::new(database, schema))
                 .await
-                .map(|tables| tables.into_iter().map(|s| s.ident.table.clone()).collect())
-                .unwrap_or_else(|_| vec![])
+                .map_or_else(
+                    |_| vec![],
+                    |tables| tables.into_iter().map(|s| s.ident.table.clone()).collect(),
+                )
         })
         .unwrap_or_else(|_| vec![]);
 
```
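The two hunks above are the same clippy fix: a `Result::map` chained into `unwrap_or_else` collapses into one `map_or_else` call (clippy's `map_unwrap_or` lint), with the error arm listed first. A standalone sketch of the pattern, using a hypothetical `fetch_names` helper in place of the metastore calls:

```rust
// Hypothetical stand-in for metastore.list_schemas / list_tables.
fn fetch_names() -> Result<Vec<String>, String> {
    Ok(vec!["public".to_string(), "analytics".to_string()])
}

fn main() {
    // Before: clippy's `map_unwrap_or` lint flags the two-step chain.
    let before: Vec<String> = fetch_names()
        .map(|names| names.into_iter().collect())
        .unwrap_or_else(|_| vec![]);

    // After: a single `map_or_else` handles the Err arm, then the Ok arm.
    let after: Vec<String> =
        fetch_names().map_or_else(|_| vec![], |names| names.into_iter().collect());

    assert_eq!(before, after);
}
```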
