diff --git a/docs/user_guides/fs/data_source/creation/s3.md b/docs/user_guides/fs/data_source/creation/s3.md
index ef88615a3..735093260 100644
--- a/docs/user_guides/fs/data_source/creation/s3.md
+++ b/docs/user_guides/fs/data_source/creation/s3.md
@@ -76,6 +76,8 @@ Here you can specify any additional spark options that you wish to add to the sp
 To connect to a S3 compatiable storage other than AWS S3, you can add the option with key as `fs.s3a.endpoint` and the endpoint you want to use as value. The data source will then be able to read from your specified S3 compatible storage.
 
+You can also add options to configure the S3A client. For example, to disable SSL connections and talk to the endpoint over plain HTTP, add an option with the key `fs.s3a.connection.ssl.enabled` and the value `false`. You can also configure options such as `fs.s3a.path.style.access` (set to `true`) if your S3-compatible storage does not support virtual-hosted-style addressing.
+
 !!! warning "Spark Configuration"
     When using the data source within a Spark application, the credentials are set at application level. This allows users to access multiple buckets with the same data source within the same application (assuming the credentials allow it). You can disable this behaviour by setting the option `fs.s3a.global-conf` to `False`. If the `global-conf` option is disabled, the credentials are set on a per-bucket basis and users will be able to use the credentials to access data only from the bucket specified in the data source configuration.
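
The S3A options the added paragraph describes would end up as `spark.hadoop.`-prefixed Spark configuration entries. A minimal sketch of that mapping is below; the endpoint URL and bucket name are placeholders, not real services, and the helper is hypothetical (it only illustrates applying the pairs to a `SparkSession.builder`-style object via `.config(key, value)`):

```python
# Sketch: the S3A options from the docs paragraph above, expressed as
# Spark configuration key/value pairs. Hadoop S3A options are set on a
# Spark application with the "spark.hadoop." prefix.
s3a_options = {
    # Point S3A at an S3-compatible endpoint instead of AWS S3
    # (placeholder URL).
    "spark.hadoop.fs.s3a.endpoint": "http://minio.example.com:9000",
    # Disable SSL connections, i.e. talk plain HTTP to the endpoint.
    "spark.hadoop.fs.s3a.connection.ssl.enabled": "false",
    # Use path-style URLs (bucket in the path, not the hostname) for
    # stores that do not support virtual-hosted-style addressing.
    "spark.hadoop.fs.s3a.path.style.access": "true",
}

def apply_options(builder, options):
    """Apply each key/value pair to a SparkSession.Builder-like object
    by chaining .config(key, value) calls (hypothetical helper)."""
    for key, value in options.items():
        builder = builder.config(key, value)
    return builder
```

With `pyspark` installed, `apply_options(SparkSession.builder, s3a_options).getOrCreate()` would yield a session able to read e.g. `s3a://my-bucket/data.csv` from the configured endpoint.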