[bot] Sync from supabase/troubleshooting (#42464)
This PR syncs the latest troubleshooting guides from the supabase/troubleshooting repository.

Co-authored-by: Charis Lam <26616127+charislam@users.noreply.github.com>
Co-authored-by: github-docs-bot <github-docs-bot@supabase.com>
Co-authored-by: Chris Chinchilla <chris.ward@supabase.io>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
commit f3e4f5f20d (parent dda0b526ac), committed via GitHub
@@ -0,0 +1,18 @@
---
title = "App Store Rejection: 'TLS error' in IPv6-only environments"
topics = [ "platform" ]
keywords = []
---

If your App Store submission is rejected with a 'TLS error' when tested in an IPv6-only environment, often citing a lack of AAAA records, the cause is typically an application-level issue rather than a Supabase configuration problem.

## Why does this happen?

Supabase projects are designed for compatibility with IPv6-only NAT64/DNS64 environments through automatic IPv4-to-IPv6 translation, so explicit AAAA records are not required for your `*.supabase.co` domain. The 'TLS error' usually points to how the application handles networking requests, which can interfere with this automatic translation.

## How to resolve this issue

- Ensure you're using hostnames, not IP addresses: use `project-ref.supabase.co` everywhere in your code. See [Supporting IPv6 DNS64/NAT64 Networks](https://developer.apple.com/documentation/network/supporting_ipv6_dns64_nat64_networks).
- Use high-level networking APIs like `URLSession` that handle IPv6 automatically. See the [`URLSession` documentation](https://developer.apple.com/documentation/foundation/urlsession).
- Review your App Transport Security settings. See [Preventing Insecure Network Connections](https://developer.apple.com/documentation/security/preventing_insecure_network_connections).
- Test your app in an IPv6-only environment using Apple's Network Link Conditioner. See [Testing for IPv6 DNS64/NAT64 Compatibility](https://developer.apple.com/library/archive/documentation/NetworkingInternetWeb/Conceptual/NetworkingOverview/UnderstandingandPreparingfortheIPv6Transition/UnderstandingandPreparingfortheIPv6Transition.html#//apple_ref/doc/uid/TP40010220-CH213-SW1).
@@ -0,0 +1,23 @@
---
title = "Auth Hooks: 'Invalid payload' when anonymous users attempt phone changes"
topics = [ "auth", "cli" ]
keywords = []

[[errors]]
http_status_code = 500
message = "Invalid payload sent to hook"
---

An 'Invalid payload sent to hook' error (500) occurs in Auth Hooks when the payload includes `new_phone` for an anonymous user.

## Why does this happen?

Anonymous users have no existing phone number to modify, so client application logic that attempts a `phone_change` for such a user results in an invalid operation. The `new_phone` field should only be present during a `phone_change` flow initiated by an _authenticated_ user.

## How to avoid this issue

Refine your client application logic to prevent this incorrect payload structure (a sketch follows this list):

- Differentiate phone update and login flows for anonymous users from those for authenticated users.
- Ensure `new_phone` is only transmitted when an authenticated user initiates a `phone_change` flow.
- Implement distinct handling for anonymous user updates so that `new_phone` is never sent in their payload.
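As a hedged illustration of the checks above, here is a minimal supabase-js sketch, assuming supabase-js v2 with anonymous sign-ins enabled (which exposes the `is_anonymous` flag); the `updatePhone` helper and placeholder credentials are hypothetical:

```javascript
import { createClient } from '@supabase/supabase-js'

// Placeholders: substitute your project's URL and anon key.
const supabase = createClient('https://project-ref.supabase.co', 'anon-key')

async function updatePhone(newPhone) {
  const {
    data: { user },
  } = await supabase.auth.getUser()

  if (!user || user.is_anonymous) {
    // Anonymous users have no phone to change; route them through a
    // sign-in or link-identity flow instead of sending new_phone.
    return { error: new Error('Phone changes require an authenticated user') }
  }

  // Only an authenticated user legitimately triggers a phone_change flow.
  return supabase.auth.updateUser({ phone: newPhone })
}
```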
@@ -0,0 +1,26 @@
---
title = "Autovacuum Stalled Due to Inactive Replication Slot"
topics = [ "database" ]
keywords = []
---

If `supabase inspect db vacuum-stats` reports "Expect autovacuum? yes" for your tables, but autovacuum has been inactive for an extended period and database RAM usage keeps increasing, this typically indicates a stalled autovacuum process. One cause of a stalled autovacuum, covered in this guide, is an inactive replication slot.

## Why does this happen?

Replication slots (logical or physical) tell Postgres "don't remove WAL or older transaction state before this point," because a consumer or replica might still need those WAL records or visibility information. Autovacuum therefore gets slower, does more work, or appears stalled because it can't progress past the older snapshot anchored by the slot. An inactive logical replication slot can thus prevent autovacuum from running effectively, blocking the cleanup of dead tuples and leading to database bloat and increased resource consumption.

## How to resolve this issue

1. **Identify inactive replication slots:**
   Execute the following query in your [SQL editor](/dashboard/project/_/sql/new) to list the replication slots that are currently inactive:

   ```sql
   select slot_name, slot_type, active, active_pid from pg_replication_slots where active is false;
   ```

2. **Drop inactive slot(s):**
   For each `slot_name` identified as `active = f` (inactive), execute the following command, replacing `'slot_name'` with the actual name of the inactive slot (e.g., `'example_slot'`):

   ```sql
   select pg_drop_replication_slot('slot_name');
   ```

3. **Confirm removal:**
   Re-run the identification query from step 1 to verify that the inactive slot(s) have been removed. Once they are gone, autovacuum should resume normal operation.
@@ -0,0 +1,17 @@
---
title = "'Cloudflare Origin Error 1016' on Custom Domain"
topics = [ "platform" ]
keywords = []

[[errors]]
code = "1016"
message = "Cloudflare Origin Error"
---

A 'Cloudflare Origin Error 1016' when accessing a custom domain URL indicates an SSL certificate validation failure. It typically occurs because the custom domain's SSL certificate has expired, leading Cloudflare to deactivate routing to the origin server.

## How to resolve this issue

1. Navigate to your project's [custom domain settings](/dashboard/project/_/settings/custom-domain).
2. Initiate a DNS record re-verification. This prompts an attempt to renew the SSL certificate.
3. If the error persists after re-verification, remove the custom domain configuration from your project.
4. Re-add the custom domain configuration, ensuring all DNS records are set up exactly as instructed by the dashboard. This forces a hard reset and triggers a new certificate request.
@@ -0,0 +1,18 @@
---
title = "Error: 'invalid byte sequence for encoding 'UTF8': 0x00' when accessing Triggers or Webhooks"
topics = [ "cli", "database" ]
keywords = []
---

If you encounter the error `'invalid byte sequence for encoding "UTF8": 0x00'` when attempting to access your project's [Triggers](/dashboard/project/_/database/triggers) or [Webhooks](/dashboard/project/_/database/webhooks) via the dashboard, it indicates that the `standard_conforming_strings` database setting is currently `off`.

This setting, when `off`, can cause issues with how certain character sequences are interpreted by Postgres, leading to errors in dashboard queries that expect UTF8-compliant strings.

To resolve this issue:

1. Connect to your database instance using the [SQL Editor](/dashboard/project/_/sql/new) in the Dashboard or a client like `psql`.
2. Execute the following SQL command:

   ```sql
   ALTER DATABASE postgres SET standard_conforming_strings = on;
   ```

3. Allow a few minutes for this setting to take effect, as existing pooled connections might retain the previous configuration. If the error persists after this period, a database restart may be necessary.
@@ -0,0 +1,30 @@
---
title = "Get detailed Storage metrics with the AWS CLI"
topics = [ "cli", "storage", "studio" ]
keywords = []
---

Supabase Studio primarily lists the current objects within your buckets. For more detail, you can use standard S3 tooling such as the AWS CLI to review your Supabase project's Storage usage or perform operations on bucket contents.

## How to get detailed storage metrics

This guide uses the official [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). Install it locally on your computer before following the next steps.

1. **Retrieve Credentials:** Generate an Access Key pair (Access key ID and Secret access key) in your project's [Storage Configuration](/dashboard/project/_/storage/s3). Note that the Secret access key is only shown when creating a new Access Key.
2. **Identify the Endpoint and Region:** Find your project's Storage endpoint and region under the Connection section in [Storage Configuration](/dashboard/project/_/storage/s3).
3. **Configure AWS CLI:** Set the access credentials as environment variables in your local terminal:

   ```bash
   export AWS_ACCESS_KEY_ID='<access-key-id>'
   export AWS_SECRET_ACCESS_KEY='<secret-access-key>'
   export AWS_DEFAULT_REGION='<storage-region>'
   ```

4. **List Buckets:** Confirm your setup by listing your project's buckets:

   ```bash
   aws s3api list-buckets --endpoint-url <storage-endpoint-url>
   ```

5. **Review Bucket Contents and Size:** Get a detailed listing and total size of a specific bucket's contents:

   ```bash
   aws s3 ls s3://<example-bucket>/ --endpoint-url <storage-endpoint-url> --recursive --human-readable --summarize
   ```

In the commands above, replace `<example-bucket>` and `<storage-endpoint-url>` with the actual details of your bucket and project.
@@ -1,5 +1,5 @@
---
-title = "How can I revoke execution of a PostgreSQL function?"
+title = "How can I revoke execution of a Postgres function?"
github_url = "https://github.com/orgs/supabase/discussions/17606"
date_created = "2023-09-21T03:04:41+00:00"
topics = [ "database", "functions" ]
@@ -0,0 +1,51 @@
---
title = "Manually created databases are not visible in the Supabase Dashboard"
topics = [ "auth", "cli", "database", "functions", "platform", "storage" ]
keywords = []
---

If you've manually created an additional database within your Supabase project, such as `example_database`, you might observe that it's accessible via external database tools but is not visible in the Supabase Dashboard. This guide explains the underlying reasons for this behavior and how Supabase is designed to handle databases.

---

## Key concepts

Before diving into the problem, let's define some key terms:

- **What is a Postgres Cluster?**
  In Postgres terminology, a "cluster" refers to a collection of databases managed by a single Postgres server instance. This single instance can host multiple independent databases, each with its own set of tables, users, and permissions. Every Supabase project runs on top of a full Postgres cluster.
- **What is a Supabase Project?**
  A Supabase project is an integrated platform that includes a dedicated Postgres database, authentication services, storage, real-time capabilities, and more. Each project is configured to interact seamlessly with its primary database.
- **What is the Supabase Dashboard?**
  The Supabase Dashboard is a web-based interface that provides a graphical way to manage your project's database schema, data, authentication rules, storage buckets, functions, and other Supabase services.
- **What is PostgREST?**
  PostgREST is a standalone web server that automatically turns your Postgres database directly into a RESTful API. It generates API endpoints based on your database schema, allowing you to interact with your data via HTTP requests without writing custom backend code. Supabase leverages PostgREST to provide its API layer.
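For instance, a minimal supabase-js call (a sketch only; the `countries` table and placeholder credentials are hypothetical) is served by PostgREST from the project's default `postgres` database, which is exactly why tables in a manually created database never appear through the API:

```javascript
import { createClient } from '@supabase/supabase-js'

// Placeholders: substitute your project's URL and anon key.
const supabase = createClient('https://project-ref.supabase.co', 'anon-key')

// PostgREST resolves this against the default `postgres` database only;
// a table inside a manually created database is never exposed here.
const { data, error } = await supabase.from('countries').select('*')
```
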
---

## Understanding the problem: Multiple databases in Supabase

The core of this behavior stems from the distinction between how Postgres manages databases and how Supabase integrates with them.

- **Postgres's Flexibility:** As a standard Postgres cluster, your Supabase backend is inherently capable of hosting multiple databases. You can connect to your project and manually create additional databases, such as `example_database` or `another_database`, alongside the default `postgres` database. These manually created databases are fully functional Postgres databases, accessible via external tools like TablePlus, `psql`, or any other Postgres client, provided you use the correct connection parameters (e.g., `user=postgres.[your_project_slug] host=... port=... dbname=example_database`).

- **Supabase's Integrated Design:** While Postgres itself supports multiple databases per cluster, the Supabase platform, its Dashboard, and most of its integrated features (such as PostgREST for APIs, Supabase Auth, and Supabase Storage) are specifically engineered to operate exclusively with the project's **default `postgres` database**.

- **The Disconnect Explained:**
  - **Dashboard Visibility:** The Supabase Dashboard is designed to manage and display information related solely to the `postgres` database within your project. Manually created databases (like `example_database`) therefore do not appear in the Dashboard's interface, as it's not configured to interact with them.
  - **Service Integration:** Supabase's integrated services are tightly coupled to the `postgres` database. For example, the API layer powered by PostgREST exposes schemas _from the `postgres` database_; it does not manage or expose multiple separate databases. This architectural choice simplifies the platform's design, ensures consistent behavior, and allows Supabase to layer its features effectively on top of the `postgres` database. Supporting multiple databases across all integrated services would introduce significant complexity and fundamentally alter Supabase's current model.

---

## Resolution and best practices

To effectively utilize Supabase and its features, consider the following approaches:

- **For Supabase Feature Integration:** If you intend for your data to be managed via the Supabase Dashboard, accessed through auto-generated APIs (PostgREST), or integrated with Supabase's Authentication or Storage services, all your data and schemas **must reside within the project's default `postgres` database**. This is the designated database for full Supabase ecosystem compatibility.

- **When a Truly Separate, Integrated Database is Needed:** If your application architecture requires a logically separate database that also benefits from full Supabase feature integration (Dashboard visibility, APIs, Auth, etc.), the recommended approach is to **create a new Supabase project**. Each new project comes with its own dedicated Postgres cluster and its own default `postgres` database, fully integrated with all Supabase services. This ensures that each "database" managed by Supabase has its own isolated environment and complete feature support.

- **Using Manually Created Databases (with caveats):** Creating additional databases within a single Supabase project's Postgres cluster (e.g., `example_database`) is technically possible, and they remain accessible via external tools. However, this approach is generally suitable only if:
  - You do not need these databases to be visible or managed through the Supabase Dashboard.
  - You do not intend to use Supabase's integrated services (like PostgREST, Auth, or Storage) with these specific databases.
  - You are comfortable managing these databases entirely through direct Postgres client connections, essentially treating them as standard Postgres databases within the Supabase-provided cluster, operating outside the Supabase platform's feature set.
@@ -0,0 +1,18 @@
---
title = "pg_cron launcher crashes with 'duplicate key value violates unique constraint'"
topics = [ "platform" ]
keywords = []
---

The `pg_cron` launcher process crashes approximately every minute with the error `duplicate key value violates unique constraint "job_run_details_pkey"`.

## Why does this happen?

This occurs when the `cron.runid_seq` sequence falls out of sync with the `cron.job_run_details` table: the sequence generates `runid` values that already exist in the table, causing a unique key violation. This typically happens when the sequence's last value is not aligned with the maximum `runid` already present in the table.

## How to resolve this

Reset the `cron.job_run_details` table. If you need to preserve its data, back up its contents before proceeding.

Execute the following SQL command via the [SQL editor](/dashboard/project/_/sql/new):

`TRUNCATE cron.job_run_details;`
@@ -0,0 +1,71 @@
---
title = "PKCE Flow errors: 'cannot parse response' or '#ZgotmplZ' in magic link emails"
topics = [ "auth", "cli" ]
keywords = []

[[errors]]
code = "#ZgotmplZ"
message = "Go template sanitization of unsafe URL scheme"

[[errors]]
code = "cannot parse response"
message = "PKCE flow interrupted by email client or token consumed by link scanner"
---

When setting up authentication with magic links and mobile deep linking, you might encounter specific errors like `#ZgotmplZ` in your email templates or a 'cannot parse response' error during the login flow. This guide explains the underlying causes and provides a robust solution.

## What is PKCE flow?

PKCE (Proof Key for Code Exchange) is a security extension to the OAuth 2.0 Authorization Code Flow, specifically designed for public clients like mobile apps. It prevents interception attacks by requiring the client to generate a secret (a "code verifier") whose hash (the "code challenge") is sent to the authorization server during the initial authorization request. The server later compares this challenge against the actual code verifier when the client exchanges the authorization code for an access token. It acts as a secure "handshake" between your mobile application and the authentication service.

## What are magic links?

Magic links are a passwordless authentication method where users receive a unique, time-sensitive link via email. Clicking this link logs them directly into the application without needing a password.

## What is Go template security and `{{ .SiteURL }}`?

Email templating systems often use a language like Go's, which has built-in security features to prevent cross-site scripting (XSS) and other vulnerabilities. When a variable like `{{ .SiteURL }}` (intended to represent your application's primary URL) is used in an email template, Go's security model automatically sanitizes the output. If the URL provided in `{{ .SiteURL }}` does not start with a recognized safe scheme like `http://` or `https://` (e.g., `your-app-scheme://`), Go considers it potentially malicious. To protect users, it replaces the unsafe link with a placeholder, typically `#ZgotmplZ`.

## Understanding the problem

You may observe two distinct issues:

1. **The `#ZgotmplZ` Error**:

   - **Cause**: This error occurs when you attempt to use a non-standard URL scheme (like `your-app-scheme://` for a mobile app deep link) directly within a template variable like `{{ .SiteURL }}` in your email templates. Go's security features sanitize this perceived "unsafe" URL, replacing it with the `#ZgotmplZ` placeholder. This prevents the link from being rendered correctly, so it never works.

2. **The 'Cannot Parse Response' Error**:
   - **Cause 1: Broken PKCE Handshake**: This error typically arises when the authentication flow, especially PKCE, is interrupted. If an email client opens the magic link in its own internal browser, it can disrupt the "handshake" between your mobile application and the authentication service. The mobile app expects specific parameters to complete the PKCE flow, and if these are not delivered correctly because of the intermediary email browser, the app reports a 'cannot parse response' error.
   - **Cause 2: Email Link Scanners**: Many email providers and security tools automatically scan or "pre-click" links in emails to check for malicious content. For one-time magic links, this automated scanning can consume the authentication token before the legitimate user ever clicks the link. When the user then clicks the (now consumed) link, the authentication attempt fails with a 'cannot parse response' error. This can cause intermittent failures even when the flow appears to work most of the time.

## Resolving the problem: The recommended flow (email → website → mobile app)

To achieve a reliable and secure authentication experience that accommodates both Go's security model and the complexities of mobile deep linking and PKCE, the recommended approach is a multi-step redirection: **Email → Website → Mobile App**.

Here's how to implement this solution:

### Step 1: Configure your authentication project

Ensure your authentication service (e.g., Supabase) is configured to use your website as the primary callback destination.

- **Set Your Project's `SITE_URL`**: Update this setting to your website's primary domain. This should be a standard web URL (e.g., `https://example.com`), which ensures that `{{ .SiteURL }}` and `{{ .ConfirmationURL }}` generate safe, web-standard links.
- **Add Your Mobile Deep Link Scheme to `Additional Redirect URLs`**: Include your mobile app's deep link scheme with a wildcard (e.g., `your-app-scheme://*`) in the list of authorized redirect URLs. This tells the authentication service that your app's scheme is a valid destination _after_ the initial web redirect.

### Step 2: Update your email template

Now that your `SITE_URL` is set to a web domain, you can safely use the built-in variables.

- **Use `{{ .ConfirmationURL }}` for Magic Links**: In your email template, use `{{ .ConfirmationURL }}`. This variable generates a secure web link pointing to your configured `SITE_URL`, typically `https://example.com/auth/callback` or a similar path. Because it uses a standard `https://` scheme, the link is safe from Go's sanitization.

### Step 3: Implement an intermediary web page with user action

This is the most crucial step, as it resolves both the broken PKCE handshake and the email link scanner issues.

- **Create a Web Callback Page**: Develop a simple web page on your `SITE_URL` (e.g., `https://example.com/auth/callback`). This page is the initial destination when the user clicks the magic link in the email.
- **Add a User-Initiated Button**: On this callback page, include a button or explicit action that the user _must_ click to proceed, for example a button labeled "Verify & Open App".
- **Why this is essential**:
  - It prevents email link scanners from automatically triggering the final deep link, ensuring the one-time token is not consumed prematurely.
  - It ensures the user is in a standard web browser (like Chrome or Safari), which can correctly handle the deep link redirection, rather than an email client's internal browser that might break the PKCE flow.
- **Implement Deep Link Redirection**: When the user clicks the "Verify & Open App" button, execute a script that redirects them to your mobile application using its deep link scheme, for example `window.location.href = "your-app-scheme://login-callback?code=..."`, where `code` carries any authentication parameters received by the web callback page. A sketch follows this list.
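A minimal sketch of the callback page's script, assuming a hypothetical button with id `open-app` and the `your-app-scheme://login-callback` deep link from the example above:

```javascript
// Runs on https://example.com/auth/callback. The redirect fires only when
// the user clicks the button, so email link scanners that pre-fetch this
// page never consume the one-time code.
const params = new URLSearchParams(window.location.search)
const code = params.get('code')

document.getElementById('open-app').addEventListener('click', () => {
  window.location.href =
    'your-app-scheme://login-callback?code=' + encodeURIComponent(code)
})
```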

By following this Email → Website → Mobile App flow with a user-initiated action on the web page, you establish a robust and secure authentication process that sidesteps common pitfalls associated with email clients and link scanners, ensuring a consistent user experience.
@@ -0,0 +1,27 @@
---
title = "PostgREST not recognizing new columns, tables, views or functions"
topics = [ "cli", "data api", "database", "functions" ]
keywords = []
---

If PostgREST fails to recognize new database columns, tables, views, or functions, and logs errors similar to:

```console
24/Dec/2025:18:16:33 -0500: Failed listening for database notifications on the "pgrst" channel. ERROR: could not access status of transaction 12037872 DETAIL: Could not open file "pg_xact/000B": No such file or directory.
```

then the PostgREST schema cache is stale.

## Why this happens

This can occur when an [underlying Postgres notification queue issue](https://www.postgresql.org/message-id/CAK98qZ3wZLE-RZJN_Y%2BTFjiTRPPFPBwNBpBi5K5CU8hUHkzDpw%40mail.gmail.com) prevents PostgREST from receiving cache reload signals after schema changes.

## How to fix this

Execute the following SQL command via the [SQL Editor](/dashboard/project/_/sql/new) or a direct client:

```sql
select pg_notification_queue_usage();
```

This refreshes the Postgres notification queue, prompting PostgREST to update its schema cache with the latest database schema. The operation is non-disruptive and does not require restarting the database instance or the PostgREST service.
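Separately, PostgREST also supports an explicit reload signal over the same notification channel: running `notify pgrst, 'reload schema';` instructs it to rebuild its schema cache. This is PostgREST's documented reload mechanism, offered here as a supplementary option if the cache remains stale.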
@@ -0,0 +1,18 @@
---
title = "rclone error: 's3 protocol error: received listing v1 with IsTruncated set, no NextMarker and no Contents'"
topics = [ "storage" ]
keywords = []
---

When attempting to list objects in a Supabase bucket using `rclone lsf`, you might encounter the error: `'s3 protocol error: received listing v1 with IsTruncated set, no NextMarker and no Contents'`.

## Why this happens

`rclone` defaults to S3 List API Version 1, which may not correctly handle large, paginated bucket listings when interacting with Supabase Storage.

## How to resolve this

Explicitly instruct `rclone` to use S3 List API Version 2 by adding the `--s3-list-version 2` flag to your `rclone lsf` command.

Example:

`rclone lsf supabase:bucket-name --s3-list-version 2`
@@ -0,0 +1,17 @@
---
title = "'Scan error on column confirmation_token: converting NULL to string is unsupported' during Auth login"
topics = [ "auth" ]
keywords = []

[[errors]]
http_status_code = 500
message = "error finding user: sql: Scan error on column 'confirmation_token': converting NULL to string is unsupported"
---

If you encounter an HTTP 500 error during authentication with the message `error finding user: sql: Scan error on column "confirmation_token": converting NULL to string is unsupported`, the GoTrue Auth service has found a `NULL` value in the `auth.users.confirmation_token` column, where it expects a non-nullable string.

To resolve this, set `confirmation_token` to an empty string wherever it is `NULL`. You can execute the following SQL query in the [SQL Editor](/dashboard/project/_/sql/new):

```sql
update auth.users set confirmation_token = '' where confirmation_token is null;
```
@@ -0,0 +1,21 @@
---
title = "Supabase Storage: Inefficient folder operations and hierarchical RLS challenges"
topics = [ "functions", "storage" ]
keywords = []
---

Supabase Storage has no native folder concept or APIs for batch folder operations, which can make folder operations (move, rename, delete) inefficient and hierarchical access control for objects difficult to implement.

## Why does this happen?

Storage buckets treat "folders" purely as key prefixes. File system-like folder behavior and inherited permissions are therefore not built-in features of Supabase Storage.

## How to address these challenges

To overcome these limitations and implement robust folder management with hierarchical RLS, consider the following approach:

- **Model your folder hierarchy in a custom Postgres table.** This table should manage folder metadata such as folder IDs, parent IDs, paths, and permissions.
- **Reference `storage.objects` within your custom metadata.** Store a reference to `storage.objects.id` in your custom table to link files to their respective folders.
- **Implement RLS policies on `storage.objects`.** These policies must `JOIN` with your custom metadata table to enforce hierarchical access permissions based on your defined folder structure.
- **Handle batch folder operations via your metadata table.** For operations like moving or renaming folders, update the relevant entries in your custom metadata table (see the sketch after this list). Note that the actual file paths in Storage are not altered by these operations.
- **Optimize RLS policies for performance.** `JOIN`s in RLS policies can degrade performance, especially with large datasets. Ensure proper indexing on your custom metadata table and consider using `SECURITY DEFINER` functions to optimize policy execution.
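As a hedged illustration of the batch-operation bullet above, here is a minimal supabase-js sketch; the `folders` metadata table, its columns, and the `moveFolder` helper are hypothetical stand-ins for whatever schema you design:

```javascript
import { createClient } from '@supabase/supabase-js'

// Placeholders: substitute your project's URL and anon key.
const supabase = createClient('https://project-ref.supabase.co', 'anon-key')

// "Move" a folder by re-parenting its metadata row. Object keys in
// Storage stay untouched; the hierarchy lives entirely in this table.
async function moveFolder(folderId, newParentId) {
  const { error } = await supabase
    .from('folders') // hypothetical metadata table: id, parent_id, name, ...
    .update({ parent_id: newParentId })
    .eq('id', folderId)

  if (error) throw error
}
```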
@@ -13,7 +13,7 @@ Postgres stands out from other databases by opting to create a new process, not
When a client (backend server) connects to Postgres, the connection is stateful and enduring, allowing clients to greedily hold onto connections without any obligation to give them up. Typically, applications do not continuously send queries to a database, so this pattern underutilizes finite connections that could have serviced other clients.

-PostgreSQL's shortcomings are particularly evident when handling transient servers, like edge functions. They not only hoard connections for brief queries but also aggressively open and close connections, straining the database.
+Postgres's shortcomings are particularly evident when handling transient servers, like edge functions. They not only hoard connections for brief queries but also aggressively open and close connections, straining the database.

### How do poolers solve the problem?
@@ -1,5 +1,5 @@
---
-title = "Understanding PostgreSQL EXPLAIN Output"
+title = "Understanding Postgres EXPLAIN Output"
github_url = "https://github.com/orgs/supabase/discussions/22839"
date_created = "2024-04-17T16:31:47+00:00"
topics = [ "database" ]
@@ -1,5 +1,5 @@
---
-title = "Understanding PostgreSQL Logging Levels and How They Impact Your Project"
+title = "Understanding Postgres Logging Levels and How They Impact Your Project"
github_url = "https://github.com/orgs/supabase/discussions/29877"
date_created = "2024-10-14T09:17:02+00:00"
topics = [ "database" ]
@@ -110,7 +110,7 @@ For a detailed explanation of the types of events logged in your database (such
**b. PGAudit: Postgres Auditing for Compliance and Security**

-We support PGAudit extension, which extends PostgreSQL’s built-in logging capabilities to track database activities for auditing purposes.
+We support the PGAudit extension, which extends Postgres’s built-in logging capabilities to track database activities for auditing purposes.

[How to Enable the PGAudit Extension](/docs/guides/database/extensions/pgaudit?queryGroups=database-method&database-method=dashboard#enable-the-extension)
@@ -0,0 +1,83 @@
---
title = "Why Supabase Edge Functions cannot provide static egress IPs for allow listing"
topics = [ "auth", "functions", "platform", "self-hosting" ]
keywords = []
---

When trying to establish secure connections from Supabase Edge Functions to external services, you might run into difficulties with traditional IP allow listing. This guide explains why this problem occurs and provides several solutions.

## What are egress IP addresses?

An **egress IP address** is the public IP address from which network traffic originates when leaving a network or service. When an application or a serverless function connects to an external third-party service, that service sees the connection coming from this egress IP.

## What is CIDR?

**CIDR (Classless Inter-Domain Routing)** is a method for efficiently allocating IP addresses and routing IP packets. It groups IP addresses into blocks using the notation `[IP]/prefix_length` (e.g., `192.168.1.0/24`). Allow listing based on CIDR ranges typically grants access to all IP addresses within a specified block.

## Understanding Supabase Edge Functions and the problem

**Supabase Edge Functions** are serverless functions that run globally, close to your users, powered by Deno Deploy. They are designed for low-latency execution and adhere to "serverless" and "edge-first" principles:

- **Serverless:** The underlying infrastructure is dynamically managed by the cloud provider. Developers don't directly manage servers; they deploy their code, and the platform handles execution. This means Edge Functions do not run on a single, dedicated server.
- **Edge-first:** They are deployed on a globally distributed network. This ensures they execute geographically close to your users, minimizing latency and improving performance.

Because of this serverless, globally distributed nature, Supabase Edge Functions do not originate from a single static IP address or a small, stable range of IPs, so fixed egress IP addresses cannot be assigned for traditional network-level allow listing.

## Problem summary

Because Supabase Edge Functions lack static or stable egress IP addresses, standard outbound calls from an Edge Function to a service that requires IP allow listing (e.g., one that expects an IPv4 CIDR range like `[IP]/24`) will likely be blocked.

## Solutions for allow listing external services

This section details the approaches for enabling secure connections from your Edge Functions.

### Solution 1: Use an outbound proxy (recommended)

This is the standard and recommended solution for connecting to third-party services that you do not control but which require a static IP address for allow listing.

- **How it Works:** Instead of your Edge Function connecting directly to the third-party service, you route all of its requests through an intermediary "outbound proxy" server that is configured with a static public IP address.
- **Implementation:**
  1. Deploy a small, dedicated instance (for example, an AWS EC2 instance or similar cloud VM) with a fixed egress IP. This instance acts as your gateway or proxy.
  2. Configure your Supabase Edge Function to route its outbound requests through this proxy server.
  3. Allow list the static IP address of your proxy server on the external third-party service.
- **Benefit:** All traffic from your Edge Function appears to originate from the proxy's static IP, satisfying the external service's allow listing requirement.

### Solution 2: Header-based authentication (best for servers you control)

If you own or control the destination server, you can shift your security strategy from the network layer to the application layer: instead of relying on IP addresses for authentication, you use a shared secret.

- **What are Application Layer and Network Layer Security?**
  - **Network Layer Security:** Filters traffic based on originating IP addresses, operating at a lower level of the network stack. IP allow listing is an example.
  - **Application Layer Security:** Authenticates requests based on credentials or secrets embedded within the application's communication (like HTTP headers), operating at a higher level of the network stack.
- **How it Works:**
  1. **Generate a Secret:** Create a unique, random secret token.
  2. **Store the Secret:** Store this token securely within your Supabase project (e.g., using Supabase Secrets: `supabase secrets set CUSTOM_AUTH_TOKEN=your_random_string`).
  3. **Edge Function Configuration:** In your Edge Function, retrieve this secret token (e.g., `const token = Deno.env.get("CUSTOM_AUTH_TOKEN");`).
  4. **Send Request:** Include the secret token in a custom HTTP header with every request to your destination server.
     ```javascript
     await fetch('https://api.example.com/data', {
       method: 'POST',
       headers: {
         'X-Custom-Auth': token, // Example custom header carrying the secret
         'Content-Type': 'application/json',
       },
       body: JSON.stringify({
         data: 'example',
       }),
     })
     ```
  5. **Destination Server Configuration:** On your destination server, configure your middleware or application logic (a sketch follows this list) to:
     - Inspect incoming requests for the presence and correctness of this custom header and its secret token.
     - Reject any request that does not carry the correct secret token.
- **Benefit:** This creates a "virtual allow list" at the application level, effectively controlling access without relying on static IP addresses.
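As a sketch of the destination-server check described in step 5, assuming a Node.js server using Express (the framework, port, and `/data` route are illustrative, not part of the original guide):

```javascript
import express from 'express'

const app = express()

// Reject any request whose X-Custom-Auth header does not match the
// shared secret; this is the application-layer "virtual allow list".
app.use((req, res, next) => {
  if (req.get('X-Custom-Auth') !== process.env.CUSTOM_AUTH_TOKEN) {
    return res.status(401).send('Unauthorized')
  }
  next()
})

// Illustrative endpoint matching the fetch example above.
app.post('/data', express.json(), (req, res) => {
  res.json({ received: req.body })
})

app.listen(3000)
```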

### Solution 3: Self-hosting the edge runtime

This solution is for scenarios with extremely strict security policies (e.g., government or health data regulations) that might forbid the use of proxies or secret-based allow listing for compliance reasons.

- **How it Works:** Supabase Edge Functions are built on the open-source Edge Runtime, and you have the option to deploy this runtime environment yourself.
- **Implementation:**
  1. Deploy the open-source Edge Runtime on your own Virtual Private Server (VPS) or similar infrastructure.
  2. Ensure your VPS has a static IP address that you can control and allow list.
- **Benefit:** This approach gives you full control over the network environment, including the ability to assign and manage static IP addresses, while still leveraging the Deno developer experience for your functions.
- **Further Reading:** More details can typically be found in documentation or blog posts on self-hosted Deno functions.
@@ -320,6 +320,7 @@ allow_list = [
"SwiftUI",
"Supabase",
"Remapper",
+"TablePlus",
"TextLocal",
"Thanos",
"TimescaleDB",