Bidone
S3-compatible storage service with Unix-like permissions (rwx owner/group/others) instead of standard S3 IAM policies.
Table of Contents
- Features
- Quick Start
- Configuration
- Unix Permissions Model
- S3 API Reference
- Web UI
- Database Schema
- Storage Layout
- Technical Docs
- Examples
- Development
- Profiling
- Limitations
- Security Considerations
- License
- Contributing
Features
- S3 API Compatibility: Works with AWS CLI, SDKs, and S3-compatible tools
- Unix Permissions: Familiar rwx permission model for owner/group/others
- Web UI: HTMX-powered admin interface for managing buckets, objects, users, and permissions
- Versioning: Full object versioning support
- Multipart Upload: Support for large file uploads
- Presigned URLs: Generate time-limited access URLs
- SQLite Backend: Simple, embedded database for metadata
- Filesystem Storage: Objects stored on local filesystem
Quick Start
Build and Run
# Clone and build
cd bidone
go build -o bidone ./cmd/bidone
# Run with defaults
./bidone
# Or run directly
go run ./cmd/bidone
First Run - Admin Credentials
On first startup, Bidone automatically creates an admin user and prints the credentials to the terminal:
============================================================
ADMIN CREDENTIALS
============================================================
Web UI Login:
Username: admin
Password: xK#9mPq2$nL5vR8w (generated)
S3 API Credentials:
Access Key: AKIA7F3E9A2B1C4D5E6F (generated)
Secret Key: 8a9b7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a0b (generated)
------------------------------------------------------------
Save these credentials! They won't be shown again.
You can change them later via the Web UI.
============================================================
Important: Save these credentials immediately - they are only displayed once!
You can also set credentials via environment variables (see Configuration section).
Test with AWS CLI
# Configure credentials (use values from first run output)
export AWS_ACCESS_KEY_ID=<your-access-key>
export AWS_SECRET_ACCESS_KEY=<your-secret-key>
# Create bucket
aws --endpoint-url http://localhost:8080 s3 mb s3://my-bucket
# Upload file
aws --endpoint-url http://localhost:8080 s3 cp README.md s3://my-bucket/
# List objects
aws --endpoint-url http://localhost:8080 s3 ls s3://my-bucket/
# Download file
aws --endpoint-url http://localhost:8080 s3 cp s3://my-bucket/README.md ./downloaded.md
# Delete file
aws --endpoint-url http://localhost:8080 s3 rm s3://my-bucket/README.md
# Delete bucket
aws --endpoint-url http://localhost:8080 s3 rb s3://my-bucket
Configuration
Configuration File
Create a config.yaml file:
server:
  host: "0.0.0.0"
  port: 8080                           # S3 API port
  web_host: "0.0.0.0"                  # Optional Web UI bind host (defaults to host when omitted)
  web_port: 8081                       # Web UI port
  web_ui_enabled: true                 # Set false to disable Web UI server
  pprof_host: "0.0.0.0"                # Optional pprof bind host (defaults to host when omitted)
  pprof_port: 6060                     # pprof port (optional, disabled if 0)
  strict_consistency: false            # Serialize object ops per key (optional)
  cors_enabled: true                   # Enable/disable CORS handling on S3 endpoint
  cors_origins: "*"                    # S3 CORS allowed origins (comma-separated, supports *.example.com)
  auth_cache_enabled: true             # Enable cache for S3 auth material/signing keys
  auth_cache_ttl_seconds: 90           # TTL for auth material cache entries
  auth_negative_cache_ttl_ms: 2000     # TTL for access key not-found cache entries
  auth_cache_max_entries: 256          # Max entries for auth material cache
  auth_signing_cache_max_entries: 256  # Max entries for derived signing-key cache
  audit_enabled: false                 # Enable audit logging
  audit_file: ""                       # Audit file path (empty = stdout, enables audit if set)
  audit_format: "text"                 # Audit log format: text or json
database:
  path: "./bidone.db"
  busy_timeout_ms: 15000               # SQLite busy timeout in milliseconds
storage:
  data_dir: "./data"
  path_hardening: "strict"             # strict (default) or plain
  fsync_on_write: false                # Call fsync after writes (optional)
  multipart_layout: "files"            # files (default) or packed
security:
  encryption_key: "base64-encoded-32-byte-key"  # For encrypting secret keys (see below)
Run with config:
./bidone -config config.yaml
A complete commented example is available at doc/config.full.yaml.
Environment Variables
Server Configuration
| Variable | Description | Default |
|---|---|---|
| `BIDONE_HOST` | Server bind address | `0.0.0.0` |
| `BIDONE_WEB_HOST` | Web UI bind address | (same as `BIDONE_HOST`) |
| `BIDONE_PORT` | S3 API server port | `8080` |
| `BIDONE_WEB_PORT` | Web UI server port | `8081` |
| `BIDONE_DISABLE_WEB_UI` | Disable Web UI server (`true`/`false`) | `false` |
| `BIDONE_PPROF_HOST` | pprof bind address | (same as `BIDONE_HOST`) |
| `BIDONE_DB_PATH` | SQLite database path | `./bidone.db` |
| `BIDONE_DB_BUSY_TIMEOUT_MS` | SQLite busy timeout in milliseconds | `15000` |
| `BIDONE_DATA_DIR` | Object storage directory | `./data` |
| `BIDONE_PATH_HARDENING` | Storage path hardening mode: `strict` or `plain` | `strict` |
| `BIDONE_FSYNC_ON_WRITE` | Call fsync after object writes | `false` |
| `BIDONE_MULTIPART_LAYOUT` | Multipart staging layout: `files` or `packed` | `files` |
| `BIDONE_PPROF_PORT` | pprof server port | (disabled if not set) |
| `BIDONE_ENCRYPTION_KEY` | Base64-encoded 32-byte key for encrypting S3 secret keys | (auto-generated) |
| `BIDONE_STRICT_CONSISTENCY` | Serialize object operations per key | `false` |
| `BIDONE_CORS_ENABLED` | Enable CORS handling for S3 API (`true`/`false`) | `true` |
| `BIDONE_CORS_ORIGINS` | Comma-separated S3 CORS allowed origins | `*` |
| `BIDONE_AUTH_CACHE_ENABLED` | Enable S3 auth cache (`true`/`false`) | `true` |
| `BIDONE_AUTH_CACHE_TTL_SECONDS` | S3 auth cache TTL in seconds | `90` |
| `BIDONE_AUTH_NEGATIVE_CACHE_TTL_MS` | S3 auth negative-cache TTL in milliseconds | `2000` |
| `BIDONE_AUTH_CACHE_MAX_ENTRIES` | Max entries for S3 auth cache | `256` |
| `BIDONE_AUTH_SIGNING_CACHE_MAX_ENTRIES` | Max entries for S3 signing-key cache | `256` |
| `BIDONE_AUDIT` | Enable audit logging | `false` |
| `BIDONE_AUDIT_FILE` | Audit output file path (enables audit if set) | (stdout) |
| `BIDONE_AUDIT_FORMAT` | Audit log format: `text` or `json` (JSONL) | `text` |
Admin Credentials (first run only)
| Variable | Description | Default |
|---|---|---|
| `BIDONE_ADMIN_USERNAME` | Admin username | `admin` |
| `BIDONE_ADMIN_PASSWORD` | Admin password | (generated) |
| `BIDONE_ADMIN_ACCESS_KEY` | S3 access key | (generated) |
| `BIDONE_ADMIN_SECRET_KEY` | S3 secret key | (generated) |
Example with custom credentials:
BIDONE_ADMIN_USERNAME=superadmin \
BIDONE_ADMIN_PASSWORD=mysecretpassword \
./bidone
Note: Admin environment variables are only used on first run when no admin user exists.
Command Line Flags
./bidone [flags]
| Flag | Description | Example |
|---|---|---|
| `-config` | Path to config file | `-config /etc/bidone/config.yaml` |
| `-data` | Storage data directory | `-data /var/lib/bidone/data` |
| `-db` | SQLite database path | `-db /var/lib/bidone/bidone.db` |
| `-db-busy-timeout` | SQLite busy timeout in milliseconds | `-db-busy-timeout 15000` |
| `-help` | Show help message | `-help` |
| `-listen` | S3 API listen address (host:port) | `-listen :8080` or `-listen 0.0.0.0:9000` |
| `-web-listen` | Web UI listen address (host:port) | `-web-listen :8081` or `-web-listen :9001` |
| `-disable-web-ui` | Disable Web UI server | `-disable-web-ui` |
| `-pprof-listen` | pprof server address (disabled if not set) | `-pprof-listen :6060` |
| `-strict-consistency` | Serialize object operations per key | `-strict-consistency` |
| `-fsync-on-write` | Call fsync after object writes | `-fsync-on-write` |
| `-path-hardening` | Storage path hardening mode: `strict` or `plain` | `-path-hardening strict` |
| `-multipart-layout` | Multipart staging layout: `files` or `packed` | `-multipart-layout files` |
| `-cors-enabled` | Enable CORS handling for S3 API | `-cors-enabled=false` |
| `-cors-origins` | Comma-separated S3 CORS allowed origins (supports `*.example.com`) | `-cors-origins "https://app.example.com,*.example.com"` |
| `-auth-cache` | Enable S3 auth cache | `-auth-cache=false` |
| `-auth-cache-ttl` | S3 auth cache TTL in seconds | `-auth-cache-ttl 90` |
| `-auth-cache-negative-ttl-ms` | S3 auth negative-cache TTL in milliseconds | `-auth-cache-negative-ttl-ms 2000` |
| `-auth-cache-max-entries` | Max entries for S3 auth cache | `-auth-cache-max-entries 256` |
| `-auth-signing-cache-max-entries` | Max entries for S3 signing-key cache | `-auth-signing-cache-max-entries 256` |
| `-audit` | Enable audit logging | `-audit` |
| `-audit-file` | Write audit logs to file (enables audit) | `-audit-file /var/log/bidone/audit.log` |
| `-audit-format` | Audit format: `text` or `json` (JSONL) | `-audit-format json` |
Examples:
# Basic usage with custom data directory
./bidone -data /mnt/storage/bidone
# Full configuration via flags
./bidone -listen :9000 -data /var/lib/bidone/data -db /var/lib/bidone/bidone.db
# Bind S3 and Web UI to different addresses
./bidone -listen 127.0.0.1:8080 -web-listen 0.0.0.0:8081
# Using config file
./bidone -config /etc/bidone/config.yaml
Priority order (highest to lowest):
- Command line flags
- Environment variables
- Config file
- Default values
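The precedence above amounts to "take the first value that was explicitly set". A minimal illustrative resolver (hypothetical helper, not Bidone's actual internals):

```python
def resolve(flag=None, env=None, file_value=None, default=None):
    """Return the first configured value, in Bidone's documented order:
    command-line flag > environment variable > config file > default."""
    for value in (flag, env, file_value):
        if value is not None:
            return value
    return default

# A flag wins over everything else
print(resolve(flag=":9000", env=":8080", file_value=":8081", default=":8080"))  # :9000
# With no flag set, the environment variable wins
print(resolve(env=":8080", default=":8081"))  # :8080
```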
User Management Commands
Use the user subcommands to manage Web UI login users:
# Create user (password auto-generated if omitted)
bidone user create [--admin] [--password pw] <username>
# Delete user
bidone user delete [--force] <username>
# List users
bidone user list
# Change user password
bidone user passwd --password <new-password> <username>
# Group membership
bidone user add-group <username> <group>
bidone user remove-group <username> <group>
Importing Existing Files
You can point Bidone to an existing directory and import all files using the reindex subcommand. Files live directly inside bucket directories (flat layout, no intermediate objects/ folder).
Directory structure:
<data-dir>/
├── bucket1/
│ ├── file1.txt
│ ├── file2.jpg
│ └── subdir/
│ └── file3.pdf
└── bucket2/
└── document.docx
Import commands:
# Reindex all buckets (top-level files only)
bidone reindex
# Reindex all buckets recursively (including subdirectories)
bidone reindex -R
# Reindex with custom ownership/group/mode for newly indexed files
bidone reindex -uid 2 -gid 3 -mode 640 mybucket
# Reindex a specific bucket
bidone reindex mybucket
# Reindex a specific prefix within a bucket
bidone reindex mybucket/uploads
# Custom data directory
bidone reindex -data /path/to/data -R
What happens during reindex:
- Each top-level directory in `<data-dir>/` becomes a bucket
- By default, files are indexed with admin ownership, group 1, and mode 0700
- Use `-uid`, `-gid`, and `-mode` to override ownership/group/mode for newly indexed files
- Calculates MD5 hash (ETag) for each file
- Detects content type from file extension
- Preserves existing metadata (won't overwrite already-indexed files)
- Use `-R` to recurse into subdirectories
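The ETag and content-type steps above can be reproduced with standard tooling. This sketch (not Bidone's actual code) shows what gets derived per file:

```python
import hashlib
import mimetypes

def index_entry(path: str, data: bytes) -> dict:
    """Compute the metadata reindex derives per file: an MD5 ETag (as for
    single-part S3 uploads) and a content type guessed from the extension."""
    etag = hashlib.md5(data).hexdigest()
    ctype, _ = mimetypes.guess_type(path)
    return {
        "key": path,
        "size": len(data),
        "etag": etag,
        "content_type": ctype or "application/octet-stream",
    }

entry = index_entry("docs/report.pdf", b"hello")
print(entry["etag"])          # 5d41402abc4b2a76b9719d911017c592
print(entry["content_type"])  # application/pdf
```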
Quick setup for existing files:
# Create the bucket directory
mkdir -p /mnt/storage/my-bucket
# Copy or move your files directly into it
cp -r /path/to/files/* /mnt/storage/my-bucket/
# Reindex
bidone -data /mnt/storage reindex -R
Output example:
Reindexing all buckets...
Creating bucket: my-bucket
Indexed: file1.txt (1024 bytes)
Indexed: images/photo.jpg (2048576 bytes)
Indexed: docs/report.pdf (512000 bytes)
Reindex complete
Unix Permissions Model
Bidone uses Unix-style permissions instead of S3 IAM policies:
Detailed Guide
For an in-depth explanation with real-world setups and examples, see doc/permissions.md.
Permission Bits
a rwx rwx rwx
│ │││ │││ │││
│ │││ │││ ││└─ Others: Execute
│ │││ │││ │└── Others: Write
│ │││ │││ └─── Others: Read
│ │││ ││└───── Group: Execute
│ │││ │└────── Group: Write
│ │││ └─────── Group: Read
│ ││└───────── Owner: Execute
│ │└────────── Owner: Write
│ └─────────── Owner: Read
└───────────── Anonymous download (bit 9, octal 01000)
Common Modes
| Mode | Octal | Description |
|---|---|---|
| `rwx------` | 700 | Owner only (default for buckets) |
| `rw-r--r--` | 644 | Owner write, all read (common S3-friendly object mode) |
| `rw-rw-r--` | 664 | Owner/group write, all read |
| `rwxr-xr-x` | 755 | Owner all, group/others read+execute |
| `rwxrwxrwx` | 777 | Full access for everyone |
| `arwx------` | 1700 | Anonymous download + owner only |
| `arw-r--r--` | 1644 | Anonymous download + owner write, all read |
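To see how an octal mode maps onto this notation, here is a small illustrative decoder (not part of Bidone):

```python
def describe_mode(mode: int) -> str:
    """Render a mode (9 rwx bits plus the anonymous bit, octal 01000)
    in the a/rwx notation used in the table above."""
    anon = "a" if mode & 0o1000 else "-"
    bits = ""
    for shift in (6, 3, 0):  # owner, group, others triplets
        triplet = (mode >> shift) & 0o7
        bits += "r" if triplet & 4 else "-"
        bits += "w" if triplet & 2 else "-"
        bits += "x" if triplet & 1 else "-"
    return anon + bits

print(describe_mode(0o1644))  # arw-r--r--
print(describe_mode(0o700))   # -rwx------
```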
Anonymous Download
When the anonymous bit (a, octal 01000) is set on both a bucket and an object, the file can be downloaded via a plain HTTP GET without any authentication. This works on both the S3 API port (8080) and the Web UI port (8081).
# Enable anonymous download on a bucket and its objects
bidone chmod 1700 mybucket
bidone chmod -R 1644 mybucket
# Now anyone can download via plain HTTP GET:
curl http://localhost:8080/mybucket/file.txt # S3 API port
curl http://localhost:8081/ui/objects/mybucket/file.txt # Web UI port
Only GET (download) requests are served anonymously. All other operations (upload, delete, list) still require authentication.
Bucket Website Serving (web_serve)
You can mark a bucket to serve a website index object on the S3 endpoint:
# Enable website serving on bucket root (serves /site/index.html for /site/)
bidone web_serve site
# Use a custom index file and also enable trailing-slash prefix fallback
# (/site/docs/ -> /site/docs/home.html)
bidone web_serve --index-file home.html --dir-index site
# Enable website serving without changing anonymous bits
bidone web_serve --no-anonymous-flag site
# Disable website serving
bidone web_serve --disable site
Behavior notes:
- `web_serve` is evaluated on the S3 endpoint.
- Root fallback works when bucket `web_serve` is enabled and anonymous download is enabled on the bucket.
- `--dir-index` enables fallback for keys ending with `/`.
- By default, enabling `web_serve` sets the bucket anonymous bit and recursively adds the anonymous bit to existing objects. Use `--no-anonymous-flag` to skip this.
- `--index-file` accepts basename values only (for example `index.html`).
Bucket CORS Override (bucket_cors)
You can override the CORS policy per bucket, with fallback to the global server policy (`cors_origins`):
# Set bucket-specific CORS origins
bidone bucket_cors --origins "https://app.example.com,*.example.com" site
# Clear bucket override and fallback to global cors_origins
bidone bucket_cors --reset site
Behavior notes:
- Per-bucket override applies only to that bucket on S3 endpoint requests.
- If the bucket override is empty, Bidone uses the global `cors_origins` policy.
- This can be configured from both the CLI and the Web UI bucket settings.
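The `*.example.com` wildcard form can be matched along these lines; an illustrative sketch, not Bidone's implementation:

```python
from urllib.parse import urlparse

def origin_allowed(origin: str, allowed: str) -> bool:
    """Check an Origin header value against a comma-separated allow-list
    that may contain '*', exact origins, or host wildcards like '*.example.com'."""
    host = urlparse(origin).hostname or ""
    for pattern in (p.strip() for p in allowed.split(",")):
        if pattern == "*" or pattern == origin:
            return True
        # '*.example.com' matches any subdomain of example.com
        if pattern.startswith("*.") and host.endswith(pattern[1:]):
            return True
    return False

print(origin_allowed("https://app.example.com", "https://app.example.com,*.example.com"))  # True
print(origin_allowed("https://sub.example.com", "*.example.com"))  # True
print(origin_allowed("https://evil.com", "*.example.com"))         # False
```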
Database Backup (backupdb)
Create a consistent SQLite metadata backup using VACUUM INTO:
# Create backup in current directory (default --dest ".")
bidone backupdb
# Create backup inside an existing directory
bidone backupdb --dest /var/backups/bidone
# Create/overwrite a specific backup file
bidone backupdb --dest /var/backups/bidone/metadata-backup.db
Behavior notes:
- If `--dest` points to an existing directory, Bidone creates a timestamped file inside it.
- If `--dest` points to a file path, that file is used as the backup destination.
- `--config` and `--db` can be used exactly like the other admin subcommands.
S3 Operations to Permissions Mapping
| S3 Operation | Required Permission |
|---|---|
| GetObject, HeadObject | Read |
| ListObjects, ListObjectVersions | Read (on bucket) |
| PutObject, DeleteObject | Write |
| CreateBucket | Authenticated user (new bucket is created with private mode) |
| DeleteBucket | Write (on bucket) |
| GetBucketVersioning | Read |
| PutBucketVersioning | Write |
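The mapping above boils down to a classic Unix-style check: select the owner, group, or others triplet for the requesting user, then test the required bit. An illustrative sketch (not Bidone's actual checker):

```python
R, W, X = 4, 2, 1  # permission bits within a triplet

def allowed(mode: int, owner_uid: int, group_gid: int,
            uid: int, user_gids: set, want: int) -> bool:
    """Unix-style access check: the owner triplet applies to the owner,
    the group triplet to group members, the others triplet to everyone else."""
    if uid == owner_uid:
        triplet = (mode >> 6) & 0o7
    elif group_gid in user_gids:
        triplet = (mode >> 3) & 0o7
    else:
        triplet = mode & 0o7
    return bool(triplet & want)

# Object with mode 644, owned by uid 1, group 1:
print(allowed(0o644, 1, 1, uid=1, user_gids={1}, want=W))  # True  (owner may PutObject)
print(allowed(0o644, 1, 1, uid=2, user_gids={5}, want=R))  # True  (others may GetObject)
print(allowed(0o644, 1, 1, uid=2, user_gids={5}, want=W))  # False (others may not write)
```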
Setting Permissions
Via Web UI:
- Navigate to bucket or object
- Click "Settings" or "Permissions"
- Enter octal mode (e.g., `755`)
- Save
Via API (planned):
# Custom header for permission setting (future feature)
aws --endpoint-url http://localhost:8080 s3 cp file.txt s3://bucket/ \
--metadata "x-amz-meta-mode=644"
S3 API Reference
Supported Operations
Service Operations
- `GET /` - ListBuckets
Bucket Operations
- `PUT /{bucket}` - CreateBucket
- `DELETE /{bucket}` - DeleteBucket
- `HEAD /{bucket}` - HeadBucket
- `GET /{bucket}` - ListObjectsV2
- `GET /{bucket}?versions` - ListObjectVersions
- `GET /{bucket}?versioning` - GetBucketVersioning
- `PUT /{bucket}?versioning` - PutBucketVersioning
- `GET /{bucket}?uploads` - ListMultipartUploads
Object Operations
- `GET /{bucket}/{key}` - GetObject
- `PUT /{bucket}/{key}` - PutObject
- `DELETE /{bucket}/{key}` - DeleteObject
- `HEAD /{bucket}/{key}` - HeadObject
- `PUT /{bucket}/{key}` (with `x-amz-copy-source`) - CopyObject
Versioning
- `GET /{bucket}/{key}?versionId={id}` - GetObject (specific version)
- `DELETE /{bucket}/{key}?versionId={id}` - DeleteObject (specific version)
Multipart Upload
- `POST /{bucket}/{key}?uploads` - CreateMultipartUpload
- `PUT /{bucket}/{key}?uploadId={id}&partNumber={n}` - UploadPart
- `POST /{bucket}/{key}?uploadId={id}` - CompleteMultipartUpload
- `DELETE /{bucket}/{key}?uploadId={id}` - AbortMultipartUpload
- `GET /{bucket}/{key}?uploadId={id}` - ListParts
Authentication
Bidone supports AWS Signature Version 4 authentication:
# Using AWS CLI (automatically signs requests)
aws --endpoint-url http://localhost:8080 s3 ls
# Using curl with presigned URL
curl "http://localhost:8080/bucket/key?X-Amz-Algorithm=AWS4-HMAC-SHA256&..."
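For reference, the query-string presigning flow behind those URLs (canonical request, string to sign, derived signing key) looks roughly like this. A condensed educational sketch of AWS SigV4, not Bidone's code, and it omits several details of the full spec:

```python
import datetime
import hashlib
import hmac
from urllib.parse import quote

def presign_get(host, bucket, key, access_key, secret_key,
                region="us-east-1", expires=3600):
    """Build a presigned GET URL following the SigV4 query-signing recipe."""
    now = datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    qs = "&".join(f"{quote(k, safe='')}={quote(v, safe='')}"
                  for k, v in sorted(params.items()))
    # Canonical request: method, URI, query, headers, signed headers, payload hash
    canonical = "\n".join([
        "GET", f"/{bucket}/{quote(key)}", qs,
        f"host:{host}\n", "host", "UNSIGNED-PAYLOAD",
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical.encode()).hexdigest(),
    ])
    # Derive the signing key by chained HMACs over date/region/service
    k = f"AWS4{secret_key}".encode()
    for part in (datestamp, region, "s3", "aws4_request"):
        k = hmac.new(k, part.encode(), hashlib.sha256).digest()
    sig = hmac.new(k, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return f"http://{host}/{bucket}/{quote(key)}?{qs}&X-Amz-Signature={sig}"

url = presign_get("localhost:8080", "my-bucket", "file.txt",
                  "<your-access-key>", "<your-secret-key>")
print(len(url.split("X-Amz-Signature=")[1]))  # 64 (hex SHA-256 signature)
```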
Request Headers
| Header | Description |
|---|---|
| `Authorization` | AWS Signature V4 |
| `X-Amz-Date` | Request timestamp |
| `X-Amz-Content-SHA256` | Payload hash |
| `X-Amz-Copy-Source` | Source for copy operation |
| `Content-Type` | Object content type |
Response Headers
| Header | Description |
|---|---|
| `ETag` | Object hash |
| `X-Amz-Version-Id` | Version ID (if versioning enabled) |
| `X-Amz-Delete-Marker` | Delete marker indicator |
| `Last-Modified` | Object modification time |
Healthcheck Endpoint
S3 service healthcheck endpoint:
GET /_health
Response:
{"status":"OK"}
Optional query parameters:
- `?check_db` checks DB writeability (transactional probe, rolled back)
- `?check_fs` checks filesystem writeability (temp file create/write/remove)
- `?check_all` runs both checks
On check failure, response is HTTP 503 with JSON:
{"status":"KO","error":"..."}
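The `check_db` probe described above (a write attempt that is rolled back, leaving the database untouched) can be illustrated with plain SQLite; a hypothetical sketch, not Bidone's handler:

```python
import sqlite3

def db_writable(path: str) -> bool:
    """Probe DB writeability: acquire a write lock, then roll back
    so the database is left unchanged."""
    try:
        conn = sqlite3.connect(path, timeout=2)
        try:
            conn.execute("BEGIN IMMEDIATE")  # takes the write lock
            conn.rollback()                  # discard the probe transaction
            return True
        finally:
            conn.close()
    except sqlite3.Error:
        return False

print(db_writable(":memory:"))  # True
```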
Web UI
Dashboard
Access the web interface at http://localhost:8081/ui
Features
- Buckets: Create, delete, configure versioning, manage permissions
- Objects: Upload, download, delete, view metadata
- Users (admin only): Create, delete, manage groups, regenerate API keys
- Groups (admin only): Create, delete, manage members
- Permissions: Set owner, group, and mode for buckets and objects
Navigation
| URL | Description |
|---|---|
| `/ui/login` | Login page |
| `/ui/buckets` | Bucket list |
| `/ui/buckets/{name}` | Bucket detail / object browser |
| `/ui/users` | User management (admin) |
| `/ui/users/{id}` | User detail |
| `/ui/groups` | Group management (admin) |
| `/ui/groups/{id}` | Group detail |
Database Schema
Tables
The schema evolved across multiple migrations.
For the current structure and index strategy, see:
- `doc/database-structure.md`
- `internal/metadata/migrations/`
Storage Layout
Object payloads are stored directly under `<data_dir>/<bucket>/<key>`.
Version and multipart state is stored in `<data_dir>/.bidone/<bucket>/...`.
For current layout details (including files vs packed multipart behavior), see the Technical Docs section below.
Technical Docs
- `doc/permissions.md`
- `doc/database-structure.md`
- `doc/multipart-behavior.md`
- `doc/consistency-and-durability.md`
- `doc/path-hardening.md`
- `doc/reverse_proxy.md`
- `doc/security.md`
- `doc/debian-apparmor-path-reconfiguration.md`
- `doc/selinux-package-reconfiguration.md`
Examples
Python (boto3)
import boto3
# Use credentials from first run output
s3 = boto3.client(
's3',
endpoint_url='http://localhost:8080',
aws_access_key_id='<your-access-key>',
aws_secret_access_key='<your-secret-key>',
region_name='us-east-1'
)
# Create bucket
s3.create_bucket(Bucket='my-bucket')
# Upload file
s3.upload_file('local-file.txt', 'my-bucket', 'remote-key.txt')
# List objects
response = s3.list_objects_v2(Bucket='my-bucket')
for obj in response.get('Contents', []):
print(obj['Key'], obj['Size'])
# Download file
s3.download_file('my-bucket', 'remote-key.txt', 'downloaded.txt')
# Enable versioning
s3.put_bucket_versioning(
Bucket='my-bucket',
VersioningConfiguration={'Status': 'Enabled'}
)
# Upload with versioning (returns version ID)
response = s3.put_object(
Bucket='my-bucket',
Key='versioned-file.txt',
Body=b'content v1'
)
print('Version ID:', response.get('VersionId'))
JavaScript (AWS SDK v3)
import { S3Client, CreateBucketCommand, PutObjectCommand } from "@aws-sdk/client-s3";
// Use credentials from first run output
const client = new S3Client({
endpoint: "http://localhost:8080",
region: "us-east-1",
credentials: {
accessKeyId: "<your-access-key>",
secretAccessKey: "<your-secret-key>"
},
forcePathStyle: true
});
// Create bucket
await client.send(new CreateBucketCommand({ Bucket: "my-bucket" }));
// Upload object
await client.send(new PutObjectCommand({
Bucket: "my-bucket",
Key: "hello.txt",
Body: "Hello, World!"
}));
Go
package main
import (
"context"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/credentials"
"github.com/aws/aws-sdk-go-v2/service/s3"
)
func main() {
// Use credentials from first run output
cfg, _ := config.LoadDefaultConfig(context.TODO(),
config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(
"<your-access-key>",
"<your-secret-key>",
"",
)),
config.WithRegion("us-east-1"),
)
client := s3.NewFromConfig(cfg, func(o *s3.Options) {
o.BaseEndpoint = aws.String("http://localhost:8080")
o.UsePathStyle = true
})
// Create bucket
client.CreateBucket(context.TODO(), &s3.CreateBucketInput{
Bucket: aws.String("my-bucket"),
})
}
curl
# Note: curl requires manual signature generation or presigned URLs
# List buckets (with presigned URL from web UI)
curl "http://localhost:8080/?X-Amz-Algorithm=AWS4-HMAC-SHA256&..."
# Upload with presigned URL
curl -X PUT -T file.txt "http://localhost:8080/bucket/key?X-Amz-Algorithm=..."
Multipart Upload (Large Files)
Multipart staging layout is configurable:
- `files` (default): one temp file per part, best parallel upload throughput
- `packed`: append parts into one temp file, fewer filesystem metadata operations

Set via config (`storage.multipart_layout`), env (`BIDONE_MULTIPART_LAYOUT`), or flag (`-multipart-layout`).
# AWS CLI automatically uses multipart for files > 8MB
aws --endpoint-url http://localhost:8080 s3 cp large-file.zip s3://my-bucket/
# Or explicitly with lower threshold
aws --endpoint-url http://localhost:8080 s3 cp large-file.zip s3://my-bucket/ \
--expected-size 1073741824
Versioning
# Enable versioning
aws --endpoint-url http://localhost:8080 s3api put-bucket-versioning \
--bucket my-bucket \
--versioning-configuration Status=Enabled
# Upload multiple versions
echo "v1" | aws --endpoint-url http://localhost:8080 s3 cp - s3://my-bucket/file.txt
echo "v2" | aws --endpoint-url http://localhost:8080 s3 cp - s3://my-bucket/file.txt
echo "v3" | aws --endpoint-url http://localhost:8080 s3 cp - s3://my-bucket/file.txt
# List versions
aws --endpoint-url http://localhost:8080 s3api list-object-versions \
--bucket my-bucket
# Get specific version
aws --endpoint-url http://localhost:8080 s3api get-object \
--bucket my-bucket \
--key file.txt \
--version-id "abc123" \
output.txt
# Delete specific version
aws --endpoint-url http://localhost:8080 s3api delete-object \
--bucket my-bucket \
--key file.txt \
--version-id "abc123"
Development
Project Structure
High-level layout:
- `cmd/bidone/`: main server binary + admin CLI subcommands (`user`, `key`, `group`, `chmod`, `chown`, `reindex`, `crosscheck`, `web_serve`, `bucket_cors`, `presign`, `backupdb`)
- `internal/api/s3/`: S3 handlers (object, list, multipart, versioning)
- `internal/api/web/`: Web UI handlers
- `internal/storage/`: filesystem backend and multipart implementation
- `internal/metadata/`: SQLite access layer and migrations
- `internal/auth/`: users, groups, keys, sessions, S3 auth
- `internal/permissions/`: permission model and checker
- `packaging/`: distro packaging assets/scripts
- `doc/`: technical documentation
- `web/`: embedded templates and static assets
For exact current contents, use:
find cmd internal packaging doc web -maxdepth 3 -type f | sort
Building
# Development build
go build -o bidone ./cmd/bidone
# Production build
CGO_ENABLED=1 go build -ldflags="-s -w" -o bidone ./cmd/bidone
# Cross-compile (requires CGO for SQLite)
# Linux
GOOS=linux GOARCH=amd64 CGO_ENABLED=1 go build -o bidone-linux ./cmd/bidone
# macOS
GOOS=darwin GOARCH=amd64 CGO_ENABLED=1 go build -o bidone-darwin ./cmd/bidone
Container Build
Build an OCI-compliant container image using Podman or Docker:
# Build with Podman
podman build -t bidone -f Containerfile .
# Build with Docker
docker build -t bidone -f Containerfile .
Run the container:
# Basic run with persistent data
podman run -d --name bidone \
-p 8080:8080 \
-p 8081:8081 \
-v ./data:/data \
bidone
# With environment variables
podman run -d --name bidone \
-p 8080:8080 \
-p 8081:8081 \
-v ./data:/data \
-e BIDONE_ENCRYPTION_KEY="your-base64-key" \
bidone
# Maintenance shell (runs as root)
podman run -it --rm -v ./data:/data bidone sh
Container details:
- Base image: Alpine Linux 3.21
- User: `bidone` (uid/gid 1000)
- Data volume: `/data` (database at `/data/bidone.db`, objects at `/data/data/`)
- Ports: 8080 (S3 API), 8081 (Web UI)
The entrypoint automatically fixes ownership of /data and /data/data directories on startup.
Running in a Container
Sample files are provided under container-sample/:
- Quadlet (Podman + systemd):
  - `container-sample/quadlet/bidone.container`
  - `container-sample/quadlet/bidone-caddy.container`
  - `container-sample/quadlet/Caddyfile`
- Compose:
  - `container-sample/compose/docker-compose.yml`
  - `container-sample/compose/Caddyfile`
The Bidone image used in the samples is:
git.lattuga.net/bida/bidone/bidone:latest
Quick start:
# Quadlet: install unit files and reload systemd
sudo cp container-sample/quadlet/*.container /etc/containers/systemd/
sudo mkdir -p /etc/bidone/caddy
sudo cp container-sample/quadlet/Caddyfile /etc/bidone/caddy/Caddyfile
sudo systemctl daemon-reload
sudo systemctl enable --now bidone.service bidone-caddy.service
# Compose
cd container-sample/compose
docker compose up -d
Dependencies
go mod download
go mod tidy
Required packages:
- `github.com/google/uuid` - UUID generation
- `github.com/mattn/go-sqlite3` - SQLite driver (requires CGO)
- `golang.org/x/crypto` - Password hashing (bcrypt)
- `gopkg.in/yaml.v3` - YAML configuration
Testing
# Run all tests
go test ./...
# With coverage
go test -cover ./...
# Verbose
go test -v ./...
Packaging
Bidone can be packaged using distro-standard paths:
- binary: `/usr/bin/bidone`
- config: `/etc/bidone/bidone.yaml`
- docs: `/usr/share/doc/bidone/README.md`, `/usr/share/doc/bidone/permissions.md`, `/usr/share/doc/bidone/path-hardening.md`, `/usr/share/doc/bidone/reverse_proxy.md`, `/usr/share/doc/bidone/security.md`
- state/data: `/var/lib/bidone` (DB at `/var/lib/bidone/bidone.db`, objects at `/var/lib/bidone/data`)
- systemd unit: `/lib/systemd/system/bidone.service` (Deb), `/usr/lib/systemd/system/bidone.service` (RPM/Arch)
Packaged systemd units start Bidone with only `-config /etc/bidone/bidone.yaml`, so the packaged DB path and data directory come from that config file and are not overridden on the command line.
Repository packaging assets:
- `packaging/build-packages.sh` (builds `.deb` via `fpm` and `.rpm` via native `rpmbuild`)
- `packaging/arch/PKGBUILD` (Arch package recipe)
- `packaging/apparmor/usr.bin.bidone` (default AppArmor profile for packaged installs)
- `packaging/selinux/` (SELinux policy sources and helper build script)
- `packaging/man/bidone.1` (man page)
- `packaging/systemd/bidone.service`
- `packaging/sysusers/bidone.conf`
- `packaging/tmpfiles/bidone.conf`
Packaged installs also ship an AppArmor profile at /etc/apparmor.d/usr.bin.bidone for the default layout (/etc/bidone, /var/lib/bidone). If you move the DB path, data directory, or audit log path outside that layout, adjust the profile locally before enforcing it.
RPM packages also ship a prebuilt SELinux module at /usr/share/selinux/packages/targeted/bidone.pp plus the source files under /usr/share/selinux/devel/bidone/. On SELinux-enabled RPM systems, the package installs the module and restores contexts automatically for the default packaged paths.
After installing a distro package, view CLI documentation with:
man bidone
Build deb/rpm packages (requires fpm for .deb, rpmbuild for .rpm; builds amd64/x86_64 and, when available, also arm64/aarch64):
./packaging/build-packages.sh 0.0.16 dist/bidone-x86_64 dist/packages dist/bidone-aarch64
Build Arch package:
./packaging/build-archlinux-release.sh --sync-pkgbuild
cd packaging/arch
BIDONE_PKGVER=0.0.16 makepkg -si
Build Arch package from the signed release archive:
tar -xzf bidone-archlinux-0.0.16.tar.gz
cd bidone-archlinux-0.0.16
makepkg -si
Version propagation helpers:
# Normalize tag/ref into package version (v0.0.16 -> 0.0.16)
./packaging/version.sh v0.0.16
# Build deb/rpm using env-provided version
BIDONE_VERSION=0.0.16 ./packaging/build-packages.sh
If `BIDONE_PKGVER` is not set, the Arch PKGBUILD defaults to `0.0.0`.
When Arch packaging files change, refresh the checksums in packaging/arch/PKGBUILD before committing:
./packaging/build-archlinux-release.sh --sync-pkgbuild
Profiling
Bidone includes built-in support for Go's pprof profiler, useful for diagnosing performance issues, memory leaks, and goroutine problems.
Enabling pprof
The pprof server runs on a separate port for security (not exposed on the S3 API port).
# Via command line flag
./bidone -pprof-listen :6060
# Via environment variable
BIDONE_PPROF_PORT=6060 ./bidone
# Via config file
# server:
# pprof_port: 6060
Available Endpoints
Once enabled, access http://localhost:6060/debug/pprof/ for the index page.
| Endpoint | Description |
|---|---|
| `/debug/pprof/` | Index page with links to all profiles |
| `/debug/pprof/heap` | Memory allocations of live objects |
| `/debug/pprof/goroutine` | Stack traces of all current goroutines |
| `/debug/pprof/allocs` | Sampling of all past memory allocations |
| `/debug/pprof/block` | Stack traces that led to blocking on sync primitives |
| `/debug/pprof/mutex` | Stack traces of holders of contended mutexes |
| `/debug/pprof/profile` | CPU profile (30s by default) |
| `/debug/pprof/trace` | Execution trace |
| `/debug/pprof/cmdline` | Command line invocation |
| `/debug/pprof/symbol` | Symbol lookup |
Using go tool pprof
# Analyze heap memory
go tool pprof http://localhost:6060/debug/pprof/heap
# CPU profile (30 seconds)
go tool pprof http://localhost:6060/debug/pprof/profile
# CPU profile with custom duration
go tool pprof 'http://localhost:6060/debug/pprof/profile?seconds=60'
# Goroutine analysis
go tool pprof http://localhost:6060/debug/pprof/goroutine
# Interactive commands in pprof:
# top - Show top functions by resource usage
# top20 - Show top 20 functions
# list foo - Show source code for function 'foo'
# web - Open graph in browser (requires graphviz)
# png - Generate PNG graph
Trace Analysis
# Capture 5 second trace
curl -o trace.out 'http://localhost:6060/debug/pprof/trace?seconds=5'
# Analyze trace
go tool trace trace.out
Common Debugging Scenarios
High memory usage:
go tool pprof http://localhost:6060/debug/pprof/heap
(pprof) top
(pprof) list <function_name>
Goroutine leak:
curl 'http://localhost:6060/debug/pprof/goroutine?debug=2'
Slow requests:
go tool pprof 'http://localhost:6060/debug/pprof/profile?seconds=30'
(pprof) top
(pprof) web # requires graphviz
Security Note
The pprof endpoints expose internal application details. In production:
- Use a non-public port
- Restrict access via firewall rules
- Consider disabling when not actively debugging
Limitations
- Single node only: No clustering or replication
- Filesystem storage: Not suitable for very large deployments
- No lifecycle policies: Objects don't auto-expire
- No bucket policies: Only Unix permissions
- No cross-region replication: Single location
- No server-side encryption: Objects stored in plaintext
- No object locking: No WORM compliance
Security Considerations
- Save credentials on first run - they are randomly generated and shown only once
- Use HTTPS in production (put behind reverse proxy)
- Reverse-proxy and forwarded-header guidance: `doc/reverse_proxy.md`
- Secure the database file (`bidone.db`) - contains password hashes and API keys
- Secure the data directory (`./data`) - contains all stored objects
- Regular backups of database and data directory
- Use environment variables for credentials in automated deployments
- Set a persistent encryption key in production (see below)
- Debian package AppArmor profile: the `.deb` package ships `/etc/apparmor.d/usr.bin.bidone` for the packaged default paths. If you move the database or data directory, update both the AppArmor profile and the systemd override together.
- SQLite temporary files under confinement: the packaged systemd service sets `SQLITE_TMPDIR=/var/lib/bidone/tmp` so SQLite temp files (`etilqs_*`) stay inside the allowed write tree.
  - Detailed Debian package instructions: `doc/debian-apparmor-path-reconfiguration.md`
- RPM SELinux module: RPM packages ship and install a SELinux policy module for the packaged default paths.
  - Detailed SELinux instructions: `doc/selinux-package-reconfiguration.md`
Encryption Key
Bidone encrypts S3 secret keys at rest using AES-256-GCM. The encryption key can be configured via:
- Environment variable (highest priority): `BIDONE_ENCRYPTION_KEY`
- Config file: `security.encryption_key`
The key must be exactly 32 bytes, base64-encoded (44 characters).
Generate a key:
# Using openssl
openssl rand -base64 32
# Using Python
python3 -c "import secrets, base64; print(base64.b64encode(secrets.token_bytes(32)).decode())"
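Before deploying, you can sanity-check that a candidate key decodes to exactly 32 bytes (an illustrative check, not part of Bidone):

```python
import base64
import secrets

def valid_encryption_key(key_b64: str) -> bool:
    """A valid key is base64 text that decodes to exactly 32 bytes (AES-256)."""
    try:
        return len(base64.b64decode(key_b64, validate=True)) == 32
    except ValueError:  # covers binascii.Error (a ValueError subclass)
        return False

key = base64.b64encode(secrets.token_bytes(32)).decode()
print(len(key), valid_encryption_key(key))  # 44 True
print(valid_encryption_key("not a key"))    # False
```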
Set via environment variable:
export BIDONE_ENCRYPTION_KEY="your-base64-encoded-32-byte-key"
./bidone
Set via config file:
security:
encryption_key: "your-base64-encoded-32-byte-key"
Important:
- In development, if no key is set, a random key is auto-generated at startup. This means secret keys cannot be decrypted after a restart.
- In production, always set a persistent encryption key and back it up securely.
- If you lose the encryption key, all stored S3 secret keys become unrecoverable.
- The environment variable takes precedence over the config file.
License
GNU Affero General Public License v3.0 (AGPLv3)
Contributing
- Fork the repository
- Create a feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request