Bidone
S3-compatible storage service with Unix-like permissions (rwx owner/group/others) instead of standard S3 IAM policies.
Table of Contents
- Features
- Quick Start
- Configuration
- Unix Permissions Model
- S3 API Reference
- Web UI
- Database Schema
- Storage Layout
- Examples
- Development
- Profiling
- Limitations
- Security Considerations
- License
- Contributing
Features
- S3 API Compatibility: Works with AWS CLI, SDKs, and S3-compatible tools
- Unix Permissions: Familiar rwx permission model for owner/group/others
- Web UI: HTMX-powered admin interface for managing buckets, objects, users, and permissions
- Versioning: Full object versioning support
- Multipart Upload: Support for large file uploads
- Presigned URLs: Generate time-limited access URLs
- SQLite Backend: Simple, embedded database for metadata
- Filesystem Storage: Objects stored on local filesystem
Quick Start
Build and Run
# Clone and build
cd bidone
go build -o bidone ./cmd/bidone
# Run with defaults
./bidone
# Or run directly
go run ./cmd/bidone
First Run - Admin Credentials
On first startup, Bidone automatically creates an admin user and prints the credentials to the terminal:
============================================================
ADMIN CREDENTIALS
============================================================
Web UI Login:
Username: admin
Password: xK#9mPq2$nL5vR8w (generated)
S3 API Credentials:
Access Key: AKIA7F3E9A2B1C4D5E6F (generated)
Secret Key: 8a9b7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a0b (generated)
------------------------------------------------------------
Save these credentials! They won't be shown again.
You can change them later via the Web UI.
============================================================
Important: Save these credentials immediately - they are only displayed once!
You can also set credentials via environment variables (see Configuration section).
Test with AWS CLI
# Configure credentials (use values from first run output)
export AWS_ACCESS_KEY_ID=<your-access-key>
export AWS_SECRET_ACCESS_KEY=<your-secret-key>
# Create bucket
aws --endpoint-url http://localhost:8080 s3 mb s3://my-bucket
# Upload file
aws --endpoint-url http://localhost:8080 s3 cp README.md s3://my-bucket/
# List objects
aws --endpoint-url http://localhost:8080 s3 ls s3://my-bucket/
# Download file
aws --endpoint-url http://localhost:8080 s3 cp s3://my-bucket/README.md ./downloaded.md
# Delete file
aws --endpoint-url http://localhost:8080 s3 rm s3://my-bucket/README.md
# Delete bucket
aws --endpoint-url http://localhost:8080 s3 rb s3://my-bucket
Configuration
Configuration File
Create a config.yaml file:
server:
host: "0.0.0.0"
port: 8080 # S3 API port
web_port: 8081 # Web UI port
pprof_port: 6060 # pprof port (optional, disabled if 0)
database:
path: "./bidone.db"
storage:
data_dir: "./data"
security:
encryption_key: "base64-encoded-32-byte-key" # For encrypting secret keys (see below)
Run with config:
./bidone -config config.yaml
Environment Variables
Server Configuration
| Variable | Description | Default |
|---|---|---|
| `BIDONE_HOST` | Server bind address | `0.0.0.0` |
| `BIDONE_PORT` | S3 API server port | `8080` |
| `BIDONE_WEB_PORT` | Web UI server port | `8081` |
| `BIDONE_DB_PATH` | SQLite database path | `./bidone.db` |
| `BIDONE_DATA_DIR` | Object storage directory | `./data` |
| `BIDONE_TEMPLATES_DIR` | HTML templates directory | `web/templates` |
| `BIDONE_PPROF_PORT` | pprof server port (disabled if not set) | (disabled) |
| `BIDONE_ENCRYPTION_KEY` | Base64-encoded 32-byte key for encrypting S3 secret keys | (auto-generated) |
Admin Credentials (first run only)
| Variable | Description | Default |
|---|---|---|
| `BIDONE_ADMIN_USERNAME` | Admin username | `admin` |
| `BIDONE_ADMIN_PASSWORD` | Admin password | (generated) |
| `BIDONE_ADMIN_ACCESS_KEY` | S3 access key | (generated) |
| `BIDONE_ADMIN_SECRET_KEY` | S3 secret key | (generated) |
Example with custom credentials:
BIDONE_ADMIN_USERNAME=superadmin \
BIDONE_ADMIN_PASSWORD=mysecretpassword \
./bidone
Note: Admin environment variables are only used on first run when no admin user exists.
Command Line Flags
./bidone [flags]
| Flag | Description | Example |
|---|---|---|
| `-config` | Path to config file | `-config /etc/bidone/config.yaml` |
| `-data` | Storage data directory | `-data /var/lib/bidone/data` |
| `-db` | SQLite database path | `-db /var/lib/bidone/bidone.db` |
| `-help` | Show help message | `-help` |
| `-listen` | S3 API listen address (host:port) | `-listen :8080` or `-listen 0.0.0.0:9000` |
| `-web-listen` | Web UI listen address (host:port) | `-web-listen :8081` or `-web-listen :9001` |
| `-migrations` | Path to migrations file | `-migrations /etc/bidone/migrations.sql` |
| `-reindex` | Scan storage and index existing files | `-reindex` |
| `-pprof-listen` | pprof server address (disabled if not set) | `-pprof-listen :6060` |
Examples:
# Basic usage with custom data directory
./bidone -data /mnt/storage/bidone
# Full configuration via flags
./bidone -listen :9000 -data /var/lib/bidone/data -db /var/lib/bidone/bidone.db
# Using config file
./bidone -config /etc/bidone/config.yaml
Priority order (highest to lowest):
- Command line flags
- Environment variables
- Config file
- Default values
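The precedence order above can be sketched as a simple first-non-empty lookup. This is an illustrative helper, not Bidone's actual configuration code; the `resolve` name and string-based values are assumptions for the example.

```go
package main

import "fmt"

// resolve picks a setting using the documented precedence:
// command-line flag > environment variable > config file > default.
// The first non-empty value wins.
func resolve(flagVal, envVal, fileVal, defVal string) string {
	for _, v := range []string{flagVal, envVal, fileVal} {
		if v != "" {
			return v
		}
	}
	return defVal
}

func main() {
	// A flag is set: it overrides everything else.
	fmt.Println(resolve(":9000", ":8081", ":8080", ":8080"))
	// No flag: the environment variable beats the config file.
	fmt.Println(resolve("", ":8081", ":8080", ":8080"))
	// Nothing set: the default applies.
	fmt.Println(resolve("", "", "", ":8080"))
}
```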
Importing Existing Files
You can point Bidone to an existing directory and import all files using the -reindex flag.
Directory structure required:
<data-dir>/
├── bucket1/
│ └── objects/
│ ├── file1.txt
│ ├── file2.jpg
│ └── subdir/
│ └── file3.pdf
└── bucket2/
└── objects/
└── document.docx
Import commands:
# Import existing directory
./bidone -data /path/to/existing/data -reindex
# Or if data is already configured
./bidone -reindex
What happens during reindex:
- Scans all directories in `<data-dir>/`; each becomes a bucket
- Scans each bucket's `objects/` subdirectory for files
- Creates metadata entries for files not already in the database
- Calculates the MD5 hash (ETag) for each file
- Detects content type from the file extension
- Preserves existing metadata (won't overwrite)
Quick setup for existing files:
# Create the structure
mkdir -p /mnt/storage/my-bucket/objects
# Copy or move your files
cp -r /path/to/files/* /mnt/storage/my-bucket/objects/
# Start Bidone with reindex
./bidone -data /mnt/storage -reindex
Output example:
2024/01/15 10:30:00 Starting reindex of storage directory...
Creating bucket: my-bucket
Indexed: file1.txt (1024 bytes)
Indexed: images/photo.jpg (2048576 bytes)
Indexed: docs/report.pdf (512000 bytes)
2024/01/15 10:30:05 Reindex complete
Unix Permissions Model
Bidone uses Unix-style permissions instead of S3 IAM policies:
Permission Bits
rwx rwx rwx
│││ │││ │││
│││ │││ ││└─ Others: Execute
│││ │││ │└── Others: Write
│││ │││ └─── Others: Read
│││ ││└───── Group: Execute
│││ │└────── Group: Write
│││ └─────── Group: Read
││└───────── Owner: Execute
│└────────── Owner: Write
└─────────── Owner: Read
Common Modes
| Mode | Octal | Description |
|---|---|---|
| `rwx------` | 700 | Owner only (default for buckets) |
| `rw-r--r--` | 644 | Owner write, all read (default for objects) |
| `rw-rw-r--` | 664 | Owner/group write, all read |
| `rwxr-xr-x` | 755 | Owner all, group/others read+execute |
| `rwxrwxrwx` | 777 | Full access for everyone |
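The octal modes above decompose into three rwx triplets, exactly as in Unix. A minimal sketch of such a check (helper names are illustrative, not Bidone's internal API):

```go
package main

import "fmt"

// Permission bits, mirroring Unix: read=4, write=2, execute=1.
const (
	permRead  = 4
	permWrite = 2
	permExec  = 1
)

// allowed reports whether one class has a permission bit in an octal mode.
// The shift selects the class: 6 for owner, 3 for group, 0 for others.
func allowed(mode, shift, perm uint) bool {
	return (mode>>shift)&perm != 0
}

func main() {
	mode := uint(0o644) // rw-r--r--, the default for objects
	fmt.Println(allowed(mode, 6, permWrite)) // owner may write: true
	fmt.Println(allowed(mode, 0, permWrite)) // others may write: false
	fmt.Println(allowed(mode, 0, permRead))  // others may read: true
}
```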
S3 Operations to Permissions Mapping
| S3 Operation | Required Permission |
|---|---|
| GetObject, HeadObject | Read |
| ListObjects, ListObjectVersions | Read (on bucket) |
| PutObject, DeleteObject | Write |
| CreateBucket, DeleteBucket | Write (admin or owner) |
| GetBucketVersioning | Read |
| PutBucketVersioning | Write |
Setting Permissions
Via Web UI:
- Navigate to the bucket or object
- Click "Settings" or "Permissions"
- Enter an octal mode (e.g., `755`)
- Save
Via API (planned):
# Custom header for permission setting (future feature)
aws --endpoint-url http://localhost:8080 s3 cp file.txt s3://bucket/ \
--metadata "x-amz-meta-mode=644"
S3 API Reference
Supported Operations
Service Operations
- `GET /` - ListBuckets
Bucket Operations
- `PUT /{bucket}` - CreateBucket
- `DELETE /{bucket}` - DeleteBucket
- `HEAD /{bucket}` - HeadBucket
- `GET /{bucket}` - ListObjectsV2
- `GET /{bucket}?versions` - ListObjectVersions
- `GET /{bucket}?versioning` - GetBucketVersioning
- `PUT /{bucket}?versioning` - PutBucketVersioning
- `GET /{bucket}?uploads` - ListMultipartUploads
Object Operations
- `GET /{bucket}/{key}` - GetObject
- `PUT /{bucket}/{key}` - PutObject
- `DELETE /{bucket}/{key}` - DeleteObject
- `HEAD /{bucket}/{key}` - HeadObject
- `PUT /{bucket}/{key}` (with `x-amz-copy-source`) - CopyObject
Versioning
- `GET /{bucket}/{key}?versionId={id}` - GetObject (specific version)
- `DELETE /{bucket}/{key}?versionId={id}` - DeleteObject (specific version)
Multipart Upload
- `POST /{bucket}/{key}?uploads` - CreateMultipartUpload
- `PUT /{bucket}/{key}?uploadId={id}&partNumber={n}` - UploadPart
- `POST /{bucket}/{key}?uploadId={id}` - CompleteMultipartUpload
- `DELETE /{bucket}/{key}?uploadId={id}` - AbortMultipartUpload
- `GET /{bucket}/{key}?uploadId={id}` - ListParts
Authentication
Bidone supports AWS Signature Version 4 authentication:
# Using AWS CLI (automatically signs requests)
aws --endpoint-url http://localhost:8080 s3 ls
# Using curl with presigned URL
curl "http://localhost:8080/bucket/key?X-Amz-Algorithm=AWS4-HMAC-SHA256&..."
Request Headers
| Header | Description |
|---|---|
| `Authorization` | AWS Signature V4 |
| `X-Amz-Date` | Request timestamp |
| `X-Amz-Content-SHA256` | Payload hash |
| `X-Amz-Copy-Source` | Source for copy operation |
| `Content-Type` | Object content type |
Response Headers
| Header | Description |
|---|---|
| `ETag` | Object hash |
| `X-Amz-Version-Id` | Version ID (if versioning enabled) |
| `X-Amz-Delete-Marker` | Delete marker indicator |
| `Last-Modified` | Object modification time |
Web UI
Dashboard
Access the web interface at http://localhost:8081/ui (the Web UI listens on its own port, 8081 by default).
Features
- Buckets: Create, delete, configure versioning, manage permissions
- Objects: Upload, download, delete, view metadata
- Users (admin only): Create, delete, manage groups, regenerate API keys
- Groups (admin only): Create, delete, manage members
- Permissions: Set owner, group, and mode for buckets and objects
Navigation
| URL | Description |
|---|---|
| `/ui/login` | Login page |
| `/ui/buckets` | Bucket list |
| `/ui/buckets/{name}` | Bucket detail / object browser |
| `/ui/users` | User management (admin) |
| `/ui/users/{id}` | User detail |
| `/ui/groups` | Group management (admin) |
| `/ui/groups/{id}` | Group detail |
Database Schema
Tables
-- Users
CREATE TABLE users (
id INTEGER PRIMARY KEY,
username TEXT UNIQUE NOT NULL,
password_hash TEXT NOT NULL,
access_key TEXT UNIQUE,
secret_key TEXT,
is_admin BOOLEAN DEFAULT FALSE,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
-- Groups
CREATE TABLE groups (
id INTEGER PRIMARY KEY,
name TEXT UNIQUE NOT NULL
);
-- User-Group membership
CREATE TABLE user_groups (
user_id INTEGER REFERENCES users(id),
group_id INTEGER REFERENCES groups(id),
PRIMARY KEY (user_id, group_id)
);
-- Buckets
CREATE TABLE buckets (
id INTEGER PRIMARY KEY,
name TEXT UNIQUE NOT NULL,
owner_id INTEGER REFERENCES users(id),
group_id INTEGER REFERENCES groups(id),
mode INTEGER DEFAULT 448, -- 0700
versioning_enabled BOOLEAN DEFAULT FALSE,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
-- Objects
CREATE TABLE objects (
id INTEGER PRIMARY KEY,
bucket_id INTEGER REFERENCES buckets(id),
key TEXT NOT NULL,
version_id TEXT,
size INTEGER,
etag TEXT,
content_type TEXT,
owner_id INTEGER REFERENCES users(id),
group_id INTEGER REFERENCES groups(id),
mode INTEGER DEFAULT 420, -- 0644
is_delete_marker BOOLEAN DEFAULT FALSE,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
UNIQUE(bucket_id, key, version_id)
);
-- Multipart uploads
CREATE TABLE multipart_uploads (
id INTEGER PRIMARY KEY,
upload_id TEXT UNIQUE NOT NULL,
bucket_id INTEGER REFERENCES buckets(id),
key TEXT NOT NULL,
owner_id INTEGER REFERENCES users(id),
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
-- Multipart parts
CREATE TABLE multipart_parts (
upload_id TEXT REFERENCES multipart_uploads(upload_id),
part_number INTEGER,
etag TEXT,
size INTEGER,
PRIMARY KEY (upload_id, part_number)
);
-- Web sessions
CREATE TABLE sessions (
id TEXT PRIMARY KEY,
user_id INTEGER REFERENCES users(id),
expires_at DATETIME NOT NULL,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
Storage Layout
Objects are stored directly in bucket directories:
data/
├── .bidone/ # Hidden metadata directory
│ └── bucket-name/
│ ├── versions/
│ │ └── path/to/key/
│ │ ├── version-id-1
│ │ └── version-id-2
│ └── multipart/
│ └── upload-id/
│ ├── 1 # Part 1
│ ├── 2 # Part 2
│ └── ...
└── bucket-name/
    └── path/to/key # Current version (direct access)
This layout allows direct filesystem access to objects without navigating through intermediate directories.
Examples
Python (boto3)
import boto3
# Use credentials from first run output
s3 = boto3.client(
's3',
endpoint_url='http://localhost:8080',
aws_access_key_id='<your-access-key>',
aws_secret_access_key='<your-secret-key>',
region_name='us-east-1'
)
# Create bucket
s3.create_bucket(Bucket='my-bucket')
# Upload file
s3.upload_file('local-file.txt', 'my-bucket', 'remote-key.txt')
# List objects
response = s3.list_objects_v2(Bucket='my-bucket')
for obj in response.get('Contents', []):
print(obj['Key'], obj['Size'])
# Download file
s3.download_file('my-bucket', 'remote-key.txt', 'downloaded.txt')
# Enable versioning
s3.put_bucket_versioning(
Bucket='my-bucket',
VersioningConfiguration={'Status': 'Enabled'}
)
# Upload with versioning (returns version ID)
response = s3.put_object(
Bucket='my-bucket',
Key='versioned-file.txt',
Body=b'content v1'
)
print('Version ID:', response.get('VersionId'))
JavaScript (AWS SDK v3)
import { S3Client, CreateBucketCommand, PutObjectCommand } from "@aws-sdk/client-s3";
// Use credentials from first run output
const client = new S3Client({
endpoint: "http://localhost:8080",
region: "us-east-1",
credentials: {
accessKeyId: "<your-access-key>",
secretAccessKey: "<your-secret-key>"
},
forcePathStyle: true
});
// Create bucket
await client.send(new CreateBucketCommand({ Bucket: "my-bucket" }));
// Upload object
await client.send(new PutObjectCommand({
Bucket: "my-bucket",
Key: "hello.txt",
Body: "Hello, World!"
}));
Go
package main
import (
"context"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/credentials"
"github.com/aws/aws-sdk-go-v2/service/s3"
)
func main() {
// Use credentials from first run output
cfg, _ := config.LoadDefaultConfig(context.TODO(),
config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(
"<your-access-key>",
"<your-secret-key>",
"",
)),
config.WithRegion("us-east-1"),
)
client := s3.NewFromConfig(cfg, func(o *s3.Options) {
o.BaseEndpoint = aws.String("http://localhost:8080")
o.UsePathStyle = true
})
// Create bucket
client.CreateBucket(context.TODO(), &s3.CreateBucketInput{
Bucket: aws.String("my-bucket"),
})
}
curl
# Note: curl requires manual signature generation or presigned URLs
# List buckets (with presigned URL from web UI)
curl "http://localhost:8080/?X-Amz-Algorithm=AWS4-HMAC-SHA256&..."
# Upload with presigned URL
curl -X PUT -T file.txt "http://localhost:8080/bucket/key?X-Amz-Algorithm=..."
Multipart Upload (Large Files)
# AWS CLI automatically uses multipart for files > 8MB
aws --endpoint-url http://localhost:8080 s3 cp large-file.zip s3://my-bucket/
# When streaming from stdin, pass --expected-size so the CLI can plan part sizes
cat large-file.zip | aws --endpoint-url http://localhost:8080 s3 cp - s3://my-bucket/large-file.zip \
  --expected-size 1073741824
Versioning
# Enable versioning
aws --endpoint-url http://localhost:8080 s3api put-bucket-versioning \
--bucket my-bucket \
--versioning-configuration Status=Enabled
# Upload multiple versions
echo "v1" | aws --endpoint-url http://localhost:8080 s3 cp - s3://my-bucket/file.txt
echo "v2" | aws --endpoint-url http://localhost:8080 s3 cp - s3://my-bucket/file.txt
echo "v3" | aws --endpoint-url http://localhost:8080 s3 cp - s3://my-bucket/file.txt
# List versions
aws --endpoint-url http://localhost:8080 s3api list-object-versions \
--bucket my-bucket
# Get specific version
aws --endpoint-url http://localhost:8080 s3api get-object \
--bucket my-bucket \
--key file.txt \
--version-id "abc123" \
output.txt
# Delete specific version
aws --endpoint-url http://localhost:8080 s3api delete-object \
--bucket my-bucket \
--key file.txt \
--version-id "abc123"
Development
Project Structure
bidone/
├── cmd/
│ └── bidone/
│ └── main.go # Application entry point
├── internal/
│ ├── server/
│ │ └── server.go # HTTP server and routing
│ ├── api/
│ │ ├── s3/
│ │ │ ├── handler.go # S3 API router
│ │ │ ├── bucket.go # Bucket operations
│ │ │ ├── object.go # Object operations
│ │ │ ├── list.go # List operations
│ │ │ ├── multipart.go # Multipart upload
│ │ │ ├── versioning.go # Versioning operations
│ │ │ └── presigned.go # Presigned URL generation
│ │ └── web/
│ │ ├── handler.go # Web UI router
│ │ ├── auth.go # Login/logout
│ │ ├── buckets.go # Bucket management
│ │ ├── objects.go # Object management
│ │ └── users.go # User/group management
│ ├── storage/
│ │ ├── backend.go # Storage interface
│ │ └── filesystem.go # Filesystem implementation
│ ├── permissions/
│ │ ├── model.go # Permission constants and types
│ │ └── checker.go # Permission verification
│ ├── auth/
│ │ ├── users.go # User CRUD
│ │ ├── groups.go # Group CRUD
│ │ ├── session.go # Web session management
│ │ └── s3auth.go # AWS Signature V4
│ ├── metadata/
│ │ ├── db.go # SQLite setup
│ │ ├── buckets.go # Bucket metadata
│ │ ├── objects.go # Object metadata
│ │ └── versions.go # Multipart upload metadata
│ └── config/
│ └── config.go # Configuration loading
├── web/
│ ├── templates/ # HTML templates
│ │ ├── layout.html
│ │ ├── login.html
│ │ ├── buckets.html
│ │ ├── objects.html
│ │ ├── users.html
│ │ ├── user_detail.html
│ │ ├── groups.html
│ │ ├── group_detail.html
│ │ └── partials/
│ └── static/
│ └── style.css
├── migrations/
│ └── 001_initial.sql # Database schema
├── config.yaml # Default configuration
└── go.mod # Go module definition
Building
# Development build
go build -o bidone ./cmd/bidone
# Production build
CGO_ENABLED=1 go build -ldflags="-s -w" -o bidone ./cmd/bidone
# Cross-compile (requires CGO for SQLite)
# Linux
GOOS=linux GOARCH=amd64 CGO_ENABLED=1 go build -o bidone-linux ./cmd/bidone
# macOS
GOOS=darwin GOARCH=amd64 CGO_ENABLED=1 go build -o bidone-darwin ./cmd/bidone
Container Build
Build an OCI-compliant container image using Podman or Docker:
# Build with Podman
podman build -t bidone -f Containerfile .
# Build with Docker
docker build -t bidone -f Containerfile .
Run the container:
# Basic run with persistent data
podman run -d --name bidone \
-p 8080:8080 \
-p 8081:8081 \
-v ./data:/data \
bidone
# With environment variables
podman run -d --name bidone \
-p 8080:8080 \
-p 8081:8081 \
-v ./data:/data \
-e BIDONE_ENCRYPTION_KEY="your-base64-key" \
bidone
# Maintenance shell (runs as root)
podman run -it --rm -v ./data:/data bidone sh
Container details:
- Base image: Alpine Linux 3.21
- User: `bidone` (uid/gid 1000)
- Data volume: `/data` (database at `/data/bidone.db`, objects at `/data/data/`)
- Ports: 8080 (S3 API), 8081 (Web UI)
The entrypoint automatically fixes ownership of /data and /data/data directories on startup.
Dependencies
go mod download
go mod tidy
Required packages:
- `github.com/google/uuid` - UUID generation
- `github.com/mattn/go-sqlite3` - SQLite driver (requires CGO)
- `golang.org/x/crypto` - Password hashing (bcrypt)
- `gopkg.in/yaml.v3` - YAML configuration
Testing
# Run all tests
go test ./...
# With coverage
go test -cover ./...
# Verbose
go test -v ./...
Profiling
Bidone includes built-in support for Go's pprof profiler, useful for diagnosing performance issues, memory leaks, and goroutine problems.
Enabling pprof
The pprof server runs on a separate port for security (not exposed on the S3 API port).
# Via command line flag
./bidone -pprof-listen :6060
# Via environment variable
BIDONE_PPROF_PORT=6060 ./bidone
# Via config file
# server:
# pprof_port: 6060
Available Endpoints
Once enabled, access http://localhost:6060/debug/pprof/ for the index page.
| Endpoint | Description |
|---|---|
| `/debug/pprof/` | Index page with links to all profiles |
| `/debug/pprof/heap` | Memory allocations of live objects |
| `/debug/pprof/goroutine` | Stack traces of all current goroutines |
| `/debug/pprof/allocs` | Sampling of all past memory allocations |
| `/debug/pprof/block` | Stack traces that led to blocking on sync primitives |
| `/debug/pprof/mutex` | Stack traces of holders of contended mutexes |
| `/debug/pprof/profile` | CPU profile (30s by default) |
| `/debug/pprof/trace` | Execution trace |
| `/debug/pprof/cmdline` | Command line invocation |
| `/debug/pprof/symbol` | Symbol lookup |
Using go tool pprof
# Analyze heap memory
go tool pprof http://localhost:6060/debug/pprof/heap
# CPU profile (30 seconds)
go tool pprof http://localhost:6060/debug/pprof/profile
# CPU profile with custom duration
go tool pprof 'http://localhost:6060/debug/pprof/profile?seconds=60'
# Goroutine analysis
go tool pprof http://localhost:6060/debug/pprof/goroutine
# Interactive commands in pprof:
# top - Show top functions by resource usage
# top20 - Show top 20 functions
# list foo - Show source code for function 'foo'
# web - Open graph in browser (requires graphviz)
# png - Generate PNG graph
Trace Analysis
# Capture 5 second trace
curl -o trace.out 'http://localhost:6060/debug/pprof/trace?seconds=5'
# Analyze trace
go tool trace trace.out
Common Debugging Scenarios
High memory usage:
go tool pprof http://localhost:6060/debug/pprof/heap
(pprof) top
(pprof) list <function_name>
Goroutine leak:
curl 'http://localhost:6060/debug/pprof/goroutine?debug=2'
Slow requests:
go tool pprof 'http://localhost:6060/debug/pprof/profile?seconds=30'
(pprof) top
(pprof) web # requires graphviz
Security Note
The pprof endpoints expose internal application details. In production:
- Use a non-public port
- Restrict access via firewall rules
- Consider disabling when not actively debugging
Limitations
- Single node only: No clustering or replication
- Filesystem storage: Not suitable for very large deployments
- No lifecycle policies: Objects don't auto-expire
- No bucket policies: Only Unix permissions
- No cross-region replication: Single location
- No server-side encryption: Objects stored in plaintext
- No object locking: No WORM compliance
Security Considerations
- Save credentials on first run - they are randomly generated and shown only once
- Use HTTPS in production (put behind reverse proxy)
- Secure the database file (`bidone.db`); it contains password hashes and API keys
- Secure the data directory (`./data`); it contains all stored objects
- Take regular backups of the database and data directory
- Use environment variables for credentials in automated deployments
- Set a persistent encryption key in production (see below)
Encryption Key
Bidone encrypts S3 secret keys at rest using AES-256-GCM. The encryption key can be configured via:
- Environment variable (highest priority): `BIDONE_ENCRYPTION_KEY`
- Config file: `security.encryption_key`
The key must be exactly 32 bytes, base64-encoded (44 characters).
Generate a key:
# Using openssl
openssl rand -base64 32
# Using Python
python3 -c "import secrets, base64; print(base64.b64encode(secrets.token_bytes(32)).decode())"
Set via environment variable:
export BIDONE_ENCRYPTION_KEY="your-base64-encoded-32-byte-key"
./bidone
Set via config file:
security:
encryption_key: "your-base64-encoded-32-byte-key"
Important:
- In development, if no key is set, a random key is auto-generated at startup. This means secret keys cannot be decrypted after a restart.
- In production, always set a persistent encryption key and back it up securely.
- If you lose the encryption key, all stored S3 secret keys become unrecoverable.
- The environment variable takes precedence over the config file.
License
GNU Affero General Public License v3.0 (AGPLv3)
Contributing
- Fork the repository
- Create a feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request