A modular automation framework for local file / directory / ZIP operations,
SSRF-validated HTTP downloads, remote storage (Google Drive, S3, Azure Blob,
Dropbox, SFTP), and JSON-driven action execution over embedded TCP / HTTP
servers. Ships with a PySide6 GUI that exposes every feature through tabs.
All public functionality is re-exported from the top-level automation_file
facade.
- Local file / directory / ZIP operations with path traversal guard (`safe_join`)
- Validated HTTP downloads with SSRF protections, retry, and size / time caps
- Google Drive CRUD (upload, download, search, delete, share, folders)
- First-class S3, Azure Blob, Dropbox, and SFTP backends — installed by default
- JSON action lists executed by a shared `ActionExecutor` — validate, dry-run, parallel
- Loopback-first TCP and HTTP servers that accept JSON command batches with optional shared-secret auth
- Reliability primitives: `retry_on_transient` decorator, `Quota` size / time budgets
- File-watcher triggers — run an action list whenever a path changes (`FA_watch_*`)
- Cron scheduler — recurring action lists on a stdlib-only 5-field parser (`FA_schedule_*`)
- Transfer progress + cancellation — opt-in `progress_name` hook on HTTP and S3 transfers (`FA_progress_*`)
- Fast file search — OS index fast path (`mdfind`/`locate`/`es.exe`) with a streaming `scandir` fallback (`FA_fast_find`)
- Checksums + integrity verification — streaming `file_checksum`/`verify_checksum` with any `hashlib` algorithm; `download_file(expected_sha256=...)` verifies after transfer (`FA_file_checksum`, `FA_verify_checksum`)
- Resumable HTTP downloads — `download_file(resume=True)` writes to `<target>.part` and sends `Range: bytes=<n>-` so interrupted transfers continue
- Duplicate-file finder — three-stage size → partial-hash → full-hash pipeline; unique-size files are never hashed (`FA_find_duplicates`)
- DAG action executor — topological scheduling with parallel fan-out and per-branch skip-on-failure (`FA_execute_action_dag`)
- Entry-point plugins — third-party packages register their own `FA_*` actions via `[project.entry-points."automation_file.actions"]`; `build_default_registry()` picks them up automatically
- Incremental directory sync — rsync-style mirror with size+mtime or checksum change detection, optional delete of extras, dry-run (`FA_sync_dir`)
- Directory manifests — JSON snapshot of every file's checksum under a root, with separate missing/modified/extra reporting on verify (`FA_write_manifest`, `FA_verify_manifest`)
- Notification sinks — webhook / Slack / SMTP / Telegram / Discord / Teams / PagerDuty with a fanout manager that does per-sink error isolation and sliding-window dedup; auto-notify on trigger + scheduler failures (`FA_notify_send`, `FA_notify_list`)
- Config file + secret providers — declare notification sinks / defaults in `automation_file.toml`; `${env:…}` and `${file:…}` references resolve through an Env/File/Chained provider abstraction so secrets stay out of the file itself
- Config hot reload — `ConfigWatcher` polls `automation_file.toml` and re-applies sinks / defaults on change without restart
- Shell / grep / JSON edit / tar / backup rotation — `FA_run_shell` (argument-list subprocess with timeout), `FA_grep` (streaming text search), `FA_json_get`/`FA_json_set`/`FA_json_delete` (in-place JSON editing), `FA_create_tar`/`FA_extract_tar`, `FA_rotate_backups`
- FTP / FTPS backend — plain FTP or explicit FTPS via `FTP_TLS.auth()`; auto-registered as `FA_ftp_*`
- Cross-backend copy — `FA_copy_between` moves data between any two backends via `local://`, `s3://`, `drive://`, `azure://`, `dropbox://`, `sftp://`, `ftp://` URIs
- Scheduler overlap guard — running jobs are skipped on the next fire unless `allow_overlap=True`
- Server action ACL — `allowed_actions=(...)` restricts which commands TCP / HTTP servers will dispatch
- Variable substitution — opt-in `${env:VAR}`/`${date:%Y-%m-%d}`/`${uuid}`/`${cwd}` expansion in action arguments via `execute_action(..., substitute=True)`
- Conditional execution — `FA_if_exists`/`FA_if_newer`/`FA_if_size_gt` run a nested action list only when a guard passes
- SQLite audit log — `AuditLog(db_path)` records every action execution with actor / status / duration; query via `recent`/`count`/`purge`
- File integrity monitor — `IntegrityMonitor` polls a tree against a manifest and fires a callback + notification on drift
- HTTPActionClient SDK — typed Python client for the HTTP action server with shared-secret auth, loopback guard, and OPTIONS-based ping
- AES-256-GCM file encryption — `encrypt_file`/`decrypt_file` with `generate_key()`/`key_from_password()` (PBKDF2-HMAC-SHA256); JSON actions `FA_encrypt_file`/`FA_decrypt_file`
- Prometheus metrics exporter — `start_metrics_server()` exposes `automation_file_actions_total{action,status}` counters and `automation_file_action_duration_seconds{action}` histograms
- WebDAV backend — `WebDAVClient` with `exists`/`upload`/`download`/`delete`/`mkcol`/`list_dir` on any RFC 4918 server; rejects private / loopback targets unless `allow_private_hosts=True`
- SMB / CIFS backend — `SMBClient` over `smbprotocol`'s high-level `smbclient` API; UNC-based, encrypted sessions by default
- fsspec bridge — drive any `fsspec`-backed filesystem (memory, local, s3, gcs, abfs, …) through the action registry with `get_fs`/`fsspec_upload`/`fsspec_download`/`fsspec_list_dir` etc.
- HTTP server observability — `GET /healthz`/`GET /readyz` probes, `GET /openapi.json` spec, and `GET /progress` WebSocket stream of live transfer snapshots
- HTMX Web UI — `start_web_ui()` serves a read-only dashboard (health, progress, registry) that polls HTML fragments; stdlib-only HTTP plus one CDN script with SRI
- MCP (Model Context Protocol) server — `MCPServer` bridges the registry to any MCP host (Claude Desktop, MCP CLIs) over newline-delimited JSON-RPC 2.0 on stdio; every `FA_*` action becomes an MCP tool with an auto-generated input schema
- PySide6 GUI (`python -m automation_file ui`) with a tab per backend, the JSON-action runner, and dedicated tabs for Triggers, Scheduler, and live Progress
- Rich CLI with one-shot subcommands plus legacy JSON-batch flags
- Project scaffolding (`ProjectBuilder`) for executor-based automations
```mermaid
flowchart LR
    User[User / CLI / JSON batch]
    subgraph Facade["automation_file (facade)"]
        Public["Public API<br/>execute_action, execute_action_parallel,<br/>validate_action, driver_instance,<br/>start_autocontrol_socket_server,<br/>start_http_action_server, Quota,<br/>retry_on_transient, safe_join, ..."]
    end
    subgraph Core["core"]
        Registry[(ActionRegistry<br/>FA_* commands)]
        Executor[ActionExecutor]
        Callback[CallbackExecutor]
        Loader[PackageLoader]
        Json[json_store]
        Retry[retry]
        QuotaMod[quota]
        Progress[progress<br/>Token + Reporter]
    end
    subgraph Events["event-driven"]
        TriggerMod["trigger<br/>watchdog file watcher"]
        SchedulerMod["scheduler<br/>cron background thread"]
    end
    subgraph Local["local"]
        FileOps[file_ops]
        DirOps[dir_ops]
        ZipOps[zip_ops]
        Safe[safe_paths]
    end
    subgraph Remote["remote"]
        UrlVal[url_validator]
        Http[http_download]
        Drive["google_drive<br/>client + *_ops"]
        S3["s3"]
        Azure["azure_blob"]
        Dropbox["dropbox_api"]
        SFTP["sftp"]
    end
    subgraph Server["server"]
        TCP[TCPActionServer]
        HTTP[HTTPActionServer]
    end
    subgraph UI["ui (PySide6)"]
        Launcher[launch_ui]
        MainWindow["MainWindow<br/>9-tab control surface"]
    end
    subgraph Project["project / utils"]
        Builder[ProjectBuilder]
        Templates[templates]
        Discovery[file_discovery]
    end
    User --> Public
    User --> Launcher
    Launcher --> MainWindow
    MainWindow --> Public
    Public --> Executor
    Public --> Callback
    Public --> Loader
    Public --> TCP
    Public --> HTTP
    Executor --> Registry
    Executor --> Retry
    Executor --> QuotaMod
    Callback --> Registry
    Loader --> Registry
    TCP --> Executor
    HTTP --> Executor
    Executor --> Json
    Registry --> FileOps
    Registry --> DirOps
    Registry --> ZipOps
    Registry --> Safe
    Registry --> Http
    Registry --> Drive
    Registry --> S3
    Registry --> Azure
    Registry --> Dropbox
    Registry --> SFTP
    Registry --> Builder
    Registry --> TriggerMod
    Registry --> SchedulerMod
    Registry --> Progress
    TriggerMod --> Executor
    SchedulerMod --> Executor
    Http --> Progress
    S3 --> Progress
    Http --> UrlVal
    Http --> Retry
    Builder --> Templates
    Builder --> Discovery
```
The `ActionRegistry` built by `build_default_registry()` is the single source
of truth for every `FA_*` command. `ActionExecutor`, `CallbackExecutor`,
`PackageLoader`, `TCPActionServer`, and `HTTPActionServer` all resolve commands
through the same shared registry instance exposed as `executor.registry`.
```shell
pip install automation_file
```

A single install pulls in every backend (Google Drive, S3, Azure Blob, Dropbox, SFTP) and the PySide6 GUI — no extras required for day-to-day use.

```shell
pip install "automation_file[dev]"   # ruff, mypy, pre-commit, pytest-cov, build, twine
```

Requirements:

- Python 3.10+
- Bundled dependencies: `google-api-python-client`, `google-auth-oauthlib`, `requests`, `tqdm`, `boto3`, `azure-storage-blob`, `dropbox`, `paramiko`, `PySide6`, `watchdog`
```python
from automation_file import execute_action

execute_action([
    ["FA_create_file", {"file_path": "test.txt"}],
    ["FA_copy_file", {"source": "test.txt", "target": "copy.txt"}],
])
```

```python
from automation_file import execute_action, execute_action_parallel, validate_action

# Fail-fast: aborts before any action runs if any name is unknown.
execute_action(actions, validate_first=True)

# Dry-run: log what would be called without invoking commands.
execute_action(actions, dry_run=True)

# Parallel: run independent actions through a thread pool.
execute_action_parallel(actions, max_workers=4)

# Manual validation — returns the list of resolved names.
names = validate_action(actions)
```

```python
from automation_file import driver_instance, drive_upload_to_drive

driver_instance.later_init("token.json", "credentials.json")
drive_upload_to_drive("example.txt")
```

```python
from automation_file import download_file

download_file("https://example.com/file.zip", "file.zip")
```

```python
from automation_file import start_autocontrol_socket_server

server = start_autocontrol_socket_server(
    host="127.0.0.1", port=9943, shared_secret="optional-secret",
)
```

Clients must prefix each payload with `AUTH <secret>\n` when `shared_secret`
is set. Non-loopback binds require `allow_non_loopback=True` explicitly.
```python
from automation_file import start_http_action_server

server = start_http_action_server(
    host="127.0.0.1", port=9944, shared_secret="optional-secret",
)
# curl -H 'Authorization: Bearer optional-secret' \
#      -d '[["FA_create_dir",{"dir_path":"x"}]]' \
#      http://127.0.0.1:9944/actions
```

```python
from automation_file import retry_on_transient, Quota

@retry_on_transient(max_attempts=5, backoff_base=0.5)
def flaky_network_call(): ...

quota = Quota(max_bytes=50 * 1024 * 1024, max_seconds=30.0)
with quota.time_budget("bulk-upload"):
    bulk_upload_work()
```

```python
from automation_file import safe_join

target = safe_join("/data/jobs", user_supplied_path)
# raises PathTraversalException if the resolved path escapes /data/jobs.
```

Every backend is auto-registered by `build_default_registry()`, so `FA_s3_*`,
`FA_azure_blob_*`, `FA_dropbox_*`, and `FA_sftp_*` actions are available out
of the box — no separate `register_*_ops` call needed.
```python
from automation_file import execute_action, s3_instance

s3_instance.later_init(region_name="us-east-1")
execute_action([
    ["FA_s3_upload_file", {"local_path": "report.csv", "bucket": "reports", "key": "report.csv"}],
])
```

All backends (`s3`, `azure_blob`, `dropbox_api`, `sftp`) expose the same five
operations: `upload_file`, `upload_dir`, `download_file`, `delete_*`, `list_*`.
SFTP uses `paramiko.RejectPolicy` — unknown hosts are rejected, not auto-added.
Run an action list whenever a filesystem event fires on a watched path:

```python
from automation_file import watch_start, watch_stop

watch_start(
    name="inbox-sweeper",
    path="/data/inbox",
    action_list=[["FA_copy_all_file_to_dir", {"source_dir": "/data/inbox",
                                              "target_dir": "/data/processed"}]],
    events=["created", "modified"],
    recursive=False,
)
# later:
watch_stop("inbox-sweeper")
```

`FA_watch_start` / `FA_watch_stop` / `FA_watch_stop_all` / `FA_watch_list`
surface the same lifecycle to JSON action lists.
Recurring action lists on a stdlib-only 5-field cron parser:

```python
from automation_file import schedule_add

schedule_add(
    name="nightly-snapshot",
    cron_expression="0 2 * * *",  # every day at 02:00 local time
    action_list=[["FA_zip_dir", {"dir_we_want_to_zip": "/data",
                                 "zip_name": "/backup/data_nightly"}]],
)
```

Supports `*`, exact values, `a-b` ranges, comma lists, and `*/n` step
syntax with `jan`..`dec` / `sun`..`sat` aliases. JSON actions:
`FA_schedule_add`, `FA_schedule_remove`, `FA_schedule_remove_all`,
`FA_schedule_list`.
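The field grammar above can be matched with nothing but the stdlib. The sketch below is illustrative, not the library's parser, and it ANDs all five fields (classic cron OR-combines day-of-month and day-of-week when both are restricted):

```python
from datetime import datetime

# Day names map to 0-6 (sun=0), month names to 1-12.
NAMES = {n: i for i, n in enumerate(
    ["sun", "mon", "tue", "wed", "thu", "fri", "sat"])}
NAMES.update({n: i + 1 for i, n in enumerate(
    ["jan", "feb", "mar", "apr", "may", "jun",
     "jul", "aug", "sep", "oct", "nov", "dec"])})

def _num(tok: str) -> int:
    return int(tok) if tok.isdigit() else NAMES[tok.lower()]

def _field_values(field: str, lo: int, hi: int) -> set[int]:
    """Expand one field: '*', exact values, a-b ranges, comma lists, */n steps."""
    out: set[int] = set()
    for part in field.split(","):
        rng, _, step = part.partition("/")
        step = int(step) if step else 1
        if rng == "*":
            a, b = lo, hi
        elif "-" in rng:
            x, y = rng.split("-")
            a, b = _num(x), _num(y)
        else:
            a = b = _num(rng)
        out.update(range(a, b + 1, step))
    return out

def cron_matches(expr: str, when: datetime) -> bool:
    minute, hour, dom, month, dow = expr.split()
    return (when.minute in _field_values(minute, 0, 59)
            and when.hour in _field_values(hour, 0, 23)
            and when.day in _field_values(dom, 1, 31)
            and when.month in _field_values(month, 1, 12)
            and when.isoweekday() % 7 in _field_values(dow, 0, 6))
```

A scheduler thread then only needs to evaluate `cron_matches(expr, now)` once per minute for each registered job.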
HTTP and S3 transfers accept an opt-in `progress_name` kwarg:

```python
from automation_file import download_file, progress_cancel

download_file("https://example.com/big.bin", "big.bin",
              progress_name="big-download")
# From another thread or the GUI:
progress_cancel("big-download")
```

The shared `progress_registry` exposes live snapshots via `progress_list()`
and the `FA_progress_list` / `FA_progress_cancel` / `FA_progress_clear` JSON
actions. The GUI's Progress tab polls the registry every half second.
Query an OS index when available (`mdfind` on macOS, `locate` / `plocate` on
Linux, Everything's `es.exe` on Windows) and fall back to a streaming
`os.scandir` walk otherwise. No extra dependencies.

```python
from automation_file import fast_find, scandir_find, has_os_index

# Uses the OS indexer when available, scandir fallback otherwise.
results = fast_find("/var/log", "*.log", limit=100)

# Force the portable path (skip the OS indexer).
results = fast_find("/data", "report_*.csv", use_index=False)

# Streaming — stop early without scanning the whole tree.
for path in scandir_find("/data", "*.csv"):
    if "2026" in path:
        break
```

`FA_fast_find` exposes the same function to JSON action lists:

```json
[["FA_fast_find", {"root": "/var/log", "pattern": "*.log", "limit": 50}]]
```

Stream any `hashlib` algorithm; `verify_checksum` compares with
`hmac.compare_digest` (constant-time):

```python
from automation_file import file_checksum, verify_checksum

digest = file_checksum("bundle.tar.gz")            # sha256 by default
verify_checksum("bundle.tar.gz", digest)           # -> True
verify_checksum("bundle.tar.gz", "deadbeef...", algorithm="blake2b")
```

Also available as `FA_file_checksum` / `FA_verify_checksum` JSON actions.
`download_file(resume=True)` writes to `<target>.part` and sends
`Range: bytes=<n>-` on the next attempt. Pair with `expected_sha256=` for
integrity verification once the transfer completes:

```python
from automation_file import download_file

download_file(
    "https://example.com/big.bin",
    "big.bin",
    resume=True,
    expected_sha256="3b0c44298fc1...",
)
```

Three-stage pipeline: size bucket → 64 KiB partial hash → full hash.
Unique-size files are never hashed:

```python
from automation_file import find_duplicates

groups = find_duplicates("/data", min_size=1024)
# list[list[str]] — each inner list is a set of identical files, sorted
# by size descending.
```

`FA_find_duplicates` runs the same search from JSON.
`sync_dir` mirrors `src` into `dst` by copying only files that are new or
changed. Change detection is `(size, mtime)` by default; pass
`compare="checksum"` when mtime is unreliable. Extras under `dst` are left
alone by default — pass `delete=True` to prune them (and `dry_run=True` to
preview):

```python
from automation_file import sync_dir

summary = sync_dir("/data/src", "/data/dst", delete=True)
# summary: {"copied": [...], "skipped": [...], "deleted": [...],
#           "errors": [...], "dry_run": False}
```

Symlinks are re-created as symlinks rather than followed, so a link
pointing outside the tree can't blow up the mirror. JSON action:
`FA_sync_dir`.
Write a JSON manifest of every file's checksum under a tree and verify the
tree hasn't changed later:

```python
from automation_file import write_manifest, verify_manifest

write_manifest("/release/payload", "/release/MANIFEST.json")
# Later…
result = verify_manifest("/release/payload", "/release/MANIFEST.json")
if not result["ok"]:
    raise SystemExit(f"manifest mismatch: {result}")
```

`result` reports `matched`, `missing`, `modified`, and `extra` lists
separately. Extras don't fail verification (mirroring `sync_dir`'s
non-deleting default); missing or modified entries do. JSON actions:
`FA_write_manifest`, `FA_verify_manifest`.
Push one-off messages or auto-notify on trigger/scheduler failures via
webhook, Slack, or SMTP:

```python
from automation_file import (
    SlackSink, WebhookSink, EmailSink,
    notification_manager, notify_send,
)

notification_manager.register(SlackSink("https://hooks.slack.com/services/T/B/X"))
notify_send("deploy complete", body="rev abc123", level="info")
```

Every sink implements the same `send(subject, body, level)` contract. The
fanout `NotificationManager` does per-sink error isolation (one broken
sink doesn't starve the others), sliding-window dedup so a stuck trigger
can't flood a channel, and SSRF validation on every webhook/Slack URL.
Scheduler and trigger dispatchers auto-notify on failure at
`level="error"` — registering a sink is all that's needed. JSON actions:
`FA_notify_send`, `FA_notify_list`.
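A sliding-window dedup of the kind described can be sketched in a few lines (illustrative `DedupWindow`, not the library's class; the injectable clock just makes it testable):

```python
import time

class DedupWindow:
    """Suppress repeats of the same (subject, level) within `seconds`."""

    def __init__(self, seconds: float = 120.0, clock=time.monotonic):
        self.seconds = seconds
        self.clock = clock
        self._last: dict[tuple[str, str], float] = {}

    def should_send(self, subject: str, level: str) -> bool:
        key = (subject, level)
        now = self.clock()
        last = self._last.get(key)
        if last is not None and now - last < self.seconds:
            return False  # same message inside the window: drop it
        self._last[key] = now
        return True
```

A fanout manager then calls `should_send` once per message before looping over its sinks, wrapping each sink's `send` in its own try/except so one failure never blocks the rest.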
Declare sinks and defaults once in `automation_file.toml`. Secret
references resolve at load time from environment variables or a file root
(Docker / K8s style):

```toml
# automation_file.toml
[secrets]
file_root = "/run/secrets"

[defaults]
dedup_seconds = 120

[[notify.sinks]]
type = "slack"
name = "team-alerts"
webhook_url = "${env:SLACK_WEBHOOK}"

[[notify.sinks]]
type = "email"
name = "ops-email"
host = "smtp.example.com"
port = 587
sender = "alerts@example.com"
recipients = ["ops@example.com"]
username = "${env:SMTP_USER}"
password = "${file:smtp_password}"
```

```python
from automation_file import AutomationConfig, notification_manager

config = AutomationConfig.load("automation_file.toml")
config.apply_to(notification_manager)
```

Unresolved `${…}` references raise `SecretNotFoundException` rather than
silently becoming empty strings. Custom provider chains can be built from
`ChainedSecretProvider` / `EnvSecretProvider` / `FileSecretProvider` and
passed as `AutomationConfig.load(path, provider=…)`.
Opt in with `substitute=True` and `${…}` references expand at dispatch time:

```python
from automation_file import execute_action

execute_action(
    [["FA_create_file", {"file_path": "reports/${date:%Y-%m-%d}/${uuid}.txt"}]],
    substitute=True,
)
```

Supports `${env:VAR}`, `${date:FMT}` (strftime), `${uuid}`, `${cwd}`. Unknown
names raise `SubstitutionException` — no silent empty strings.
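The expansion rules reduce to a small regex-driven resolver. This sketch (with a hypothetical `SubstitutionError`) mirrors the fail-loudly behavior described above:

```python
import os
import re
import uuid
from datetime import datetime

_PATTERN = re.compile(r"\$\{([^}]+)\}")

class SubstitutionError(KeyError):
    """Raised for unknown references — never silently substitute ''."""

def substitute(text: str) -> str:
    def repl(m: re.Match) -> str:
        ref = m.group(1)
        if ref.startswith("env:"):
            name = ref[4:]
            if name not in os.environ:
                raise SubstitutionError(name)
            return os.environ[name]
        if ref.startswith("date:"):
            return datetime.now().strftime(ref[5:])
        if ref == "uuid":
            return str(uuid.uuid4())
        if ref == "cwd":
            return os.getcwd()
        raise SubstitutionError(ref)
    return _PATTERN.sub(repl, text)
```

Because `re.sub` propagates exceptions from its replacement callable, a single unknown reference aborts the whole expansion before any action runs.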
Run a nested action list only when a path-based guard passes:

```json
[
  ["FA_if_exists", {"path": "/data/in/job.json",
    "then": [["FA_copy_file", {"source": "/data/in/job.json",
                               "target": "/data/processed/job.json"}]]}],
  ["FA_if_newer", {"source": "/src", "target": "/dst",
    "then": [["FA_sync_dir", {"src": "/src", "dst": "/dst"}]]}],
  ["FA_if_size_gt", {"path": "/logs/app.log", "size": 10485760,
    "then": [["FA_run_shell", {"command": ["logrotate", "/logs/app.log"]}]]}]
]
```

`AuditLog` writes one row per action with short-lived connections and a
module-level lock:

```python
from automation_file import AuditLog

audit = AuditLog("audit.sqlite3")
audit.record(action="FA_copy_file", actor="ops",
             status="ok", duration_ms=12, detail={"src": "a", "dst": "b"})
for row in audit.recent(limit=50):
    print(row["timestamp"], row["action"], row["status"])
```

Poll a tree against a manifest and fire a callback + notification on drift:

```python
from automation_file import IntegrityMonitor, notification_manager, write_manifest

write_manifest("/srv/site", "/srv/MANIFEST.json")
mon = IntegrityMonitor(
    root="/srv/site",
    manifest_path="/srv/MANIFEST.json",
    interval=60.0,
    manager=notification_manager,
    on_drift=lambda summary: print("drift:", summary),
)
mon.start()
```

Manifest-load errors are surfaced as drift, so tampering and configuration
problems don't take silently different code paths.
Authenticated encryption with a self-describing envelope. Derive a key from
a password or generate one directly:

```python
from automation_file import encrypt_file, decrypt_file, key_from_password

key = key_from_password("correct horse battery staple", salt=b"app-salt-v1")
encrypt_file("secret.pdf", "secret.pdf.enc", key, associated_data=b"v1")
decrypt_file("secret.pdf.enc", "secret.pdf", key, associated_data=b"v1")
```

Tampering is detected via GCM's authentication tag and reported as
`CryptoException("authentication failed")`. JSON actions:
`FA_encrypt_file`, `FA_decrypt_file`.
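PBKDF2-HMAC-SHA256 key derivation itself is in the stdlib, so the `key_from_password` idea can be sketched without third-party crypto (illustrative `derive_key`; the iteration count here is an assumption, not the library's default):

```python
import hashlib

def derive_key(password: str, salt: bytes,
               iterations: int = 600_000) -> bytes:
    """PBKDF2-HMAC-SHA256 -> 32-byte key, the size AES-256 expects."""
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                               salt, iterations, dklen=32)
```

The same password and salt always yield the same key, which is what lets a decryptor re-derive it; changing either input changes every output byte.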
Typed client for the HTTP action server; enforces loopback by default and
carries the shared secret for you:

```python
from automation_file import HTTPActionClient

with HTTPActionClient("http://127.0.0.1:9944", shared_secret="s3cr3t") as client:
    client.ping()  # OPTIONS /actions
    result = client.execute([["FA_create_dir", {"dir_path": "x"}]])
```

Auth failures map to `HTTPActionClientException` with `kind="unauthorized"`;
404 responses report that the server exists but does not expose `/actions`.
`ActionExecutor` records one counter row and one histogram sample per
action. Serve them on a loopback `/metrics` endpoint:

```python
from automation_file import start_metrics_server

server = start_metrics_server(host="127.0.0.1", port=9945)
# curl http://127.0.0.1:9945/metrics
```

Exports `automation_file_actions_total{action,status}` and
`automation_file_action_duration_seconds{action}`. Non-loopback binds
require `allow_non_loopback=True` explicitly.
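The Prometheus text exposition format the exporter emits is plain line-oriented text. A minimal sketch of rendering labeled counters (not the library's exporter):

```python
from collections import Counter

def render_counters(name: str, counts: Counter) -> str:
    """Render (action, status) -> count pairs in Prometheus text format."""
    lines = [f"# TYPE {name} counter"]
    for (action, status), n in sorted(counts.items()):
        lines.append(f'{name}{{action="{action}",status="{status}"}} {n}')
    return "\n".join(lines) + "\n"
```

Anything that serves this text with `Content-Type: text/plain` on `/metrics` is scrapable; histograms add `_bucket`/`_sum`/`_count` series on top of the same format.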
Extra remote backends alongside the first-class S3 / Azure / Dropbox / SFTP:

```python
from automation_file import WebDAVClient, SMBClient, fsspec_upload

# RFC 4918 WebDAV — loopback/private targets require opt-in.
dav = WebDAVClient("https://files.example.com/remote.php/dav",
                   username="alice", password="s3cr3t")
dav.upload("/local/report.csv", "team/reports/report.csv")

# SMB / CIFS via smbprotocol's high-level smbclient API.
with SMBClient("fileserver", "share", "alice", "s3cr3t") as smb:
    smb.upload("/local/report.csv", "reports/report.csv")

# Anything fsspec can address — memory, gcs, abfs, local, …
fsspec_upload("/local/report.csv", "memory://reports/report.csv")
```

`start_http_action_server()` additionally exposes liveness / readiness probes,
an OpenAPI 3.0 spec, and a WebSocket stream of progress snapshots:

```shell
curl http://127.0.0.1:9944/healthz       # {"status": "ok"}
curl http://127.0.0.1:9944/readyz        # 200 when registry non-empty, 503 otherwise
curl http://127.0.0.1:9944/openapi.json  # OpenAPI 3.0 spec
# Connect a WebSocket to ws://127.0.0.1:9944/progress for live progress frames.
```

A read-only observability dashboard built on stdlib HTTP + HTMX (loaded from
a pinned CDN URL with SRI). Loopback-only by default; optional shared secret:

```python
from automation_file import start_web_ui

server = start_web_ui(host="127.0.0.1", port=9955, shared_secret="s3cr3t")
# Browse http://127.0.0.1:9955/ — health, progress, and registry fragments
# auto-poll every few seconds. Write operations stay on the action servers.
```

Expose every registered `FA_*` action to an MCP host (Claude Desktop, MCP
CLIs) over JSON-RPC 2.0 on stdio:

```python
from automation_file import MCPServer

MCPServer().serve_stdio()  # reads JSON-RPC from stdin, writes to stdout
```

`pip install` exposes an `automation_file_mcp` console script (via
`[project.scripts]`) so MCP hosts can launch the bridge without any Python
glue. Three equivalent launch styles:

```shell
automation_file_mcp               # installed console script
python -m automation_file mcp     # CLI subcommand
python examples/mcp/run_mcp.py    # standalone launcher
```

All three accept `--name`, `--version`, and `--allowed-actions` (a comma-
separated whitelist — strongly recommended, since the default registry
includes high-privilege actions like `FA_run_shell`). See
`examples/mcp/` for a ready-to-copy Claude Desktop config.

Tool descriptors are generated on the fly by introspecting each action's
signature — parameter names and types become a JSON schema, so hosts can
render fields without any manual wiring.
Run actions in dependency order; independent branches fan out across a
thread pool. Each node is `{"id": ..., "action": [...], "depends_on": [...]}`:

```python
from automation_file import execute_action_dag

execute_action_dag([
    {"id": "fetch", "action": ["FA_download_file",
                               ["https://example.com/src.tar.gz", "src.tar.gz"]]},
    {"id": "verify", "action": ["FA_verify_checksum",
                                ["src.tar.gz", "3b0c44298fc1..."]],
     "depends_on": ["fetch"]},
    {"id": "unpack", "action": ["FA_unzip_file", ["src.tar.gz", "src"]],
     "depends_on": ["verify"]},
])
```

If `verify` raises, `unpack` is marked skipped by default. Pass
`fail_fast=False` to run dependents regardless. JSON action:
`FA_execute_action_dag`.
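Stdlib `graphlib` gives the topological order for free. This sequential sketch shows the skip-on-failure bookkeeping only; the executor described above additionally fans independent branches out in parallel:

```python
from graphlib import TopologicalSorter

def run_dag(nodes: list[dict], run) -> dict[str, str]:
    """Execute nodes in dependency order; skip dependents of failures."""
    graph = {n["id"]: set(n.get("depends_on", [])) for n in nodes}
    by_id = {n["id"]: n for n in nodes}
    status: dict[str, str] = {}
    for node_id in TopologicalSorter(graph).static_order():
        if any(status[d] != "ok" for d in graph[node_id]):
            status[node_id] = "skipped"   # per-branch skip-on-failure
            continue
        try:
            run(by_id[node_id])
            status[node_id] = "ok"
        except Exception:
            status[node_id] = "failed"
    return status
```

`static_order()` also raises `graphlib.CycleError` on circular `depends_on` edges, so cycle detection comes for free as well.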
Third-party packages advertise actions via `pyproject.toml`:

```toml
[project.entry-points."automation_file.actions"]
my_plugin = "my_plugin:register"
```

where `register` is a zero-argument callable returning a
`dict[str, Callable]`. Once installed in the same environment, the
commands show up in every freshly-built registry:

```python
# my_plugin/__init__.py
def greet(name: str) -> str:
    return f"hello {name}"

def register() -> dict:
    return {"FA_greet": greet}
```

```python
# after `pip install my_plugin`
from automation_file import execute_action

execute_action([["FA_greet", {"name": "world"}]])
```

Plugin failures are logged and swallowed — one broken plugin cannot break
the library.
```shell
python -m automation_file ui   # or: python main_ui.py
```

```python
from automation_file import launch_ui

launch_ui()
```

Tabs: Home, Local, Transfer, Progress, JSON actions, Triggers, Scheduler,
Servers. A persistent log panel at the bottom streams every result and error.
```python
from automation_file import create_project_dir

create_project_dir("my_workflow")
```

```shell
# Subcommands (one-shot operations)
python -m automation_file ui
python -m automation_file zip ./src out.zip --dir
python -m automation_file unzip out.zip ./restored
python -m automation_file download https://example.com/file.bin file.bin
python -m automation_file create-file hello.txt --content "hi"
python -m automation_file server --host 127.0.0.1 --port 9943
python -m automation_file http-server --host 127.0.0.1 --port 9944
python -m automation_file drive-upload my.txt --token token.json --credentials creds.json
python -m automation_file mcp --allowed-actions FA_file_checksum,FA_fast_find
automation_file_mcp --allowed-actions FA_file_checksum,FA_fast_find  # installed console script

# Legacy flags (JSON action lists)
python -m automation_file --execute_file actions.json
python -m automation_file --execute_dir ./actions/
python -m automation_file --execute_str '[["FA_create_dir",{"dir_path":"x"}]]'
python -m automation_file --create_project ./my_project
```

Each entry is either a bare command name, a `[name, kwargs]` pair, or a
`[name, args]` list:

```json
[
  ["FA_create_file", {"file_path": "test.txt"}],
  ["FA_drive_upload_to_drive", {"file_path": "test.txt"}],
  ["FA_drive_search_all_file"]
]
```

Full API documentation lives under `docs/` and can be built with Sphinx:

```shell
pip install -r docs/requirements.txt
sphinx-build -b html docs/source docs/_build/html
```

See `CLAUDE.md` for architecture notes, conventions, and security
considerations.