Conversation
…247)
* feat: add Claude Code plugin with MCP server and skill
* feat: improve skill setup flow with auth detection and install fallbacks
* chore: align CLI and MCP tool descriptions
* chore: make coderabbit review this PR
* chore: sync SKILL.md generator with checked-in skill content
* ci: open PR to regenerate SKILL.md when CLI changes land on develop
* feat: add --environment and --all flags to tower deploy command

  Support deploying to specific environments or all environments at once:
  - `tower deploy` (default, unchanged behavior)
  - `tower deploy --environment=production` / `tower deploy -e staging`
  - `tower deploy --all`

  The flags are mutually exclusive. The environment is passed as a query parameter on the deploy API endpoint.

* Updates to support deployment across all deployments
* chore: Remove the default deploy target
* chore: Encode environment name before sending it
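The mutually exclusive flag handling described above can be sketched in plain Rust. The `DeployTarget` and `resolve_target` names here are illustrative stand-ins, not the actual tower-cmd types:

```rust
// Sketch of resolving the mutually exclusive deploy flags into a target.
// DeployTarget / resolve_target are illustrative names, not the real CLI code.
#[derive(Debug, PartialEq)]
enum DeployTarget {
    Unspecified,         // plain `tower deploy`
    Environment(String), // `-e` / `--environment <name>`
    All,                 // `--all`
}

fn resolve_target(environment: Option<String>, all: bool) -> Result<DeployTarget, String> {
    match (environment, all) {
        // both flags at once is an error, matching "the flags are mutually exclusive"
        (Some(_), true) => Err("--environment and --all are mutually exclusive".to_string()),
        (Some(env), false) => Ok(DeployTarget::Environment(env)),
        (None, true) => Ok(DeployTarget::All),
        (None, false) => Ok(DeployTarget::Unspecified),
    }
}

fn main() {
    assert_eq!(
        resolve_target(Some("production".to_string()), false),
        Ok(DeployTarget::Environment("production".to_string()))
    );
    assert_eq!(resolve_target(None, true), Ok(DeployTarget::All));
    assert!(resolve_target(Some("staging".to_string()), true).is_err());
    println!("ok");
}
```

In a real clap-based CLI the same constraint would normally be declared on the argument definitions rather than checked by hand; the function above only illustrates the resolution shape.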
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review; this behavior is configurable. Use the following commands to manage reviews:
📝 Walkthrough

Adds Claude plugin metadata and MCP config, adds workflows for skill regeneration and wasm tests/publish, and implements the new deploy-target handling for `tower deploy`.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User
    participant CLI as Tower CLI
    participant API as Tower API
    User->>CLI: run `tower deploy --environment prod` or `tower deploy --all`
    CLI->>CLI: parse args -> resolve DeployTarget (Environment / All)
    CLI->>CLI: build/package app artifact (uses tower-package -> core/native)
    CLI->>API: POST /apps/{urlencode(app_name)}/deploy[?environment=prod | ?all_environments=true]
    API-->>CLI: 200 OK
    CLI-->>User: print success message indicating target (env vs all)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks: ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches

🧪 Generate unit tests (beta)
Actionable comments posted: 5
🧹 Nitpick comments (1)
crates/tower-cmd/src/mcp.rs (1)
715-717: Consider exposing the deploy target in `tower_deploy` instead of hardcoding `default`. Line 715 currently fixes MCP deploys to one environment, which bypasses the new multi-environment deploy surface added to the CLI.
♻️ Suggested direction
```diff
-    let deploy_target = deploy::DeployTarget::Environment("default".to_string());
+    // e.g. extend request schema with:
+    //   environment: Option<String>, all: Option<bool>
+    // then map to DeployTarget here.
+    let deploy_target = deploy::DeployTarget::Environment("default".to_string());
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/tower-cmd/src/mcp.rs` around lines 715 - 717, The code currently hardcodes deploy::DeployTarget::Environment("default".to_string()) before calling deploy::deploy_from_dir, which prevents using the CLI's multi-environment deploy; change this to take a deploy target from the tower_deploy layer (or from self.config) instead of "default": update the function/method that contains this code to accept a DeployTarget parameter (or read it from self.config/cli field), replace the hardcoded deploy::DeployTarget::Environment("default".to_string()) with that parameter/value, and ensure all call sites that invoke this function pass the selected environment through to deploy::deploy_from_dir(self.config.clone(), working_dir, true, deploy_target).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.coderabbit.yaml:
- Around line 2-4: The base_branches key is nested directly under reviews
instead of under reviews.auto_review; move the base_branches mapping so it is a
child of reviews.auto_review (i.e., reviews -> auto_review -> base_branches) so
branch filtering is applied; update the YAML structure to place base_branches
under the auto_review object and ensure the list values remain unchanged.
In `@crates/tower-cmd/src/deploy.rs`:
- Around line 52-73: The code currently injects a hard-coded
Environment("default") and references a non-existent DeployTarget::Default;
change the DeployTarget enum to include an explicit Unspecified (or None)
variant (e.g., DeployTarget::Unspecified) and update do_deploy so the target is
DeployTarget::All if --all, DeployTarget::Environment(name) if --environment,
otherwise DeployTarget::Unspecified; then update any other places that expect a
"default" (the other uses around the 143–169 range) to handle
DeployTarget::Unspecified explicitly (either by prompting, erroring, or
selecting a behavior) and remove references to DeployTarget::Default.
In `@crates/tower-cmd/src/skill.rs`:
- Around line 179-183: The WORKFLOW_HEADER template contains unlabeled code
fences causing MD040 lint warnings; edit the WORKFLOW_HEADER constant to add
fence languages (use ```text) to each unlabeled block shown (the examples like
"tower_file_generate({}) ...", "tower_file_generate → tower_file_update …",
"tower_run_local", "tower_apps_create → tower_deploy → tower_run_remote", the
schedules block, and the apps/teams/secrets block) and then regenerate the
SKILL.md output so the changes take effect; search for the WORKFLOW_HEADER
symbol in the file to locate and update every triple-backtick fence accordingly.
In `@crates/tower-cmd/src/util/deploy.rs`:
- Around line 122-129: The URL construction uses raw app_name which can break
paths; encode app_name with the existing urlencode helper and use the encoded
value (e.g., let encoded_app = urlencode(app_name)) in the three format! calls
that build url (the block referencing variables url, base_url, app_name,
environment, all_environments) so every path segment uses the percent-encoded
app name just like environment is encoded.
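A std-only sketch of the encoding fix described above. `percent_encode` is a hypothetical stand-in for the crate's existing `urlencode` helper, and `deploy_url` compresses the three `format!` call sites into one illustrative function:

```rust
// Hypothetical stand-in for the crate's urlencode helper: percent-encode
// everything outside the RFC 3986 unreserved set.
fn percent_encode(s: &str) -> String {
    s.bytes()
        .map(|b| match b {
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'_' | b'.' | b'~' => {
                (b as char).to_string()
            }
            _ => format!("%{:02X}", b),
        })
        .collect()
}

// Illustrative URL builder: the app name is encoded exactly like the
// environment, so a name containing '/' or spaces can't break the path.
fn deploy_url(base_url: &str, app_name: &str, environment: Option<&str>) -> String {
    let encoded_app = percent_encode(app_name);
    match environment {
        Some(env) => format!(
            "{}/apps/{}/deploy?environment={}",
            base_url,
            encoded_app,
            percent_encode(env)
        ),
        None => format!("{}/apps/{}/deploy", base_url, encoded_app),
    }
}

fn main() {
    assert_eq!(
        deploy_url("https://api.example.com", "my app", Some("prod/eu")),
        "https://api.example.com/apps/my%20app/deploy?environment=prod%2Feu"
    );
    println!("ok");
}
```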
In `@skills/tower/SKILL.md`:
- Around line 311-319: Update the `tower deploy` command docs to reflect the new
CLI options by adding `-e, --environment` (select deployment environment) and
`--all` (deploy all services) to the Arguments list in the SKILL.md entry for
`tower deploy`; alternatively regenerate the SKILL.md so it is produced from the
current CLI spec so the `tower deploy` documentation (the `tower deploy` heading
and its Arguments section) matches the actual CLI behaviour.
---
Nitpick comments:
In `@crates/tower-cmd/src/mcp.rs`:
- Around line 715-717: The code currently hardcodes
deploy::DeployTarget::Environment("default".to_string()) before calling
deploy::deploy_from_dir, which prevents using the CLI's multi-environment
deploy; change this to take a deploy target from the tower_deploy layer (or from
self.config) instead of "default": update the function/method that contains this
code to accept a DeployTarget parameter (or read it from self.config/cli field),
replace the hardcoded deploy::DeployTarget::Environment("default".to_string())
with that parameter/value, and ensure all call sites that invoke this function
pass the selected environment through to
deploy::deploy_from_dir(self.config.clone(), working_dir, true, deploy_target).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 2a704f31-c858-4c31-a9bb-613bcf8eccf5
📒 Files selected for processing (13)
- .claude-plugin/plugin.json
- .coderabbit.yaml
- .github/workflows/regenerate-skill.yml
- .gitignore
- .mcp.json
- crates/tower-cmd/src/apps.rs
- crates/tower-cmd/src/deploy.rs
- crates/tower-cmd/src/lib.rs
- crates/tower-cmd/src/mcp.rs
- crates/tower-cmd/src/secrets.rs
- crates/tower-cmd/src/skill.rs
- crates/tower-cmd/src/util/deploy.rs
- skills/tower/SKILL.md
♻️ Duplicate comments (1)
skills/tower/SKILL.md (1)
318-320: ⚠️ Potential issue | 🟡 Minor

Fix `tower deploy` argument docs: `-a` is invalid and there's a typo. The CLI defines `--all` without a short alias, so documenting `-a` here is incorrect. Also, "specifiy" should be "specify". [suggested patch]
Proposed doc fix
```diff
-- `-a`, `--all` - Deploy this app to all environments. You can only specify `-a` or `-e`, not both.
-- `-e`, `--environment` — The environment to deploy this app to. You can only specifiy `-a` or `-e`, not both.
+- `--all` — Deploy this app to all environments. You can only specify `--all` or `-e`, not both.
+- `-e`, `--environment` — The environment to deploy this app to. You can only specify `--all` or `-e`, not both.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@skills/tower/SKILL.md` around lines 318 - 320, Update the "tower deploy" flags documentation to match the CLI: remove the invalid short alias `-a` and document `--all` with no short form, correct the typo "specifiy" to "specify" in the `-e`/`--environment` description, and verify the `-f`, `--create` line accurately reflects the CLI's short/long names for the create/force flag (adjust to the actual short alias if different); reference the "tower deploy" flags `--all`, `--environment`, and `--create` when making these edits.
🧹 Nitpick comments (1)
skills/tower/SKILL.md (1)
68-72: Add language identifiers to fenced code blocks to satisfy markdownlint (MD040). Several fenced blocks are missing a language tag; add `bash`/`text` (or the most accurate language) consistently. [suggested patch]
Example fix pattern
````diff
-```
+```text
 tower_file_generate({})                                    # current directory
 tower_file_generate({"working_directory": "/path/to/app"}) # explicit path
 tower_run_local({"working_directory": "../other-app"})
````

Also applies to: 86-88, 94-96, 102-104, 110-115, 119-125

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `@skills/tower/SKILL.md` around lines 68 - 72, Several fenced code blocks in
SKILL.md are missing language identifiers; update each triple-backtick block
that contains example commands like tower_file_generate and tower_run_local to
include an appropriate language tag (e.g., bash or text). Find the blocks that
show calls to tower_file_generate({}) and tower_run_local({"working_directory":
...}) (and the other similar example blocks referenced at ranges 86-88, 94-96,
102-104, 110-115, 119-125) and add `bash` or `text` immediately after the
opening backticks so markdownlint MD040 is satisfied while preserving the exact
block contents.

🤖 Prompt for all review comments with AI agents

Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@skills/tower/SKILL.md`:
- Around line 318-320: Update the "tower deploy" flags documentation to match
the CLI: remove the invalid short alias `-a` and document `--all` with no short
form, correct the typo "specifiy" to "specify" in the `-e`/`--environment`
description, and verify the `-f`, `--create` line accurately reflects the CLI's
short/long names for the create/force flag (adjust to the actual short alias if
different); reference the "tower deploy" flags `--all`, `--environment`, and
`--create` when making these edits.
Nitpick comments:
In `@skills/tower/SKILL.md`:
- Around line 68-72: Several fenced code blocks in SKILL.md are missing language
identifiers; update each triple-backtick block that contains example commands
like tower_file_generate and tower_run_local to include an appropriate language
tag (e.g., bash or text). Find the blocks that show calls to
tower_file_generate({}) and tower_run_local({"working_directory": ...}) (and the
other similar example blocks referenced at ranges 86-88, 94-96, 102-104,
110-115, 119-125) and add `bash` or `text` immediately after the opening
backticks so markdownlint MD040 is satisfied while preserving the exact block
contents.

---

ℹ️ Review info

⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: d857909a-0bd3-410c-984b-79e220780ea5

📥 Commits

Reviewing files that changed from the base of the PR and between 49f0d52c4ee1d3d1be32e3b81c1c88d188505e35 and 46474f5e45dcd19b786166629b727b1f5a90a48b.

📒 Files selected for processing (3)

- crates/tower-cmd/src/deploy.rs
- crates/tower-cmd/src/util/deploy.rs
- skills/tower/SKILL.md

🚧 Files skipped from review as they are similar to previous changes (1)

- crates/tower-cmd/src/util/deploy.rs
Co-authored-by: bradhe <310958+bradhe@users.noreply.github.com>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@skills/tower/SKILL.md`:
- Around line 68-72: The fenced code blocks in SKILL.md lack language tags
causing markdownlint MD040 failures; update each triple-backtick block that
contains examples like the one with tower_file_generate, the block listing
tower_file_generate → tower_file_update → tower_file_add/edit/remove_parameter →
tower_file_validate, the tower_run_local block, the sequence tower_apps_create →
tower_deploy → tower_run_remote, the schedules list
(tower_schedules_create/list/update/delete), and the apps/teams/secrets list
(tower_apps_list, tower_apps_show, tower_apps_logs, tower_teams_list,
tower_teams_switch, tower_secrets_create, tower_secrets_list) by adding a fence
language (use "text") after the opening ``` for each block.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 04052c91-bf30-4062-b6e8-99053fc92437
📒 Files selected for processing (1)
skills/tower/SKILL.md
…257)
* refactor(tower-package): extract pure core crate for bundle building

  Splits tower-package into a pure tower-package-core crate (no tokio, fs, or glob) and a native shell. The core exposes build_package(PackageInputs) which produces a gzipped tar from in-memory bytes, enabling future wasm32 targets. The native crate handles filesystem walking, globbing, and canonicalization, then delegates to the core.

  Output is now byte-deterministic: entries are sorted by archive name, tar header mtime/uid/gid are zeroed with mode 0644, and the gzip header drops its embedded mtime. The bundle format (ustar+gzip, app/modules layout, MANIFEST, Towerfile) and checksum algorithm are unchanged, so existing server-side unpacking continues to work.

* feat(tower-package-wasm): WASM bindings for bundle builder

  New crate exposing tower_package_core::build_package to TypeScript via wasm-bindgen. The crate produces an npm-publishable package with typed bindings (Uint8Array in, Uint8Array out). Output matches what the native tower-package produces, so bundles built in the browser or Node deploy through the server's existing unpack path without changes.

  The flake devshell now carries wasm-pack, wasm-bindgen-cli, and binaryen, and the rust toolchain picks up the wasm32-unknown-unknown target. Build the package with crates/tower-package-wasm/scripts/build.sh, optionally passing bundler (default), web, or nodejs.

* test(tower-package-wasm): TypeScript test suite + CI workflow

  Node-based test suite using tsx and node:test. Covers the output shape, byte-determinism, sort order, MANIFEST contents, module file flow, and checksum divergence on different inputs. Parses the tar stream inline so there are no npm deps beyond tsx and type declarations.

  The workflow builds the nodejs-target wasm package and runs the tests on every PR that touches tower-package-core, tower-package-wasm, or the workspace manifest.
* refactor(tower-package): collapse core/native/wasm into a single crate

  Folds tower-package-core and tower-package-wasm back into tower-package, using cargo features instead of separate crates to split the shells. Default features remain compatible with existing Rust callers: cargo build on a workspace member still gets tokio, glob, Package::build, and everything else it used to.

  - native feature (default): tokio, glob, tmpdir, Package, PackageSpec, FileResolver — the CLI's usual path.
  - wasm feature: wasm-bindgen + serde-wasm-bindgen + serde_bytes, exposes buildBundle to JavaScript.
  - Pure core (Entry, Manifest, build_package, sorting, hashing) is always compiled under both shells.

  Build the wasm package with crates/tower-package/scripts/build.sh, which invokes wasm-pack with --no-default-features --features wasm and renames the npm package to tower-package-wasm so the crate and npm names can diverge. Ten native tests and eight TypeScript tests pass.

* ci(tower-package): publish tower-package-wasm to npm on release

  Mirrors the publish-pypi subworkflow: on every release tag, cargo-dist's release.yml calls a new publish-npm.yml that builds the bundler-target wasm package and runs npm publish. Version tracks Cargo.toml, so the npm package stays in lockstep with the Rust crate. First cut publishes the wasm package only; the consumer brings their own bundler.

* ci(tower-package): use npm trusted publishing instead of a token

  Drops NPM_TOKEN in favour of OIDC; no long-lived secret to rotate. Requires a trusted publisher to be configured on the package at npmjs.com matching this repo, this workflow filename, and the release environment. Bumps node to 22 and pulls npm@latest to ensure the CLI supports OIDC publishing.
* chore: add Ben Lovell as workspace author
* refactor(tower-package): simplify code in core, native, and wasm tests
* fix(tower-package): restore comment; tighten publish-npm workflow header
* chore: bump nixpkgs input
* refactor(tower-package): rename bundle to package and drop schedule
* refactor(tower-package): derive manifest from Towerfile bytes
* refactor(tower-package): own the Towerfile schema

  Move Towerfile/App/Parameter from config into a new tower-package::towerfile module and flip the crate dependency — config now re-exports them so existing `use config::Towerfile` call sites keep working. core::build_package parses the bytes directly through Towerfile::from_toml, retiring the private TowerfileSpec shadow struct and unifying Parameter (description: String) across authoring and manifest views.

* fix(tower-package): unused Path import on wasm; drop obsolete schedule test

  Gate the Path import out — it's only needed inside save() under the native feature. Also delete test_manifest_contains_schedule; schedule was removed from the manifest in f625834.
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.github/workflows/release.yml (1)
315-323: ⚠️ Potential issue | 🟡 Minor

The `announce` job does not wait for `custom-publish-npm` to complete. The `announce` job's `needs` array includes `custom-publish-pypi` but not `custom-publish-npm`. This means the release announcement could be published before the npm package is available. If this is intentional (e.g., npm publish failures shouldn't block announcements), please confirm. Otherwise, consider adding the npm job to the dependency chain.

```diff
 announce:
   needs:
     - plan
     - host
     - custom-publish-pypi
+    - custom-publish-npm
   # use "always() && ..." to allow us to wait for all publish jobs while
   # still allowing individual publish jobs to skip themselves (for prereleases).
   # "host" however must run to completion, no skipping allowed!
-  if: ${{ always() && needs.host.result == 'success' && (needs.custom-publish-pypi.result == 'skipped' || needs.custom-publish-pypi.result == 'success') }}
+  if: ${{ always() && needs.host.result == 'success' && (needs.custom-publish-pypi.result == 'skipped' || needs.custom-publish-pypi.result == 'success') && (needs.custom-publish-npm.result == 'skipped' || needs.custom-publish-npm.result == 'success') }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/release.yml around lines 315 - 323, The announce job currently omits the custom-publish-npm job from its dependencies and conditional check; update the announce job to include custom-publish-npm in the needs array and extend the if condition to require that needs.custom-publish-npm.result is either 'skipped' or 'success' (mirroring the existing check for needs.custom-publish-pypi), or explicitly confirm that omitting custom-publish-npm was intentional if you do not want npm publish to block announcements.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/tower-package/src/core.rs`:
- Around line 211-214: normalize_path currently mishandles ParentDir: it
silently drops a leading ".." (e.g., "../foo") and rejects safe cases like
"a/../b". Update normalize_path to perform stack-style segment processing
instead of the current immediate-branch behavior: iterate components, skip "."
segments, push normal segments onto a Vec stack, on Component::ParentDir pop
from the stack if non-empty, and if the stack is empty then return
Err(Error::InvalidPath) to reject leading escapes; finally reconstruct the path
from the stack. Make sure to reference Component::ParentDir, the local variable
next (or the current segment), and Error::InvalidPath when adjusting logic.
- Around line 141-176: The code currently takes each Entry.archive_name verbatim
when building the package; before sorting and appending you must validate and
normalize those names in build_package: iterate over entries and for each Entry
replace backslashes with '/', strip any leading '/' components, reject names
that are empty, contain ".." path segments, or are absolute (leading '/' or
Windows drive letters), and fail early with an error if any invalid name is
found; use the normalized names for path_hashes, sorting, and when calling
append_entry (update references to entry.archive_name where needed) so the tar
only contains safe, platform-independent relative paths.
In `@crates/tower-package/src/error.rs`:
- Around line 45-57: The match arm in the impl From<crate::core::Error> for
Error currently maps Core::Io to Error::NoManifest; change the Core::Io { source
} arm inside the `from` fn to map to the crate::error::Error IO variant (e.g.
Error::Io { source } or Error::Io(source) matching the Error enum definition) so
the original IO error and its source are preserved instead of being downgraded
to NoManifest.
In `@crates/tower-package/src/native.rs`:
- Around line 222-224: The code is unwrapping package_file_path in
Package::unpack(), which panics for instances created by
Package::from_unpacked_path(); change Package::unpack() to check
package_file_path (or an "already_unpacked" condition) and return a proper
Result error or simply short-circuit when None instead of calling unwrap();
locate the unwrap of self.package_file_path in native.rs (and the call to
unpack_archive) and replace it with a conditional that either calls
unpack_archive when Some(path) or returns an appropriate Err variant (or Ok(())
if already unpacked) so callers get a non-panicking error path.
- Around line 39-40: The debug statement in from_towerfile currently logs the
entire Towerfile with "{:?}" which can include parameter defaults and hidden
secrets; change the logging to only emit non-sensitive fields from the Towerfile
(e.g., name, version, description, or other explicit safe fields) and do not
include the params map or any field marked hidden; if you need to indicate
presence of parameters, log a redacted summary (e.g., "params: <redacted>" or
params.keys()) rather than the full struct, and update the debug call that
references towerfile to use those selected fields instead.
- Around line 402-444: The resolve_path helper currently panics on IO errors due
to read_dir(...).await.unwrap() and entries.next_entry().await.unwrap(); change
async fn resolve_path to return a Result (e.g., Result<(), std::io::Error> or
the crate's Error type) and replace those unwraps with the ? operator (or proper
error propagation) so read_dir and next_entry failures bubble to the caller;
update callers of resolve_path accordingly to handle the Result. Ensure the rest
of the function still skips non-canonicalize errors as before and that you
adjust any match arms to propagate errors from tokio::fs::read_dir and
DirEntry::next_entry rather than unwrapping.
- Around line 161-169: Replace the Path::file_name() usage that computes
raw_basename inside the loop over spec.import_paths/canonical_import_paths with
the shared helper import_path_basename(...) from core.rs so backslashes are
treated as separators; specifically, in the loop that references raw_import and
canonical_import, call import_path_basename(raw_import) (importing the function
if necessary, e.g. from crate::core or the correct module) and use that result
for archive_prefix instead of the current Path::file_name() logic.
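The non-panicking `unpack()` shape asked for in the first native.rs comment above might look like this sketch. The types here are hypothetical stand-ins; the real `Package` has more fields and would call `unpack_archive` where noted:

```rust
use std::path::PathBuf;

#[derive(Debug, PartialEq)]
enum UnpackError {
    AlreadyUnpacked, // illustrative error variant, not the crate's real enum
}

// Minimal stand-in modeling only the field relevant to the panic.
struct Package {
    package_file_path: Option<PathBuf>,
}

impl Package {
    // Short-circuit instead of unwrap(): a Package built from an already
    // unpacked path has no archive file, so callers get an Err (or this could
    // return Ok(()) if "already unpacked" should be a no-op).
    fn unpack(&self) -> Result<(), UnpackError> {
        match &self.package_file_path {
            Some(_path) => {
                // the real code would call unpack_archive(_path) here
                Ok(())
            }
            None => Err(UnpackError::AlreadyUnpacked),
        }
    }
}

fn main() {
    let unpacked = Package { package_file_path: None };
    assert_eq!(unpacked.unpack(), Err(UnpackError::AlreadyUnpacked));

    let archived = Package {
        package_file_path: Some(PathBuf::from("/tmp/pkg.tar.gz")),
    };
    assert!(archived.unpack().is_ok());
    println!("ok");
}
```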
In `@crates/tower-package/src/towerfile.rs`:
- Around line 84-92: The set_parameter method can desync lookup_name and the
stored Parameter.name; ensure stored parameter always uses lookup_name by
overwriting param.name with lookup_name before inserting or replacing. In other
words, in set_parameter (and when finding existing via
self.parameters.iter_mut().find(...)) set param.name = lookup_name (or set
existing.name = lookup_name when replacing) so the stored name matches the
lookup key and you won't silently create duplicates or break future
lookups/removals.
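A self-contained sketch of the `set_parameter` fix described above. The struct shapes are simplified stand-ins, not the crate's real towerfile types:

```rust
#[derive(Debug, PartialEq)]
struct Parameter {
    name: String,
    description: String,
}

struct Towerfile {
    parameters: Vec<Parameter>,
}

impl Towerfile {
    // The stored name is forced to the lookup key before insert/replace, so a
    // caller passing a Parameter whose `name` differs from `lookup_name`
    // cannot desync lookups and removals. Names here are illustrative.
    fn set_parameter(&mut self, lookup_name: &str, mut param: Parameter) {
        param.name = lookup_name.to_string();
        if let Some(existing) = self.parameters.iter_mut().find(|p| p.name == lookup_name) {
            *existing = param;
        } else {
            self.parameters.push(param);
        }
    }
}

fn main() {
    let mut tf = Towerfile { parameters: Vec::new() };
    // Mismatched inner name gets overwritten by the lookup key.
    tf.set_parameter(
        "region",
        Parameter { name: "REGION".into(), description: "aws region".into() },
    );
    // Second call with the same key replaces instead of duplicating.
    tf.set_parameter(
        "region",
        Parameter { name: "other".into(), description: "updated".into() },
    );
    assert_eq!(tf.parameters.len(), 1);
    assert_eq!(tf.parameters[0].name, "region");
    assert_eq!(tf.parameters[0].description, "updated");
    println!("ok");
}
```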
---
Outside diff comments:
In @.github/workflows/release.yml:
- Around line 315-323: The announce job currently omits the custom-publish-npm
job from its dependencies and conditional check; update the announce job to
include custom-publish-npm in the needs array and extend the if condition to
require that needs.custom-publish-npm.result is either 'skipped' or 'success'
(mirroring the existing check for needs.custom-publish-pypi), or explicitly
confirm that omitting custom-publish-npm was intentional if you do not want npm
publish to block announcements.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: fa6bbb3f-8b41-43f3-a037-f3df398cbc49
⛔ Files ignored due to path filters (2)
- `Cargo.lock` is excluded by `!**/*.lock`
- `flake.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (31)
- .github/workflows/publish-npm.yml
- .github/workflows/release.yml
- .github/workflows/test-wasm.yml
- Cargo.toml
- crates/config/Cargo.toml
- crates/config/src/error.rs
- crates/config/src/lib.rs
- crates/tower-cmd/src/deploy.rs
- crates/tower-cmd/src/error.rs
- crates/tower-cmd/src/output.rs
- crates/tower-cmd/src/package.rs
- crates/tower-cmd/src/run.rs
- crates/tower-cmd/src/util/deploy.rs
- crates/tower-package/.gitignore
- crates/tower-package/Cargo.toml
- crates/tower-package/README.md
- crates/tower-package/scripts/build.sh
- crates/tower-package/src/core.rs
- crates/tower-package/src/error.rs
- crates/tower-package/src/lib.rs
- crates/tower-package/src/native.rs
- crates/tower-package/src/towerfile.rs
- crates/tower-package/src/wasm.rs
- crates/tower-package/test/.gitignore
- crates/tower-package/test/build.test.ts
- crates/tower-package/test/package.json
- crates/tower-package/test/tsconfig.json
- crates/tower-package/tests/package_test.rs
- crates/tower-package/types.d.ts
- flake.nix
- tests/tower/test_build_package.py
💤 Files with no reviewable changes (2)
- tests/tower/test_build_package.py
- crates/config/src/error.rs
✅ Files skipped from review due to trivial changes (7)
- crates/tower-package/test/.gitignore
- crates/tower-package/.gitignore
- crates/tower-package/test/package.json
- flake.nix
- crates/tower-package/test/tsconfig.json
- .github/workflows/test-wasm.yml
- crates/tower-package/README.md
🚧 Files skipped from review as they are similar to previous changes (3)
- Cargo.toml
- crates/tower-cmd/src/deploy.rs
- crates/tower-cmd/src/util/deploy.rs
```rust
let mut entries: Vec<Entry> = Vec::with_capacity(inputs.app_files.len() + inputs.module_files.len());
entries.extend(inputs.app_files);
entries.extend(inputs.module_files);
entries.sort_by(|a, b| a.archive_name.cmp(&b.archive_name));

let mut path_hashes: HashMap<String, String> = HashMap::with_capacity(entries.len());
for entry in &entries {
    path_hashes.insert(entry.archive_name.clone(), compute_sha256_bytes(&entry.bytes));
}

let manifest = Manifest {
    version: Some(CURRENT_PACKAGE_VERSION),
    invoke: towerfile.app.script,
    parameters: towerfile.parameters,
    schedule: None,
    import_paths,
    app_dir_name: "app".to_string(),
    modules_dir_name: "modules".to_string(),
    checksum: compute_sha256_package(&path_hashes),
};

let manifest_bytes = serde_json::to_vec(&manifest)?;

let gz = GzBuilder::new()
    .mtime(0)
    .write(Vec::new(), Compression::default());
let mut builder = Builder::new(gz);

for entry in &entries {
    append_entry(&mut builder, &entry.archive_name, &entry.bytes)?;
}
append_entry(&mut builder, "MANIFEST", &manifest_bytes)?;
append_entry(&mut builder, "Towerfile", &inputs.towerfile_bytes)?;

let gz: GzEncoder<Vec<u8>> = builder.into_inner()?;
let bytes = gz.finish()?;
```
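The checksum determinism this builder relies on (per-path hashes aggregated over sorted archive names) can be illustrated with a std-only toy. `DefaultHasher` stands in for the SHA-256 the real code uses, and `toy_hash`/`package_checksum` are illustrative substitutes for `compute_sha256_bytes`/`compute_sha256_package`:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Toy stand-in for compute_sha256_bytes.
fn toy_hash(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

// Toy stand-in for compute_sha256_package: iterate path hashes in sorted
// archive-name order so the result is independent of insertion order.
fn package_checksum(path_hashes: &HashMap<String, u64>) -> u64 {
    let mut paths: Vec<&String> = path_hashes.keys().collect();
    paths.sort();
    let mut h = DefaultHasher::new();
    for p in paths {
        p.hash(&mut h);
        path_hashes[p].hash(&mut h);
    }
    h.finish()
}

fn main() {
    // Same files supplied in two different orders produce one checksum.
    let files: [(&str, &[u8]); 2] = [("app/main.py", b"print(1)"), ("MANIFEST", b"{}")];
    let forward: HashMap<String, u64> =
        files.iter().map(|(n, b)| (n.to_string(), toy_hash(b))).collect();
    let reverse: HashMap<String, u64> =
        files.iter().rev().map(|(n, b)| (n.to_string(), toy_hash(b))).collect();
    assert_eq!(package_checksum(&forward), package_checksum(&reverse));
    println!("ok");
}
```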
Validate Entry.archive_name inside the core builder.
build_package() writes archive_name verbatim even though this is now the public packaging API. Native callers normalize first, but any other caller can still feed absolute, .., or platform-specific paths and they'll be embedded directly into the tar. Please normalize/reject names here before sorting and appending them.
Suggested fix

```diff
-    let mut entries: Vec<Entry> = Vec::with_capacity(inputs.app_files.len() + inputs.module_files.len());
-    entries.extend(inputs.app_files);
-    entries.extend(inputs.module_files);
+    let mut entries: Vec<Entry> =
+        Vec::with_capacity(inputs.app_files.len() + inputs.module_files.len());
+    entries.extend(inputs.app_files);
+    entries.extend(inputs.module_files);
+    for entry in &mut entries {
+        entry.archive_name = normalize_path(Path::new(&entry.archive_name))?;
+    }
     entries.sort_by(|a, b| a.archive_name.cmp(&b.archive_name));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/tower-package/src/core.rs` around lines 141 - 176, The code currently
takes each Entry.archive_name verbatim when building the package; before sorting
and appending you must validate and normalize those names in build_package:
iterate over entries and for each Entry replace backslashes with '/', strip any
leading '/' components, reject names that are empty, contain ".." path segments,
or are absolute (leading '/' or Windows drive letters), and fail early with an
error if any invalid name is found; use the normalized names for path_hashes,
sorting, and when calling append_entry (update references to entry.archive_name
where needed) so the tar only contains safe, platform-independent relative
paths.
```rust
Component::ParentDir => {
    if !next.is_empty() {
        return Err(Error::InvalidPath);
    }
}
```
`normalize_path` currently accepts leading `..` segments.

For `../foo`, `next` is empty so this branch silently drops the escape and returns `"foo"` instead of rejecting it. At the same time, `a/../b` is rejected even though it normalizes safely. This needs stack-style pop/push handling.
Suggested fix
Component::ParentDir => {
- if !next.is_empty() {
+ if next.pop().is_none() {
return Err(Error::InvalidPath);
}
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/tower-package/src/core.rs` around lines 211-214: `normalize_path` currently mishandles `ParentDir`: it silently drops a leading `..` (e.g. `../foo`) and rejects safe cases like `a/../b`. Update `normalize_path` to process segments with a stack instead of the current immediate-branch behavior: iterate over the components, skip `.` segments, push normal segments onto a `Vec` stack, pop from the stack on `Component::ParentDir`, and if the stack is empty return `Err(Error::InvalidPath)` to reject leading escapes; finally reconstruct the path from the stack.
impl From<crate::core::Error> for Error {
    fn from(err: crate::core::Error) -> Self {
        use crate::core::Error as Core;
        match err {
            Core::InvalidPath => Error::InvalidPath,
            Core::Serialization { source } => {
                debug!("core serialization error: {}", source);
                Error::InvalidManifest
            }
            Core::Io { source } => {
                debug!("core IO error: {}", source);
                Error::NoManifest
            }
Map core IO failures to `Error::Io`, not `Error::NoManifest`.
`Core::Io` currently gets downgraded to `NoManifest`, which loses error fidelity and bypasses the new IO-specific handling path.
🔧 Proposed fix
impl From<std::io::Error> for Error {
- fn from(err: std::io::Error) -> Self {
- debug!("IO error: {}", err);
- Error::NoManifest
+ fn from(source: std::io::Error) -> Self {
+ debug!("IO error: {}", source);
+ Error::Io { source }
}
}
@@
Core::Io { source } => {
debug!("core IO error: {}", source);
- Error::NoManifest
+ Error::Io { source }
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/tower-package/src/error.rs` around lines 45-57: the match arm in `impl From<crate::core::Error> for Error` currently maps `Core::Io` to `Error::NoManifest`. Change the `Core::Io { source }` arm inside `from` to map to the IO variant of `Error` (e.g. `Error::Io { source }` or `Error::Io(source)`, matching the enum definition) so the original IO error and its source are preserved instead of being downgraded to `NoManifest`.
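A minimal sketch of the fidelity-preserving mapping, assuming an `Error::Io { source }` variant like the one proposed; the enum here is a stand-in for the crate's real error type:

```rust
use std::fmt;

// Stand-in error type with the IO variant the review proposes.
#[derive(Debug)]
enum Error {
    Io { source: std::io::Error },
    InvalidManifest,
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Error::Io { source } => write!(f, "IO error: {source}"),
            Error::InvalidManifest => write!(f, "invalid manifest"),
        }
    }
}

impl std::error::Error for Error {
    fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
        match self {
            // Keeping the original error reachable is what preserves fidelity.
            Error::Io { source } => Some(source),
            _ => None,
        }
    }
}

impl From<std::io::Error> for Error {
    fn from(source: std::io::Error) -> Self {
        Error::Io { source }
    }
}

fn main() {
    use std::error::Error as _;
    let err: Error = std::io::Error::new(std::io::ErrorKind::NotFound, "gone").into();
    // The underlying IO error is still reachable for callers that inspect it.
    assert!(err.source().is_some());
    assert!(err.to_string().contains("gone"));
    println!("error mapping checks passed");
}
```

Callers can then match on the IO variant or walk `source()` chains, which the `NoManifest` downgrade made impossible.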
pub fn from_towerfile(towerfile: &Towerfile) -> Self {
    debug!("creating package spec from towerfile: {:?}", towerfile);
Avoid logging the full `Towerfile`.
`{:?}` includes parameter defaults, including those of hidden parameters, which leaks secrets into debug logs during packaging. Log non-sensitive fields only.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/tower-package/src/native.rs` around lines 39-40: the debug statement in `from_towerfile` currently logs the entire `Towerfile` with `{:?}`, which can include parameter defaults and hidden secrets. Change the logging to emit only non-sensitive fields (e.g. name, version, description, or other explicitly safe fields) and exclude the params map and any field marked hidden; if you need to indicate that parameters are present, log a redacted summary (e.g. `params: <redacted>` or `params.keys()`) rather than the full struct.
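One way to get this behavior everywhere `{:?}` is used is a hand-written `Debug` impl that prints only safe fields; the `Towerfile` struct below is a simplified stand-in for the real one:

```rust
use std::collections::BTreeMap;
use std::fmt;

// Simplified stand-in: values in `parameters` may hold secret defaults.
struct Towerfile {
    name: String,
    parameters: BTreeMap<String, String>,
}

impl fmt::Debug for Towerfile {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Towerfile")
            .field("name", &self.name)
            // Emit parameter names only; defaults stay out of the logs.
            .field("parameters", &self.parameters.keys().collect::<Vec<_>>())
            .finish()
    }
}

fn main() {
    let mut parameters = BTreeMap::new();
    parameters.insert("api_key".to_string(), "s3cr3t".to_string());
    let tf = Towerfile { name: "demo".into(), parameters };
    let logged = format!("{tf:?}");
    assert!(logged.contains("api_key"));  // the key name is fine to log
    assert!(!logged.contains("s3cr3t")); // the secret default never appears
    println!("redaction checks passed");
}
```

A custom `Debug` is safer than fixing individual call sites, since any future `{:?}` on the struct is automatically redacted too.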
// self.package_file_path should be set otherwise this is a bug.
let package_path = self.package_file_path.clone().unwrap();
unpack_archive(&package_path, &path).await?;
Handle a missing `package_file_path` without panicking.
`Package::from_unpacked_path()` creates instances with `package_file_path: None`, so calling `unpack()` on that valid state currently panics here. It should return a normal error, or short-circuit if the package is already unpacked.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/tower-package/src/native.rs` around lines 222-224: the code unwraps `package_file_path` in `Package::unpack()`, which panics for instances created by `Package::from_unpacked_path()`. Change `Package::unpack()` to check `package_file_path` (or an already-unpacked condition) and return a proper `Result` error, or simply short-circuit when it is `None`, instead of calling `unwrap()`: call `unpack_archive` when the path is `Some`, and otherwise return an appropriate `Err` variant (or `Ok(())` if already unpacked) so callers get a non-panicking error path.
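The non-panicking shape can be sketched with a simplified struct; the fields and error message here are illustrative, not the crate's real types:

```rust
use std::path::PathBuf;

// Simplified stand-in for the real Package type.
struct Package {
    package_file_path: Option<PathBuf>,
    unpacked_path: Option<PathBuf>,
}

impl Package {
    fn unpack(&self) -> Result<PathBuf, String> {
        // Already-unpacked packages short-circuit instead of panicking.
        if let Some(path) = &self.unpacked_path {
            return Ok(path.clone());
        }
        // `ok_or` turns the missing-path state into a normal error.
        let archive = self
            .package_file_path
            .as_ref()
            .ok_or("package has no archive and is not unpacked")?;
        Ok(archive.clone()) // a real impl would extract the archive here
    }
}

fn main() {
    let unpacked = Package {
        package_file_path: None,
        unpacked_path: Some(PathBuf::from("/tmp/app")),
    };
    assert!(unpacked.unpack().is_ok()); // from_unpacked_path-style state works

    let broken = Package { package_file_path: None, unpacked_path: None };
    assert!(broken.unpack().is_err()); // an error, not a panic
    println!("unpack checks passed");
}
```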
async fn resolve_path(&self, path: &PathBuf, file_paths: &mut HashMap<PathBuf, PathBuf>) {
    let mut queue = VecDeque::new();
    queue.push_back(path.to_path_buf());

    while let Some(current_path) = queue.pop_front() {
        let physical_path = match current_path.canonicalize() {
            Ok(p) => p,
            Err(e) => {
                debug!(" - skipping path {}: {}", current_path.display(), e);
                continue;
            }
        };

        if physical_path.is_dir() {
            let mut entries = tokio::fs::read_dir(&physical_path).await.unwrap();

            while let Some(entry) = entries.next_entry().await.unwrap() {
                queue.push_back(entry.path());
            }
        } else {
            if !self.should_ignore(&physical_path) {
                let cp = physical_path.clone();
                match self.logical_path(&cp) {
                    None => {
                        debug!(
                            " - skipping file {}: not in base directory {}: ...",
                            physical_path.display(),
                            self.base_dir.display(),
                        );
                        continue;
                    }
                    Some(logical_path) => {
                        debug!(
                            " - resolved path {} to logical path {}",
                            physical_path.display(),
                            logical_path.display()
                        );
                        file_paths.insert(physical_path, logical_path.to_path_buf());
                    }
                }
            }
        }
    }
}
Propagate directory-walk IO failures instead of panicking.
`read_dir(...).await.unwrap()` and `next_entry().await.unwrap()` turn permission issues and filesystem races into process aborts. This helper already sits on a `Result`-returning build path, so it should bubble the `Error` back to the caller.
Suggested fix
- async fn resolve_path(&self, path: &PathBuf, file_paths: &mut HashMap<PathBuf, PathBuf>) {
+ async fn resolve_path(
+ &self,
+ path: &PathBuf,
+ file_paths: &mut HashMap<PathBuf, PathBuf>,
+ ) -> Result<(), Error> {
let mut queue = VecDeque::new();
queue.push_back(path.to_path_buf());
while let Some(current_path) = queue.pop_front() {
let physical_path = match current_path.canonicalize() {
@@
};
if physical_path.is_dir() {
- let mut entries = tokio::fs::read_dir(&physical_path).await.unwrap();
+ let mut entries = tokio::fs::read_dir(&physical_path).await?;
- while let Some(entry) = entries.next_entry().await.unwrap() {
+ while let Some(entry) = entries.next_entry().await? {
queue.push_back(entry.path());
}
} else {
if !self.should_ignore(&physical_path) {
@@
}
}
}
+
+ Ok(())
}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
async fn resolve_path(
    &self,
    path: &PathBuf,
    file_paths: &mut HashMap<PathBuf, PathBuf>,
) -> Result<(), Error> {
    let mut queue = VecDeque::new();
    queue.push_back(path.to_path_buf());

    while let Some(current_path) = queue.pop_front() {
        let physical_path = match current_path.canonicalize() {
            Ok(p) => p,
            Err(e) => {
                debug!(" - skipping path {}: {}", current_path.display(), e);
                continue;
            }
        };

        if physical_path.is_dir() {
            let mut entries = tokio::fs::read_dir(&physical_path).await?;

            while let Some(entry) = entries.next_entry().await? {
                queue.push_back(entry.path());
            }
        } else {
            if !self.should_ignore(&physical_path) {
                let cp = physical_path.clone();
                match self.logical_path(&cp) {
                    None => {
                        debug!(
                            " - skipping file {}: not in base directory {}: ...",
                            physical_path.display(),
                            self.base_dir.display(),
                        );
                        continue;
                    }
                    Some(logical_path) => {
                        debug!(
                            " - resolved path {} to logical path {}",
                            physical_path.display(),
                            logical_path.display()
                        );
                        file_paths.insert(physical_path, logical_path.to_path_buf());
                    }
                }
            }
        }
    }

    Ok(())
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/tower-package/src/native.rs` around lines 402-444: the `resolve_path` helper currently panics on IO errors due to `read_dir(...).await.unwrap()` and `entries.next_entry().await.unwrap()`. Change `resolve_path` to return a `Result` (e.g. `Result<(), Error>` or one wrapping `std::io::Error`) and replace those unwraps with the `?` operator so `read_dir` and `next_entry` failures bubble up; update callers of `resolve_path` to handle the `Result`. Ensure the function still skips `canonicalize` errors as before, and propagate errors from `tokio::fs::read_dir` and `DirEntry::next_entry` rather than unwrapping.
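A synchronous sketch of the same idea, using `std::fs` instead of tokio so it runs standalone: every directory-walk call propagates with `?` rather than unwrapping.

```rust
use std::collections::VecDeque;
use std::fs;
use std::io;
use std::path::PathBuf;

// Breadth-first walk in the same shape as resolve_path, but fallible.
fn collect_files(root: &PathBuf) -> io::Result<Vec<PathBuf>> {
    let mut queue = VecDeque::new();
    queue.push_back(root.clone());
    let mut files = Vec::new();
    while let Some(dir) = queue.pop_front() {
        // A permission error or filesystem race here becomes an Err, not an abort.
        for entry in fs::read_dir(&dir)? {
            let path = entry?.path();
            if path.is_dir() {
                queue.push_back(path);
            } else {
                files.push(path);
            }
        }
    }
    Ok(files)
}

fn main() -> io::Result<()> {
    let base = std::env::temp_dir().join("resolve_path_demo");
    fs::create_dir_all(base.join("nested"))?;
    fs::write(base.join("nested/a.txt"), b"hi")?;
    let files = collect_files(&base)?;
    assert!(files.iter().any(|p| p.ends_with("a.txt")));
    // A missing directory now surfaces as an error the caller can handle.
    assert!(collect_files(&PathBuf::from("/definitely/not/here")).is_err());
    println!("walk checks passed");
    Ok(())
}
```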
/// set_parameter upserts a parameter by lookup name. If a parameter with the given name
/// exists, it is replaced. Otherwise, the parameter is appended.
pub fn set_parameter(&mut self, lookup_name: &str, param: Parameter) {
    if let Some(existing) = self.parameters.iter_mut().find(|p| p.name == lookup_name) {
        *existing = param;
    } else {
        self.parameters.push(param);
    }
}
Keep `lookup_name` and `param.name` in sync.
`set_parameter("old", Parameter { name: "new", ... })` will find or append by `"old"` but persist `"new"`, so later lookups and removals by `"old"` stop working, and you can silently create duplicates. Either reject mismatches or overwrite `param.name` with `lookup_name` before storing.
Suggested fix
- pub fn set_parameter(&mut self, lookup_name: &str, param: Parameter) {
+ pub fn set_parameter(&mut self, lookup_name: &str, mut param: Parameter) {
+ param.name = lookup_name.to_string();
if let Some(existing) = self.parameters.iter_mut().find(|p| p.name == lookup_name) {
*existing = param;
} else {
self.parameters.push(param);
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/tower-package/src/towerfile.rs` around lines 84-92: `set_parameter` can desync `lookup_name` and the stored `Parameter.name`. Ensure the stored parameter always uses `lookup_name` by overwriting `param.name` (or `existing.name` when replacing) before inserting, so the stored name matches the lookup key and you cannot silently create duplicates or break future lookups and removals.
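A minimal sketch of the overwrite fix as a free function; `Parameter` here is a stripped-down stand-in for the real struct:

```rust
// Stand-in parameter type for the sketch.
#[derive(Clone, Debug, PartialEq)]
struct Parameter {
    name: String,
    default: String,
}

fn set_parameter(params: &mut Vec<Parameter>, lookup_name: &str, mut param: Parameter) {
    // Overwrite the incoming name so key and stored name can never diverge.
    param.name = lookup_name.to_string();
    if let Some(existing) = params.iter_mut().find(|p| p.name == lookup_name) {
        *existing = param;
    } else {
        params.push(param);
    }
}

fn main() {
    let mut params = Vec::new();
    set_parameter(&mut params, "old", Parameter { name: "new".into(), default: "1".into() });
    // The mismatched name was rewritten, so a second upsert replaces in place.
    set_parameter(&mut params, "old", Parameter { name: "other".into(), default: "2".into() });
    assert_eq!(params.len(), 1);
    assert_eq!(params[0].name, "old");
    assert_eq!(params[0].default, "2");
    println!("upsert checks passed");
}
```

Rejecting mismatched names with an error would also work; overwriting is simply the quieter of the two options the finding offers.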
Summary by CodeRabbit
New Features
Documentation
Bug Fixes / UX
Chores