Helm2's --timeout took a number of seconds, rather than the
ParseDuration-compatible string that helm3 uses. For backward
compatibility, convert a bare number into a duration string.
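A minimal sketch of the shim, with a hypothetical helper name and package
(the real conversion lives wherever the config is loaded):

```go
package run

import "strconv"

// normalizeTimeout turns a helm2-style bare number of seconds into a
// ParseDuration-compatible string; anything else passes through untouched.
func normalizeTimeout(raw string) string {
	if _, err := strconv.Atoi(raw); err == nil {
		return raw + "s" // "300" -> "300s"
	}
	return raw // already a duration string like "5m30s"
}
```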
These comments were a reasonable attempt at ensuring the documentation
matched reality, but the checkbox in the pull request template is much
more likely to produce results.
It's a little tricky to find a balance between "brittle" and "thorough"
in this test: I'd like to verify that, e.g., the certificate is in
clusters[0].cluster.certificate-authority-data, not at the root. On the
other hand, we can't actually show that it's a valid kubeconfig file
without actually *using* it, so there's a hard upper limit on the
strength of the assertions. I've settled on verifying that all the
settings make it into the file and that the file is syntactically valid
YAML.
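Roughly the shape of the assertion, assuming gopkg.in/yaml.v2 and invented
path/value/package names; the real test covers every setting, not just the
certificate:

```go
package kube

import (
	"io/ioutil"
	"testing"

	"gopkg.in/yaml.v2"
)

func TestKubeconfigContents(t *testing.T) {
	// placeholders for whatever the test actually writes out
	const path = "/tmp/kubeconfig"
	const wantCert = "Zm9vCg=="

	raw, err := ioutil.ReadFile(path)
	if err != nil {
		t.Fatal(err)
	}

	// syntactically-valid yaml is as far as we can go without *using* the file
	var conf map[string]interface{}
	if err := yaml.Unmarshal(raw, &conf); err != nil {
		t.Fatalf("kubeconfig is not valid yaml: %s", err)
	}

	// verify the cert landed at clusters[0].cluster.certificate-authority-data
	clusters, ok := conf["clusters"].([]interface{})
	if !ok || len(clusters) == 0 {
		t.Fatal("expected at least one cluster entry")
	}
	cluster, _ := clusters[0].(map[interface{}]interface{})["cluster"].(map[interface{}]interface{})
	if cluster["certificate-authority-data"] != wantCert {
		t.Errorf("certificate-authority-data not found where expected")
	}
}
```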
Redacting KubeToken may not be sufficient, since it's possible that
someone would put secrets in Values or StringValues. Unilaterally
redacting those seems unhelpful, though, since they may be the very
thing the user is trying to debug. I've settled on redacting the obvious
field without trying to promise that all sensitive data will be hidden.
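The gist, with invented method and field names aside from KubeToken itself:

```go
package run

import (
	"fmt"
	"io"
)

// Config is a stand-in here; only KubeToken is taken from the real struct.
type Config struct {
	KubeToken string
	Stderr    io.Writer
}

// logDebug prints the config with the obviously-sensitive field blanked out.
// Values/StringValues are left alone, since they may be what's being debugged.
func (cfg Config) logDebug() {
	redacted := cfg
	redacted.KubeToken = "(redacted)"
	fmt.Fprintf(cfg.Stderr, "helm config: %+v\n", redacted)
}
```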
I'd like to keep Prefix's scope fairly limited, because it has potential
to spiral into something magnificently complex. You get one prefix
setting, it goes in `settings` not `environment`, end of feature.
Trying to guess in advance which part of the config a user will put in
the `settings` section and which they'll put in `environment` is a
fool's errand. Just let everything go in either place.
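A rough illustration of the idea, assuming drone's convention that `settings`
arrive as PLUGIN_-prefixed env vars and `environment` entries arrive bare. The
helper name and the precedence shown are assumptions, not the real behavior
(see the note about TestPopulateWithConflictingVariables below):

```go
package helm

import "os"

// lookup checks both places a setting can come from. Which one wins when
// both are set is an arbitrary choice in this sketch; the real precedence is
// whatever envconfig does.
func lookup(name string) (string, bool) {
	if v, ok := os.LookupEnv("PLUGIN_" + name); ok { // `settings` form
		return v, true
	}
	return os.LookupEnv(name) // `environment` form
}
```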
The ServiceAccount field only had an `envconfig` tag (as opposed to
`split_words`) because that triggered envconfig to look for the non-
prefixed form. Now that we're finding non-prefixed forms of everything,
we can use the clearer/more concise tag.
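Illustratively (the surrounding struct is elided, and the exact tag values are
assumptions; only the ServiceAccount field itself comes from this change):

```go
package helm

type Config struct {
	// before: the alternate name made envconfig also check the bare,
	// non-prefixed SERVICE_ACCOUNT variable
	//   ServiceAccount string `envconfig:"SERVICE_ACCOUNT"`

	// after: the generic non-prefixed lookup covers that case, so the
	// simpler tag is enough
	ServiceAccount string `split_words:"true"`
}
```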
Note that TestPopulateWithConflictingVariables isn't meant to say
"here's what behavior we *want*" so much as "here's what the behavior
*is*." I don't think either behavior is better than the other, but we
should know which one we're getting.
Helm3 renamed its delete command to uninstall. We should still accept
helm_command=delete for drone-helm compatibility, but the internals
should use Helm's preferred name.
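Something like this, with a hypothetical helper name for the aliasing:

```go
package helm

// normalizeCommand maps the legacy drone-helm name onto helm3's.
func normalizeCommand(command string) string {
	if command == "delete" {
		return "uninstall" // same operation, helm3's preferred name
	}
	return command
}
```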
The tests need to allow calls to Stdout/Stderr so they don't get
"Unexpected call" errors from gomock, but these tests aren't meant to
assert that those calls actually happened. Using .AnyTimes() allows zero
or more calls.
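For example (the mock type and its methods are assumptions; NewMockcmd stands
in for whatever mockgen generates, and only the gomock calls are the library's
real API):

```go
package run

import (
	"testing"

	"github.com/golang/mock/gomock"
)

func TestUpgradeStep(t *testing.T) {
	ctrl := gomock.NewController(t)
	defer ctrl.Finish()

	cmd := NewMockcmd(ctrl) // hypothetical gomock-generated mock

	// allowed, but not required: zero or more calls satisfy the controller
	cmd.EXPECT().Stdout(gomock.Any()).AnyTimes()
	cmd.EXPECT().Stderr(gomock.Any()).AnyTimes()

	// the call the test actually cares about
	cmd.EXPECT().Run().Times(1)
}
```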
This change revealed more about how the system needs to work, so there
are some supporting changes:
* helm.upgrade and helm.help are now vars rather than raw functions.
This allows unit tests to target the "which step should we run"
logic directly by comparing function pointers (see the sketch after
this list), rather than having to construct a fully-valid Plan and
then infer the logic's correctness from the Plan's state.
* configuration that's specific to kubeconfig initialization is now part
of the InitKube struct rather than run.Config, since other steps
shouldn't need access to those settings (particularly the secrets).
* Step.Execute now receives a run.Config so it can log debug output.
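For the first point, the comparison in the tests looks roughly like this;
determineSteps, upgrade, and the Config literal are assumed names, and reflect
is used because Go function values can't be compared with ==:

```go
package helm

import (
	"reflect"
	"testing"
)

func TestUpgradeCommandSelectsUpgradeStep(t *testing.T) {
	cfg := Config{Command: "upgrade"} // assumed field name

	got := determineSteps(cfg) // hypothetical "which step should we run" helper

	// function values aren't comparable with ==, so compare their pointers
	if reflect.ValueOf(got).Pointer() != reflect.ValueOf(upgrade).Pointer() {
		t.Errorf("expected the upgrade step function")
	}
}
```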
I'm vacillating about the choice to have separate Config structs in the
`helm` and `run` packages. I can't tell whether it's "good separation of
concerns" or "cumbersome and over-engineered." It seems appropriate at
the moment, though.
The recommended way to test code that uses exec.Cmd is to set up a real
exec.Cmd that re-invokes `go test` with extra arguments, triggering a
specially-constructed test case that behaves the way the mocked-out
command would. That's a sensible way to test exec.Cmd itself, but for
code that merely invokes it, I think it makes more sense to use actual
mocks.
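Concretely, that means hiding exec.Cmd behind a small interface so a
gomock-generated implementation can stand in for it; the exact method set and
names below are assumptions, not the plugin's real interface:

```go
package run

import "os/exec"

// cmd is the slice of *exec.Cmd's surface that the steps actually use.
// (*exec.Cmd satisfies it as-is: Run and String are real methods.)
type cmd interface {
	Run() error
	String() string
}

// command is a package-level var so tests can swap in a constructor that
// returns a mock instead of a real *exec.Cmd.
var command = func(path string, args ...string) cmd {
	return exec.Command(path, args...)
}
```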