If you spend much time talking to me about Python, my firm belief that a well-maintained project should be typed and analyzed will probably come up.
I might be an annoying reviewer, asking you to reconsider using Dict[str, Any] and suggesting TypedDict or even a dataclass instead. Although Python is a great language, I’ve witnessed how its dynamic typing can lead to complex method arguments that are hard to decipher without running the code and using a debugger.
There are instances where the need for flexibility, represented by type X or type Y (known as an untagged union type), is justified. In Python, this need can lead to a technique called monkey patching, where an already-instantiated object is altered at runtime to replace a property with something new. A typical example is a class with a result property that can be assigned from some list of potential class types. This approach can be functional and ergonomic, even in environments with stricter typing rules.
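For readers more at home in Go, the closest analog I can offer is a struct field typed as an interface. The sketch below is hypothetical (the Job and result type names are mine, not from this post):

// Hypothetical analog of the Python "result" property: the field can hold
// any of several concrete types, and callers type-assert or type-switch
// to recover whichever one is actually stored.
type Job struct {
	Result interface{} // e.g., may hold *AWSResult, *AzureResult, etc.
}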
One specific use case I often encounter when developing software for cloud providers involves a modular library that dynamically hoists components at runtime. I might have a process with a set of implemented abstract base classes for each cloud service provider (CSP). Although the implemented ABCs may be similar at the top level, the underlying methods and return values may differ. At the start of a workload, the process may not know whether it will run against AWS, Azure, or GCP, and each workload could run a different set of steps, even against the same provider. Nevertheless, I still want to write code with predictable outcomes, thanks to proper typing. This use case becomes more relevant when creating “plugin” style systems where top-level objects from the framework interact with a nested pluggable element unknown to the core library.
Translating this mindset into Go took some quirky thinking, and I found it challenging to locate a concise answer with example code on how I could use structs and interfaces to load in undefined types. This post isn’t breaking any new ground, but something like it would have saved me a good deal of sweat equity.
Defining some basic structs
First, we should design some basics.
Let’s say we have a set of workload modules, each containing a JobDefinition struct for a different provider (e.g., awsSource.JobDefinition, azureSource.JobDefinition, etc.). Each definition might be similar but will have its own unique attributes. Here are a couple of examples:
// In library one...
type JobDefinition struct {
	ID           string   `json:"id"`
	Source       string
	Resource     []string `json:"resource"`
	AdminAccount string   `json:"adminaccount"`
}

// In library two...
type JobDefinition struct {
	ID       string   `json:"id"`
	Source   string
	Resource []string `json:"resource"`
	RoleName string   `json:"rolename"`
}
The framework needs to support instructions for each of these definitions and provide a mechanism to retrieve each known definition, while still returning something that a plugin (which may not be able to modify this code) could leverage.
type GenericJobDefinition interface{}

type AgentConfig struct {
	ConfigHash       string
	SourcePluginName string
	SerializedSpec   []byte
	TargetTemplate   Template
}

type Template struct {
	Kind string `json:"kind"`
	Spec struct {
		Name    string      `json:"name"`
		Spec    interface{} `json:"spec,omitempty"`
		Actions []string    `json:"actions"`
		Version string      `json:"version"`
	} `json:"spec"`
}
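To make the nested shape concrete, here is a minimal sketch that marshals a Template with hypothetical values; debugTemplate is a name of my own invention, and it assumes the Template type above is in scope with encoding/json and fmt imported:

// Minimal sketch: marshal a Template to inspect the nested spec layout.
// All field values here are hypothetical.
func debugTemplate() error {
	t := Template{Kind: "job"}
	t.Spec.Name = "aws-abc123"
	t.Spec.Actions = []string{"read", "write"}
	t.Spec.Version = "v1"
	out, err := json.MarshalIndent(t, "", "  ")
	if err != nil {
		return err
	}
	fmt.Println(string(out))
	return nil
}

With the inner Spec left nil, the omitempty tag drops it from the output entirely, which is convenient before a provider-specific spec has been attached.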
A blanket interface worked well enough for the easy-to-consume job definition, and storing the job instructions as serialized JSON while pre-processing everything proved ergonomic enough.
func (c *AgentConfig) GetJobDefinition(name string) (GenericJobDefinition, error) {
	var cqtbRecord GenericJobDefinition
	switch name {
	case "aws":
		cqtbRecord = new(awsSource.JobDefinition)
	case "azurerm":
		cqtbRecord = new(azureSource.JobDefinition)
	case "gcp":
		cqtbRecord = new(gcpSource.JobDefinition)
	default:
		return nil, nil
	}
	return cqtbRecord, nil
}
This look-up mechanism served the basic need of establishing which job definition would be handled going forward. By returning nil for unknown job definition names, a plugin could manage its own lookup without breaking anything out of band of this method.
Processing the job definition
With some basics out of the way, the question becomes how to interpret the interface as a specific type in order to drive code-flow logic. Fortunately, Go supports type switches on interface values. Using this pattern, the workflow can use provider-specific methods to translate the job instructions into a provider-specific Spec. In this example, it is also assumed the workload payload will have actions to perform against that provider unique to that given job (as opposed to running the same thing against the provider each time).
func (c *AgentConfig) ApplyRecord(ctx context.Context, job GenericJobDefinition) error {
	switch job := job.(type) {
	case *awsSource.JobDefinition:
		awsSpec, err := awsSource.ApplyAWS(ctx, *job, c.SerializedSpec)
		if err != nil {
			return err
		}
		c.TargetTemplate.Spec.Spec = awsSpec
		c.TargetTemplate.Spec.Actions = job.Resource
	case *azureSource.JobDefinition:
		azureSpec, err := azureSource.ApplyAzure(ctx, *job, c.SerializedSpec)
		if err != nil {
			return err
		}
		c.TargetTemplate.Spec.Spec = azureSpec
		c.TargetTemplate.Spec.Actions = job.Resource
	case *gcpSource.JobDefinition:
		gcpSpec, err := gcpSource.ApplyGCP(ctx, *job, c.SerializedSpec)
		if err != nil {
			return err
		}
		c.TargetTemplate.Spec.Spec = gcpSpec
		c.TargetTemplate.Spec.Actions = job.Resource
		// Set the GCP client secret; AWS and Azure have no equivalent step.
		err = c.SetGCPClientSecret(ctx)
		if err != nil {
			return err
		}
	default:
		return errors.New("unsupported source type")
	}
	c.TargetTemplate.Spec.Name = fmt.Sprintf("%s-%s", c.SourcePluginName, c.ConfigHash)
	return nil
}
This pattern also provides the capability to call additional methods added to the provider-specific AgentConfig implementation, such as setting GCP credentials through a method while not doing anything similar for Azure or AWS.
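Putting the pieces together, a caller might look something like the sketch below; runJob is a hypothetical caller of my own, and the fallback behavior for an unknown provider is illustrative rather than lifted from a real workload:

// Illustrative wiring of the lookup and the type switch; runJob is a
// hypothetical caller, not part of the original framework.
func runJob(ctx context.Context, cfg *AgentConfig) error {
	jobDef, err := cfg.GetJobDefinition("gcp")
	if err != nil {
		return err
	}
	if jobDef == nil {
		// Unknown provider name: a plugin could perform its own lookup here.
		return errors.New("no built-in job definition; deferring to plugin")
	}
	return cfg.ApplyRecord(ctx, jobDef)
}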