# giving — the reactive logger

`giving` is a simple, magical library that lets you log or "give" arbitrary data throughout a program and then process it as an event stream. You can use it to log to the terminal, to wandb or mlflow, to compute minimums, maximums, rolling means, etc., separate from your program's core logic.
- Inside your code, `give()` every object or datum that you may want to log or compute metrics about.
- Wrap your main loop with `given()` and define pipelines to map, filter and reduce the data you gave.
## Examples

Simple logging with `given().display()`:

```python
a, b = 10, 20
give()
give(a * b, c=30)
```
Extract values into a list with `values()`:

```python
s = 0
with given()["s"].values() as results:
    for i in range(5):
        s += i
        give(s)
print(results)
```
Reductions (min, max, count, etc.):

```python
def collatz(n):
    while n != 1:
        give(n)
        n = (3 * n + 1) if n % 2 else (n // 2)

with given() as gv:
    gv["n"].max().print("max: {}")
    gv["n"].count().print("steps: {}")

    collatz(2021)
```
Using the `eval` method instead of a `with` block:

```python
st, = given()["n"].count().eval(collatz, 2021)
print(st)
```
The `kscan` operator:

```python
with given() as gv:
    gv.kscan().display()

    give(elk=1)
    give(rabbit=2)
    give(elk=3, wolf=4)
```
The `throttle` operator:

```python
import time

with given() as gv:
    gv.throttle(1).display()

    for i in range(50):
        give(i)
        time.sleep(0.1)
```
The examples above show only a small sample of the available operators.
## Give

There are multiple ways you can use `give`. `give` returns None unless it is given a single positional argument, in which case it returns the value of that argument.
- `give(key=value)`

  This is the most straightforward way to use `give`: you write out both the key and the associated value.

  Returns: None

- `x = give(value)`

  When no key is given, but the result of `give` is assigned to a variable, the key is the name of that variable. In other words, the above is equivalent to `give(x=value)`.

  Returns: The value

- `give(x)`

  When no key is given and the result is not assigned to a variable, `give(x)` is equivalent to `give(x=x)`. If the argument is an expression like `x * x`, the key will be the string `"x * x"`.

  Returns: The value

- `give(x, y, z)`

  Multiple arguments can be given. The above is equivalent to `give(x=x, y=y, z=z)`.

  Returns: None

- `x = value; give()`

  If `give` has no arguments at all, it will look at the immediately previous statement and infer what you mean. The above is equivalent to `x = value; give(x=value)`.

  Returns: None
## Important functions and methods

- print and display: for printing out the stream
- values and accum: for accumulating into a list
- subscribe and ksubscribe: perform a task on every element
- where, where_any, keep, `gv["key"]`, `gv["?key"]`: filter based on keys
## Operator summary

Not all operators are listed here; see the project documentation for the complete list.
### Filtering
- filter: filter with a function
- kfilter: filter with a function (keyword arguments)
- where: filter based on keys and simple conditions
- where_any: filter based on keys
- keep: filter based on keys (+drop the rest)
- distinct: only emit distinct elements
- norepeat: only emit distinct consecutive elements
- first: only emit the first element
- last: only emit the last element
- take: only emit the first n elements
- take_last: only emit the last n elements
- skip: suppress the first n elements
- skip_last: suppress the last n elements
### Mapping
- map: map with a function
- kmap: map with a function (keyword arguments)
- augment: add extra keys using a mapping function
- getitem: extract value for a specific key
- sole: extract value from dict of length 1
- as_: wrap as a dict
### Reduction
- reduce: reduce with a function
- scan: emit a result at each reduction step
- roll: reduce using overlapping windows
- kmerge: merge all dictionaries in the stream
- kscan: incremental version of kmerge
### Arithmetic reductions

The arithmetic reductions (min, max, count, mean, etc.) can be called with the `scan` argument set to `True` to use `scan` instead of `reduce`. `scan` can also be set to an integer, in which case `roll` is used.
### Wrapping

- give.wrap: give a special key at the beginning and end of a block
- give.wrap_inherit: give a special key at the beginning and end of a block
- give.inherit: add default key/values for every give() in the block
- gv.wrap: plug a context manager at the location of a give.wrap
- gv.kwrap: same as gv.wrap, but pass kwargs
### Timing
- debounce: suppress events that are too close in time
- sample: sample an element every n seconds
- throttle: emit at most once every n seconds
### Debugging

- breakpoint: set a breakpoint whenever data comes in. Use this with filters.
- tag: assigns a special word to every entry. Use with breakword.
- breakword: set a breakpoint on a specific word set by tag, using the BREAKWORD environment variable.
### Other

- accum: accumulate into a list
- display: print out the stream (pretty)
- print: print out the stream
- values: accumulate into a list (context manager)
- subscribe: run a task on every element
- ksubscribe: run a task on every element (keyword arguments)
## ML ideas

Here are some ideas for using `giving` in a machine learning model training context:
```python
from giving import give, given


def main():
    model = Model()

    for i in range(niters):
        # Give the model. give looks at the argument string, so
        # give(model) is equivalent to give(model=model)
        give(model)

        loss = model.step()

        # Give the iteration number and the loss
        # (equivalent to give(i=i, loss=loss))
        give(i, loss)

    # Give the final model. The final=True key is there so we can
    # filter on it.
    give(model, final=True)


if __name__ == "__main__":
    with given() as gv:

        # ===========================================================
        # Define our pipeline **before** running main()
        # ===========================================================

        # Filter all the lines that have the "loss" key
        # NOTE: Same as gv.filter(lambda values: "loss" in values)
        losses = gv.where("loss")

        # Print the losses on stdout
        losses.display()                 # always
        losses.throttle(1).display()     # OR: once every second
        losses.slice(step=10).display()  # OR: every 10th loss

        # Log the losses (and indexes i) with wandb
        # >> is shorthand for .subscribe()
        losses >> wandb.log

        # Print the minimum loss at the end
        losses["loss"].min().print("Minimum loss: {}")

        # Print the mean of the last 100 losses
        # * affix adds columns, so we will display i, loss and meanloss
        #   together
        # * The scan argument outputs the mean incrementally
        # * It's important that each affixed column has the same length
        #   as the losses stream (or "table")
        losses.affix(meanloss=losses["loss"].mean(scan=100)).display()

        # Store all the losses in a list
        losslist = losses["loss"].accum()

        # Set a breakpoint whenever the loss is nan or infinite
        losses["loss"].filter(lambda loss: not math.isfinite(loss)).breakpoint()

        # Filter all the lines that have the "model" key:
        models = gv.where("model")

        # Write a checkpoint of the model at most once every 30 minutes
        models["model"].throttle(30 * 60).subscribe(
            lambda model: model.checkpoint()
        )

        # Watch with wandb, but only once at the very beginning
        models["model"].first() >> wandb.watch

        # Write the final model (you could also use models.last())
        models.where(final=True)["model"].subscribe(
            lambda model: model.save()
        )

        # ===========================================================
        # Finally, execute the code. All the pipelines we defined
        # above will proceed as we give data.
        # ===========================================================
        main()
```