A Python and R autograding solution

Overview

Otter-Grader

Otter-Grader is a lightweight, modular open-source autograder developed by the Data Science Education Program at UC Berkeley. It is designed to work with classes at any scale by abstracting away the autograding internals in a way that is compatible with any instructor's assignment distribution and collection pipeline. Otter supports local grading through parallel Docker containers, grading using the autograder platforms of third-party learning management systems (LMSs), the deployment of an Otter-managed grading virtual machine, and a client package that allows students to run public checks on their own machines. Otter is designed to grade Python scripts and Jupyter notebooks, and is compatible with a few different LMSs, including Canvas and Gradescope.
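
For a sense of the student-facing workflow mentioned above, here is a minimal sketch of the notebook-side check API (hedged; exact call signatures can vary between Otter versions):

# Student-side checks in an Otter-distributed notebook (illustrative).
import otter

grader = otter.Notebook()  # loads the checks shipped with the notebook
grader.check("q1")         # run the public tests for one question
grader.check_all()         # run every available check before submitting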

Documentation

The documentation for Otter can be found here.

Contributing

See CONTRIBUTING.md.

Changelog

See CHANGELOG.md.

Comments
  • Make otter-grader installable in JupyterLite

    Is your feature request related to a problem? Please describe.

    JupyterLite is a full-blown scientific Python environment running entirely in your browser, no server required! You can try a demo here.

    If you try to install otter-grader with:

    import micropip
    await micropip.install('otter-grader')
    

    It fails while trying to install the pypdf2 package. JupyterLite can only install pure-Python packages (other than the compiled packages it comes with, such as numpy, pandas, etc.), so these packages will need to be made optional wherever possible.

    Describe the solution you'd like

    Figure out which packages are required for our user-facing functionality, and try making everything else optional.
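
    For illustration only (this is not Otter's actual module layout), a common pattern for making a heavy dependency optional is to import it lazily and fail only when the feature that needs it is used:

    # Hypothetical sketch of an optional dependency, assuming PDF support is
    # the only code path that needs the package micropip stumbles on.
    try:
        import PyPDF2
    except ImportError:
        PyPDF2 = None

    def export_pdf(notebook_path):
        if PyPDF2 is None:
            raise RuntimeError(
                "PDF export needs PyPDF2, which is unavailable in this "
                "environment (e.g. JupyterLite); the rest of the package still works."
            )
        ...  # PDF generation would go here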

    Describe alternatives you've considered

    1. Don't support jupyterlite

    Additional context

    /cc @ericvd-ucb who really wants this, and @jptio the core dev of jupyterlite.

    enhancement 
    opened by yuvipanda 24
  • Otter assign - tests fail when answers are correct, "Error: object not found"

    Describe the bug

    I am following the otter sample at https://github.com/ucbds-infra/ottr-sample to create an assignment, and all of my tests are failing on Gradescope in the tested Rmd submission (a submission with all correct solutions). It is notable that all tests pass successfully when testing locally using otter run lab04.Rmd. I am attaching a ZIP file containing the master notebook file that I am using and a screenshot of the failing tests. It seems that the tests are unable to recognize any of the variables that are created in the immediately preceding solution cell.

    Archive.zip

    Screen Shot 2021-07-03 at 12 45 21 AM

    To Reproduce

    Steps to reproduce the behavior:

    1. Unzip the Archive.zip attached. cd into its directory.
    2. Use Otter assign to prepare the autograder.zip file: otter assign lab04.Rmd dist. Note that the header of the master notebook has the following:
    BEGIN ASSIGNMENT
    requirements: requirements.R
    generate: true
    
    3. Upload the autograder.zip from dist/autograder to Gradescope.
    4. Test the autograder using the solution Rmd file.
    5. The above error is produced.

    Expected behavior

    All tests should pass successfully for the tested submission .Rmd file on Gradescope.

    Versions

    Python version: 3.9.5, Otter-Grader version: 2.2.2

    bug 
    opened by jerrybonnell 19
  • Issues running tests with Rmd notebooks

    Hello,

    I've gotten through the full workflow with the ipynb demo files in the tutorial documentation, but I want to use otter to grade Rmd notebooks.

    I have the following setup so far (all files can be found on my github):

    cd ~/Documents/otter-test
    
    .
    ├── TestAssignment.Rmd
    ├── dist
    │   ├── autograder
    │   │   ├── TestAssignment.Rmd
    │   │   └── tests
    │   │       ├── Question1.R
    │   │       └── Question2.R
    │   └── student
    │       ├── TestAssignment.Rmd
    │       └── tests
    │           ├── Question1.R
    │           └── Question2.R
    └── submissions
        ├── Student1-Correct-TestAssignment.Rmd
        └── Student2-Wrong-TestAssignment.Rmd
    
    

    Question1 has one open and one hidden test, and Question2 has 2 hidden tests. The student distribution notebook looks fine to me, and I copied them and put in answers to make the two submission files. I've run into three issues:

    1. In R, the relative path to tests does not work in the student notebook; I need the full path (not the end of the world; the dist/autograder notebook can use relative paths fine):
    > setwd("~/Documents/otter-test/dist/student/")
    > 
    > library(testthat)
    > library(ottr)
    >
    > ice_cream <- "vanilla" # YOUR CODE HERE
    > . = ottr::check("tests/Question1.R")
    cannot open file 'tests/Question1.R': No such file or directoryError in file(filename, "r") : cannot open the connection
    > getwd()
    [1] "/Users/hgibling/Documents/otter-test/dist/student"
    > . = ottr::check("~/Documents/otter-test/dist/student/tests/Question1.R")
    All tests passed!
    
    2. While the tests work appropriately in the original raw notebook, in the student notebooks (and in the dist/autograder notebook) the tests pass when the answer is blatantly wrong:
    > getwd()
    [1] "/Users/hgibling/Documents/otter-test/dist/student"
    > ice_cream <- 13 # YOUR CODE HERE
    > . = ottr::check("~/Documents/otter-test/dist/student/tests/Question1.R")
    All tests passed!
    
    ###
    
    > setwd("~/Documents/otter-test/dist/autograder/")
    > ice_cream <- "chocolate" #SOLUTION
    > . = ottr::check("tests/Question1.R")
    All tests passed!
    > ice_cream <- 13 #SOLUTION
    > . = ottr::check("tests/Question1.R")
    All tests passed!
    
    3. I imagine the previous issue might have something to do with not being able to run otter check in the terminal. Since I'm pretending to be a student, I am supplying the path to the student tests created after otter assign in dist:
    cd ~/Documents/otter-test/submissions
    otter check Student1-Correct-TestAssignment.Rmd -t ~/Documents/otter-test/dist/student/tests -q Question1 
    
    Traceback (most recent call last):
      File "/Users/hgibling/miniconda3/bin/otter", line 8, in <module>
        sys.exit(cli())
      File "/Users/hgibling/miniconda3/lib/python3.7/site-packages/click/core.py", line 1137, in __call__
        return self.main(*args, **kwargs)
      File "/Users/hgibling/miniconda3/lib/python3.7/site-packages/click/core.py", line 1062, in main
        rv = self.invoke(ctx)
      File "/Users/hgibling/miniconda3/lib/python3.7/site-packages/click/core.py", line 1668, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/Users/hgibling/miniconda3/lib/python3.7/site-packages/click/core.py", line 1404, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/Users/hgibling/miniconda3/lib/python3.7/site-packages/click/core.py", line 763, in invoke
        return __callback(*args, **kwargs)
      File "/Users/hgibling/miniconda3/lib/python3.7/site-packages/otter/cli.py", line 55, in check_cli
        return check(*args, **kwargs)
      File "/Users/hgibling/miniconda3/lib/python3.7/site-packages/otter/check/__init__.py", line 77, in main
        raise e
      File "/Users/hgibling/miniconda3/lib/python3.7/site-packages/otter/check/__init__.py", line 50, in main
        assert os.path.isfile(test_path), "Test {} does not exist".format(question)
    AssertionError: Test Question1.R does not exist
    

    I'm not sure what's happening. Any ideas?

    bug question 
    opened by hgibling 13
  • Showing output to students for debugging purposes

    Hi team,

    I am using Gradescope, and I would like to display the autograder output to students when a notebook fails to run, so they can debug their own notebooks. Currently, only instructors can see the log of the errors; the students cannot.

    According to Gradescope, I need to change "stdout_visibility" in the results.json file, but there is no results.json file when I run otter assign.

    How can I configure this? Thanks, Quan
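
    For context, results.json is written by the autograder at grading time on Gradescope, not by otter assign. A hedged sketch of the usual approach, assuming your Otter release supports the show_stdout grading option (check the configuration reference for your version; it can also live under the generate key of the assignment metadata), is to set it in otter_config.json before building the autograder zip:

    import json

    # show_stdout (if supported by your Otter version) maps to Gradescope's
    # stdout_visibility so students can see the grading output.
    with open("otter_config.json", "w") as f:
        json.dump({"show_stdout": True}, f, indent=2)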

    question 
    opened by quan3010 12
  • Add download HTML attribute to download links for JupyterLab.

    As explained here by @jasongrout, JupyterLab will handle download links correctly if the download attribute is set. We will continue investigating ways to smooth this experience in JupyterLab, as it does come up in other scenarios for our users, but for our use case with otter for grading, this simple fix should do the trick.
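
    For reference, a minimal sketch of the kind of link this change affects (illustrative HTML emitted via IPython, not the exact code in the PR):

    from IPython.display import HTML, display

    # The download attribute tells JupyterLab to download the target file
    # instead of trying to open it in the browser.
    display(HTML(
        '<a href="submission.zip" download="submission.zip" target="_blank">'
        'Download your submission</a>'
    ))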

    Closes #339.

    bug 
    opened by fperez 12
  • grader generates empty pdfs when grading open questions

    Describe the bug

    I have open questions in notebooks and I grade zip files. The grader works fine, but when supplying the --pdfs option, it generates empty pdfs. I could not find information on how to solve this in the documentation.

    To Reproduce

    My otter_config.json contains

    {
      "pdf": true,
      "zips": true
    }
    

    The zipped notebooks contain cells of the form

    <!-- BEGIN QUESTION -->
    <!--
    BEGIN QUESTION
    name: q
    manual: True
    -->
    

    And answers of the form

    **My Answer:**

    I run otter grade -z --pdfs

    Expected behavior

    I would like to get the grading summary and generated pdfs in the submissions_pdfs folder. The grading summary is ok but all the pdfs in the generated pdfs folder are empty.

    Versions

    Python version: 3.8.10 Otter-Grader version: 3.2.1

    bug wontfix 
    opened by shaolintl 10
  • `otter generate autograder` generates buggy `autograder.zip`

    otter generate autograder generates autograder.zip with both requirements.txt and requirements.r.

    Unfortunately, I am not permitted to share the code, but I imagine this will reproduce the same situation, after you cd into any Jupyter Notebook Python (not R) assignment:

    conda activate base
    otter generate autograder
    

    The problem is that Gradescope programming assignments don't like this zip file. It's hard to copy/paste Gradescope programming assignment Docker build output, but here are the last few lines (all somewhat garbled):

    ==> For changes to take effect, close and re-open your current shell. <==
    
    Cloning into '/autograder/source/ottr'...
    ERROR conda.cli.main_run:execute(32): Subprocess for 'conda run ['Rscript', '-e', 'devtools::install\\(\\)']' command failed.  (See above for error)
    Error in loadNamespace(j <- i[[1L]], c(lib.loc, .libPaths()), versionCheck = vI[[j]]) : 
      there is no package called ‘usethis’
    Calls: :: ... loadNamespace -> withRestarts -> withOneRestart -> doWithOneRestart
    Execution halted
    
    ERROR conda.cli.main_run:execute(32): Subprocess for 'conda run ['Rscript', '-e', 'devtools::install\\(\\)']' command failed.  (See above for error)
    Error in loadNamespace(j <- i[[1L]], c(lib.loc, .libPaths()), versionCheck = vI[[j]]) : 
      there is no package called ‘usethis’
    Calls: :: ... loadNamespace -> withRestarts -> withOneRestart -> doWithOneRestart
    Execution halted
    

    There should be no running of Rscript at all, so it's likely an easy fix.

    I can get around it quite easily by just deleting requirements.r before I upload the zip file to Gradescope.
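
    A small sketch of that workaround (a hypothetical helper, not part of Otter): rewrite autograder.zip without requirements.r so Gradescope never invokes Rscript for a Python-only assignment.

    import zipfile

    def strip_r_requirements(src="autograder.zip", dst="autograder-fixed.zip"):
        # Copy every entry except requirements.r into a new zip.
        with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
            for item in zin.infolist():
                if item.filename.endswith("requirements.r"):
                    continue
                zout.writestr(item, zin.read(item.filename))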

    Versions

    Otter: 1.1.6, Python: 3.8.5 (default, Sep 4 2020, 07:30:14) [GCC 7.3.0]

    bug 
    opened by tbrown122387 10
  • UnicodeDecodeError

    Describe the bug

    A UnicodeDecodeError occurs when running grader.check_all() in a student Jupyter notebook in JupyterLab Desktop on Windows 11 if a question uses Japanese.

    • No such error for grader.check() for individual questions.
    • No such error occurs if grader.check_all() of the same file is run in JupyterLab Desktop in Mac.

    To Reproduce

    Steps to reproduce the behavior:

    1. otter assign the attached file in Mac.
    2. Open a student JN in JupyterLab Desktop in Windows 11 Pro
    3. Run all commands including grader.check_all()
    4. See the error message attached below.

    Expected behavior

    No such error occurs.

    Versions

    otter-grader 4.0.1, JupyterLab Desktop 3.4.5-1

    Additional context

    cp932.ipynb.txt

    The error message

    ---------------------------------------------------------------------------
    UnicodeDecodeError                        Traceback (most recent call last)
    Input In [5], in <cell line: 1>()
    ----> 1 grader.check_all()
    
    File ~\AppData\Roaming\jupyterlab-desktop\jlab_server\lib\site-packages\otter\check\utils.py:151, in grading_mode_disabled(wrapped, self, args, kwargs)
        149 if type(self)._grading_mode:
        150     return
    --> 151 return wrapped(*args, **kwargs)
    
    File ~\AppData\Roaming\jupyterlab-desktop\jlab_server\lib\site-packages\otter\check\utils.py:188, in logs_event.<locals>.event_logger(wrapped, self, args, kwargs)
        186 except Exception as e:
        187     self._log_event(event_type, success=False, error=e)
    --> 188     raise e
        190 else:
        191     self._log_event(event_type, results=results, question=question, shelve_env=shelve_env)
    
    File ~\AppData\Roaming\jupyterlab-desktop\jlab_server\lib\site-packages\otter\check\utils.py:182, in logs_event.<locals>.event_logger(wrapped, self, args, kwargs)
        179     question, results, shelve_env = wrapped(*args, **kwargs)
        181 else:
    --> 182     results = wrapped(*args, **kwargs)
        183     shelve_env = {}
        184     question = None
    
    File ~\AppData\Roaming\jupyterlab-desktop\jlab_server\lib\site-packages\otter\check\notebook.py:438, in Notebook.check_all(self)
        432 """
        433 Runs all tests on this notebook. Tests are run against the current global environment, so any
        434 tests with variable name collisions will fail.
        435 """
        436 self._log_event(EventType.BEGIN_CHECK_ALL)
    --> 438 tests = list_available_tests(self._path, self._resolve_nb_path(None, fail_silently=True))
        440 global_env = inspect.currentframe().f_back.f_back.f_back.f_globals
        442 self._logger.debug(f"Found available tests: {', '.join(tests)}")
    
    File ~\AppData\Roaming\jupyterlab-desktop\jlab_server\lib\site-packages\otter\check\utils.py:221, in list_available_tests(tests_dir, nb_path)
        218         raise ValueError("Tests directory does not exist and no notebook path provided")
        220     with open(nb_path) as f:
    --> 221         nb = json.load(f)
        223     tests = list(nb["metadata"][NOTEBOOK_METADATA_KEY]["tests"].keys())
        225 return sorted(tests)
    
    File ~\AppData\Roaming\jupyterlab-desktop\jlab_server\lib\json\__init__.py:293, in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
        274 def load(fp, *, cls=None, object_hook=None, parse_float=None,
        275         parse_int=None, parse_constant=None, object_pairs_hook=None, **kw):
        276     """Deserialize ``fp`` (a ``.read()``-supporting file-like object containing
        277     a JSON document) to a Python object.
        278 
       (...)
        291     kwarg; otherwise ``JSONDecoder`` is used.
        292     """
    --> 293     return loads(fp.read(),
        294         cls=cls, object_hook=object_hook,
        295         parse_float=parse_float, parse_int=parse_int,
        296         parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
    
    UnicodeDecodeError: 'cp932' codec can't decode byte 0x82 in position 508: illegal multibyte sequence
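
    For context, the last frame shows the notebook being read with a plain open() call, which falls back to the platform locale encoding (cp932 on Japanese Windows). A minimal sketch of locale-independent loading, assuming the notebook is UTF-8 (the usual .ipynb encoding):

    import json

    def load_notebook(nb_path):
        # Read with an explicit encoding instead of the platform default,
        # which cannot decode the UTF-8 bytes in this notebook.
        with open(nb_path, encoding="utf-8") as f:
            return json.load(f)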
    
    bug 
    opened by spring-haru 9
  • Otter grading fails when checking more than 2 matplotlib objects

    Describe the bug

    I'm trying to create exercises for students to try different visualizations with matplotlib. I'm checking their work by saving the output to an axis object and then interrogating its properties.

    In general:

    ax_a = sns.barplot(data = table.to_df(), x = 'col2', y = 'col1', hue='col3') # SOLUTION
    

    and then in the tests

    # Test #
    xticks = {tick.get_text() for tick in ax_a.get_xticklabels()}
    expected = {'True', 'False'}
    assert (xticks == expected), 'Wrong patient ticks, did you plot right independent variable?'
    

    This all works great.

    Then, for some reason, when more than 2 plots are in the notebook, the otter autochecker fails, yet it generates notebooks that pass when run in interactive Jupyter notebooks.

    To Reproduce

    I've attached a minimal_example.ipynb file (minimal_example.ipynb.txt). Currently, when running otter assign it fails, yet it creates an autograder solution that passes in the distribution folder.

    If you delete the third question, otter assign runs correctly. It doesn't even matter if it's part of the doctests. Just generating the third plot causes the previous two to fail. This is even more perplexing because I intentionally saved each plot into its own axis so I could robustly validate them even if the student ran things out of order.

    Expected behavior

    Assignments with 3 or more plots should pass otter assign and grade correctly.

    Versions

    Interactive notebooks otter-grader 1.1.6 and general Colab environment.

    Generating/grading version 2.1.7, Python 3.7.10

    Additional context

    Is there an alternative strategy for validating the "correct plot" was made aside from manual mode?
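
    For reference, the property-interrogation style above also works on plain matplotlib Axes; a small self-contained sketch (seaborn-free, with made-up data) of the kind of assertions these tests run:

    import matplotlib
    matplotlib.use("Agg")  # headless backend, as in an autograder
    import matplotlib.pyplot as plt

    fig, ax_a = plt.subplots()
    ax_a.bar(["False", "True"], [3, 5])
    fig.canvas.draw()  # ensure tick label text is populated before reading it

    xticks = {tick.get_text() for tick in ax_a.get_xticklabels()}
    assert xticks == {"True", "False"}, "Wrong ticks, did you plot the right independent variable?"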

    bug 
    opened by JudoWill 9
  • Optimize & fix docker grading

    Hello 😀 there have been some minor issues using the provided setup.sh and I would like to provide a fix:

    • Because the otter-grader base image is Ubuntu 20.04, texlive-generic-recommended is replaced by texlive-plain-generic
    • Use wkhtmltox_0.12.6-1.focal_amd64.deb instead of wkhtmltox_0.12.6-1.bionic_amd64.deb
    • Installing libcurl4-gnutls-dev and libcurl4-openssl-dev at once causes conflicts.
    • You can reduce the number of layers when building the Docker image by merging commands in the Dockerfile.
    opened by fritterhoff 9
  • Speed up building R autograders on gradescope

    Describe the bug

    Building an R autograder on Gradescope takes about 25 min each time, while building one in Python takes about 5 min. There are two steps where R autograders on Gradescope get stuck for ~10 min with no additional output in the console:

    Just after installing the conda packages (this takes about 3 seconds in Python): (screenshot)

    Just after running apt-get clean (this takes about 2-3 minutes for Python): (screenshot)

    To Reproduce

    Upload the example R autograder config with no additional packages to Gradescope.

    Expected behavior

    Faster build times on R would be very helpful. From the sample setup.sh in the docs, it seems like several of the steps could be skipped with access to the ucbdsinfra/otter-grader container image. Is there any way this could be distributed and uploaded to Gradescope as part of the zip file (or something along those lines), so that the wait time is less than 5 min?

    It would also be great if there could be output to console during the most time-consuming steps to know what is taking such a long time.

    Versions

    3.10, 4.0.2

    enhancement 
    opened by joelostblom 8
  • Error encountered while generating and submitting PDF

    Describe the bug

    Error encountered while generating and submitting PDF: There was an error generating your LaTeX; showing full error message: Failed to run "['xelatex', 'notebook.tex', '-quiet']" command: This is XeTeX, Version 3.141592653-2.6-0.999993 (TeX Live 2022/dev/Debian) (preloaded format=xelatex) restricted \write18 enabled.

    If the error above is related to xeCJK or fandol in LaTeX and you don't require this functionality, try running again with no_xecjk set to True or the --no-xecjk flag.

    To Reproduce

    Steps to reproduce the behavior:

    1. Prepare the assignment notebook
    2. Run otter assign
    3. upload the zip file under autograder to Gradescope
    4. test autograder by submitting a solution notebook

    Expected behavior

    A pdf should be generated and submitted to another Gradescope assignment.

    Versions

    Python: 3.7.4, Otter: 4.2.1, nbconvert: 6.4.4

    Additional context

    Besides the failure mentioned above, I am also wondering how I can set no_xecjk to True. I can do that locally, but it is not one of the configurations in the Assignment Metadata.

    Also I have tried these two commands:

    otter export -v   --no-xecjk .\Homework1_F22_Solution.ipynb dist
    
    otter export -v  --exporter html  .\Homework1_F22_Solution.ipynb dist
    

    They fail with Failed to run "xelatex notebook.tex -quiet" command: and with otter.export.utils.WkhtmltopdfNotFoundError: PDF via HTML indicated but wkhtmltopdf not found, respectively.

    bug 
    opened by ericchouzyb 0
  • Executing otter-grader v4.2.1 for RMD files fails to generate different versions (student, grader) of the RMD master file

    Describe the bug

    Executing the ottr-sample with a new otter installation v.4.0, after executing the following command in the console: otter assign hw01.Rmd dist

    With otter v4.2.1, the process runs silently without any errors (even with the --verbose and --debug flags). However, the resulting Rmd files are identical: no extraction of the R blocks and otter-grader instructions is performed in either generated version.

    To Reproduce

    Steps to reproduce the behavior:

    1. Install otter-grader v4.2.1
    2. Download the otter-sample directory
    3. Execute otter assign hw01.Rmd dist in the otter-sample directory
    4. There is no difference between ./dist/autograder/hw01.Rmd and ./dist/student/hw01.Rmd

    Expected behavior

    If there is any problem converting the Rmd file, the command should report the error.

    Versions

    python = 3.8.5, otter-grader = 4.2.1

    Additional context

    I downgraded otter-grader to v3.3.0, and the parsing of the Rmd document works fine.

    bug 
    opened by opterix 0
  • Can't create raw cells in Deepnote/Colab

    From an instructor who filled out the Otter-Grader Adoption Survey:

    One pain point of Otter is that it requires using Raw cells for denoting start/end of sections. Some notebook tools (Deepnote, Colab, etc) don't allow for the creation of Raw cells, which makes authoring notebooks using Otter v1 impossible.

    enhancement 
    opened by surajrampure 1
  • Multiline statements cause test failures

    Describe the bug

    Having any multiline statement in test cells leads to a test failure because of unmatched parentheses, although the code works fine when executed in the Jupyter notebook.

    For example, a test that looks like this:

    assert y.find(
        'a'
    ) == 1
    

    Will work fine in Jupyter and with otter assign, but when doing otter run it will throw the following error:

    /home/joel/miniconda3/envs/573/lib/python3.10/site-packages/nbformat/__init__.py:128: MissingIDFieldWarning: Code cell is missing an id field, this will become a hard error in future nbformat versions. You may want to use `normalize()` on your notebooks before validations (available since nbformat 5.1.4). Previous versions of nbformat are fixing this issue transparently, and will stop doing so in the future.
      validate(nb)
    Traceback (most recent call last):
      File "/home/joel/miniconda3/envs/573/bin/otter", line 8, in <module>
        sys.exit(cli())
      File "/home/joel/miniconda3/envs/573/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
        return self.main(*args, **kwargs)
      File "/home/joel/miniconda3/envs/573/lib/python3.10/site-packages/click/core.py", line 1055, in main
        rv = self.invoke(ctx)
      File "/home/joel/miniconda3/envs/573/lib/python3.10/site-packages/click/core.py", line 1657, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/home/joel/miniconda3/envs/573/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/home/joel/miniconda3/envs/573/lib/python3.10/site-packages/click/core.py", line 760, in invoke
        return __callback(*args, **kwargs)
      File "/home/joel/miniconda3/envs/573/lib/python3.10/site-packages/otter/cli.py", line 32, in wrapper
        return f(*args, **kwargs)
      File "/home/joel/miniconda3/envs/573/lib/python3.10/site-packages/otter/cli.py", line 65, in assign_cli
        return assign(*args, **kwargs)
      File "/home/joel/miniconda3/envs/573/lib/python3.10/site-packages/otter/assign/__init__.py", line 177, in main
        run_tests(
      File "/home/joel/miniconda3/envs/573/lib/python3.10/site-packages/otter/assign/utils.py", line 216, in run_tests
        raise RuntimeError(f"Some autograder tests failed in the autograder notebook:\n" + \
    RuntimeError: Some autograder tests failed in the autograder notebook:
        q1 results:
            q1 - 1 result:
                ❌ Test case failed
                Trying:
                    assert y.find(
                        'a'
                Expecting nothing
                **********************************************************************
                Line 1, in q1 0
                Failed example:
                    assert y.find(
                        'a'
                Exception raised:
                    Traceback (most recent call last):
                      File "/home/joel/miniconda3/envs/573/lib/python3.10/doctest.py", line 1350, in __run
                        exec(compile(example.source, filename, "single",
                      File "<doctest q1 0[0]>", line 1
                        assert y.find(
                                     ^
                    SyntaxError: '(' was never closed
                Trying:
                    ) == 1
                Expecting nothing
                **********************************************************************
                Line 3, in q1 0
                Failed example:
                    ) == 1
                Exception raised:
                    Traceback (most recent call last):
                      File "/home/joel/miniconda3/envs/573/lib/python3.10/doctest.py", line 1350, in __run
                        exec(compile(example.source, filename, "single",
                      File "<doctest q1 0[1]>", line 1
                        ) == 1
                        ^
                    SyntaxError: unmatched ')'
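
    For context, the traceback shows each chunk of the test being compiled on its own in "single" mode, which is why the opening parenthesis is reported as never closed. A tiny demonstration of that mechanism (illustrative only, not Otter's code):

    # Compiling the two "examples" from the traceback separately reproduces
    # the reported SyntaxErrors: the statement gets split at the unindented line.
    for example in ["assert y.find(\n    'a'", ") == 1"]:
        try:
            compile(example, "<test>", "single")
        except SyntaxError as e:
            print(f"{example!r}: SyntaxError: {e.msg}")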
    

    To Reproduce

    Set up a notebook question with a test cell that has multiple lines (see the example above).

    Expected behavior

    It is useful to be able to split code over multiple lines to make it more readable when doing method chaining and similar, so it would be great if this did not lead to an error but worked, since it is valid Python syntax.

    Versions

    Otter 4.2.0, Python 3.10.x

    bug 
    opened by joelostblom 0
  • Add progress indicator to `otter assign`

    Is your feature request related to a problem? Please describe.

    On long assignments, it can be hard to know whether otter assign is stuck or it is just taking a long time to go through all the questions. Getting some feedback on the grader's progress would be nice.

    Describe the solution you'd like

    It would be convenient if otter assign printed which question it was executing. A fancier version would be to add a progress meter such as https://tqdm.github.io/. Something like this should work well:

    from tqdm import tqdm
    
    ...
    
    question_ids = ...  # all question ids in the assignment
    question_progress_bar = tqdm(question_ids)
    for question_id in question_progress_bar:
        question_progress_bar.set_description(question_id)
        ...
    
    enhancement 
    opened by joelostblom 1
  • Make `# SOLUTION` compatible with multi-line assignment

    Is your feature request related to a problem? Please describe.

    It would be convenient if # SOLUTION tags would work when the assigned expression is split over multiple lines, e.g.

    x = {  # SOLUTION
        'one': 1,
        'two': 2
    }
    

    which currently becomes:

    x = ...
        'one': 1,
        'two': 2
    }
    

    Describe the solution you'd like

    I would like the output of the above to be:

    x = ...
    

    Describe alternatives you've considered

    I am currently doing this, which works but is a bit more typing:

    x = {  # SOLUTION
    # BEGIN SOLUTION NO PROMPT
        'one': 1,
        'two': 2
    }
    # END SOLUTION
    
    enhancement 
    opened by joelostblom 0
Releases (v4.2.1)

Owner

Infrastructure Team at UC Berkeley Data Science Education Program