MachineLearningStocks in python: a starter project and guide

EDIT as of Feb 2021: MachineLearningStocks is no longer actively maintained

MachineLearningStocks is designed to be an intuitive and highly extensible template project applying machine learning to making stock predictions. My hope is that this project will help you understand the overall workflow of using machine learning to predict stock movements, and also appreciate some of its subtleties. And of course, after following this guide and playing around with the project, you should definitely make your own improvements – if you're struggling to think of what to do, at the end of this readme I've included a long list of possibilities: take your pick.

Concretely, we will be cleaning and preparing a dataset of historical stock prices and fundamentals using pandas, after which we will apply a scikit-learn classifier to discover the relationship between stock fundamentals (e.g. PE ratio, debt/equity, float, etc.) and the subsequent annual price change compared with an index. We then conduct a simple backtest, before generating predictions on current data.

While I would not live trade based off of the predictions from this exact code, I do believe that you can use this project as a starting point for a profitable trading system – I have actually used code based on this project to live trade, with pretty decent results (around 20% returns on backtest and 10-15% on live trading).

This project has quite a lot of personal significance for me. It was my first proper python project, one of my first real encounters with ML, and the first time I used git. At the start, my code was rife with bad practice and inefficiency: I have since tried to amend most of this, but please be warned that some minor issues may remain (feel free to raise an issue, or fork and submit a PR). Both the project and myself as a programmer have evolved a lot since the first iteration, but there is always room to improve.

As a disclaimer, this is a purely educational project. Be aware that backtested performance may often be deceptive – trade at your own risk!

MachineLearningStocks predicts which stocks will outperform. But it does not suggest how best to combine them into a portfolio. I have just released PyPortfolioOpt, a portfolio optimisation library which uses classical efficient frontier techniques (with modern improvements) in order to generate risk-efficient portfolios. Generating optimal allocations from the predicted outperformers might be a great way to improve risk-adjusted returns.

This guide has been cross-posted at my academic blog, reasonabledeviations.com

Contents

  • Overview
  • Quickstart
  • Preliminaries
  • Historical data
  • Creating the training dataset
  • Backtesting
  • Current fundamental data
  • Stock prediction
  • Unit testing
  • Where to go from here
  • Contributing

Overview

The overall workflow for using machine learning to make stock predictions is as follows:

  1. Acquire historical fundamental data – these are the features or predictors
  2. Acquire historical stock price data – this will make up the dependent variable, or label (what we are trying to predict).
  3. Preprocess data
  4. Use a machine learning model to learn from the data
  5. Backtest the performance of the machine learning model
  6. Acquire current fundamental data
  7. Generate predictions from current fundamental data

This is a very generalised overview, but in principle this is all you need to build a fundamentals-based ML stock predictor.

EDIT as of 24/5/18

This project uses pandas-datareader to download historical price data from Yahoo Finance. However, in the past few weeks this has become extremely inconsistent – it seems like Yahoo have added some measures to prevent the bulk download of their data. I will try to add a fix, but for now, take note that download_historical_prices.py may be deprecated.

As a temporary solution, I've uploaded stock_prices.csv and sp500_index.csv, so the rest of the project can still function.

EDIT as of October 2019

I expect that after so much time there will be many data issues. To that end, I have decided to upload the other CSV files: keystats.csv (the output of parsing_keystats.py) and forward_sample.csv (the output of current_data.py).

Quickstart

If you want to throw away the instruction manual and play immediately, clone this project, then download and unzip the data file into the same directory. Then, open an instance of terminal and cd to the project's file path, e.g.

cd Users/User/Desktop/MachineLearningStocks

Then, run the following in terminal:

pip install -r requirements.txt
python download_historical_prices.py
python parsing_keystats.py
python backtesting.py
python current_data.py
pytest -v
python stock_prediction.py

Otherwise, follow the step-by-step guide below.

Preliminaries

This project uses python 3.6, and the common data science libraries pandas and scikit-learn. If you are on a version of python 3 earlier than 3.6, you will run into syntax errors wherever f-strings have been used for string formatting. These are fortunately very easy to fix (just rebuild the string using your preferred method), but I do encourage you to upgrade to 3.6 to enjoy the elegance of f-strings. A full list of requirements is included in the requirements.txt file. To install all of the requirements at once, run the following code in terminal:

pip install -r requirements.txt

To get started, clone this project and unzip it. This folder will become our working directory, so make sure you cd your terminal instance into this directory.

Historical data

Data acquisition and preprocessing is probably the hardest part of most machine learning projects. But it is a necessary evil, so it's best to not fret and just carry on.

For this project, we need three datasets:

  1. Historical stock fundamentals
  2. Historical stock prices
  3. Historical S&P500 prices

We need the S&P500 index prices as a benchmark: a 5% stock growth does not mean much if the S&P500 grew 10% in that time period, so all stock returns must be compared to those of the index.

Historical stock fundamentals

Historical fundamental data is actually very difficult to find (for free, at least). Although sites like Quandl do have datasets available, you often have to pay a pretty steep fee.

It turns out that there is a way to parse this data, for free, from Yahoo Finance. I will not go into details, because Sentdex has done it for us. On his page you will be able to find a file called intraQuarter.zip, which you should download, unzip, and place in your working directory. Relevant to this project is the subfolder called _KeyStats, which contains html files that hold stock fundamentals for all stocks in the S&P500 between 2003 and 2013, sorted by stock. However, at this stage, the data is unusable – we will have to parse it into a nice csv file before we can do any ML.

Historical price data

In the first iteration of the project, I used pandas-datareader, an extremely convenient library which can load stock data straight into pandas. However, after Yahoo Finance changed their UI, datareader no longer worked, so I switched to Quandl, which has free stock price data for a few tickers, and a python API. However, as pandas-datareader has been fixed, we will use that instead.

Likewise, we can easily use pandas-datareader to access data for the SPY ticker. Failing that, one could manually download it from yahoo finance, place it into the project directory and rename it sp500_index.csv.
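For reference, the download step is conceptually simple. Below is a minimal sketch using pandas-datareader – the three tickers are stand-ins (the real script downloads every S&P500 ticker), and, per the edits above, the Yahoo endpoint may not work reliably:

import pandas_datareader.data as web

ticker_list = ["AAPL", "MSFT", "GOOG"]  # stand-in; the real script uses all S&P500 tickers

# Daily adjusted close prices for each ticker, in one dataframe
df = web.DataReader(ticker_list, "yahoo", start="2003-01-01", end="2018-01-01")["Adj Close"]
df.to_csv("stock_prices.csv")

# The index benchmark, via the SPY ticker as mentioned above
index_df = web.DataReader("SPY", "yahoo", start="2003-01-01", end="2018-01-01")["Adj Close"]
index_df.to_csv("sp500_index.csv")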

The code for downloading historical price data can be run by entering the following into terminal:

python download_historical_prices.py

Creating the training dataset

Our ultimate goal for the training data is to have a 'snapshot' of a particular stock's fundamentals at a particular time, and the corresponding subsequent annual performance of the stock.

For example, if our 'snapshot' consists of all of the fundamental data for AAPL on the date 28/1/2005, then we also need to know the percentage price change of AAPL between 28/1/05 and 28/1/06. Thus our algorithm can learn how the fundamentals impact the annual change in the stock price.

This is a slight oversimplification, however: what the algorithm will eventually learn is how fundamentals impact the outperformance of a stock relative to the S&P500 index. This is why we also need index data.
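To make this concrete, here is a rough sketch of how such a label could be computed. The function name, the price-series inputs, and the 10% threshold are all illustrative assumptions, not the project's actual code:

import pandas as pd

def outperformance_label(stock_prices, index_prices, snapshot_date, threshold=0.10):
    # Hypothetical helper: 1 if the stock beat the index by `threshold`
    # over the year following the snapshot, else 0
    start = pd.Timestamp(snapshot_date)
    end = start + pd.DateOffset(years=1)
    stock_return = stock_prices.loc[end] / stock_prices.loc[start] - 1
    index_return = index_prices.loc[end] / index_prices.loc[start] - 1
    return int(stock_return - index_return > threshold)

Note that the .loc lookups will fail if either date falls on a weekend or holiday – which is exactly the problem addressed in the next section.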

Preprocessing historical price data

When pandas-datareader downloads stock price data, it does not include rows for weekends and public holidays (when the market is closed).

However, referring to the example of AAPL above, if our snapshot includes fundamental data for 28/1/05 and we want to see the change in price a year later, we will get the nasty surprise that 28/1/2006 is a Saturday. Does this mean that we have to discard this snapshot?

By no means – data is too valuable to callously toss away. As a workaround, I instead decided to 'fill forward' the missing data, i.e. we will assume that the stock price on Saturday 28/1/2006 is equal to the stock price on Friday 27/1/2006.
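In pandas, this forward-fill only takes a couple of lines. A minimal sketch, assuming stock_prices.csv holds prices indexed by trading date:

import pandas as pd

df = pd.read_csv("stock_prices.csv", index_col=0, parse_dates=True)

# Reindex onto a complete daily calendar, then fill weekends and holidays
# with the most recent trading day's price
full_calendar = pd.date_range(start=df.index.min(), end=df.index.max(), freq="D")
df = df.reindex(full_calendar).ffill()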

Features

Below is a list of some of the interesting variables that are available on Yahoo Finance.

Valuation measures

  • Market Cap
  • Enterprise Value
  • Trailing P/E
  • Forward P/E
  • PEG Ratio
  • Price/Sales
  • Price/Book
  • Enterprise Value/Revenue
  • Enterprise Value/EBITDA

Financials

  • Profit Margin
  • Operating Margin
  • Return on Assets
  • Return on Equity
  • Revenue
  • Revenue Per Share
  • Quarterly Revenue Growth
  • Gross Profit
  • EBITDA
  • Net Income Avl to Common
  • Diluted EPS
  • Quarterly Earnings Growth
  • Total Cash
  • Total Cash Per Share
  • Total Debt
  • Total Debt/Equity
  • Current Ratio
  • Book Value Per Share
  • Operating Cash Flow
  • Levered Free Cash Flow

Trading information

  • Beta
  • 50-Day Moving Average
  • 200-Day Moving Average
  • Avg Vol (3 month)
  • Shares Outstanding
  • Float
  • % Held by Insiders
  • % Held by Institutions
  • Shares Short
  • Short Ratio
  • Short % of Float
  • Shares Short (prior month)

Parsing

However, all of this data is locked up in HTML files. Thus, we need to build a parser. In this project, I did the parsing with regex, but please note that generally it is really not recommended to use regex to parse HTML. However, I think regex probably wins out for ease of understanding (this project being educational in nature), and from experience regex works fine in this case.

This is the exact regex used:

r'>' + re.escape(variable) + r'.*?(\-?\d+\.*\d*K?M?B?|N/A[\\n|\s]*|>0|NaN)%?(</td>|</span>)'

While it looks pretty arcane, all it is doing is searching for the first occurrence of the feature (e.g. "Market Cap"), then it looks forward until it finds a number immediately followed by a </td> or </span> (signifying the end of a table entry). There is a short demonstration after the list below. The complexity of the expression above accounts for some subtleties in the parsing:

  • the numbers could be preceded by a minus sign
  • Yahoo Finance sometimes uses K, M, and B as abbreviations for thousand, million, and billion respectively
  • some data are given as percentages
  • some datapoints are missing, so instead of a number we have to look for "N/A" or "NaN"
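Here is the promised demonstration, on a made-up HTML fragment (real Yahoo Finance pages are much messier, and the conversion helper below is a simplified stand-in for the project's own parsing utilities):

import re

variable = "Market Cap"
page = '<td>Market Cap</td><td class="yfnc_tabledata1">1.23B</td>'

regex = r'>' + re.escape(variable) + r'.*?(\-?\d+\.*\d*K?M?B?|N/A[\\n|\s]*|>0|NaN)%?(</td>|</span>)'
value = re.search(regex, page).group(1)  # '1.23B'

def data_string_to_float(s):
    # Simplified stand-in: handle missing values, then expand K/M/B abbreviations
    s = s.strip()
    if s.startswith("N/A") or s == "NaN":
        return float("nan")
    multipliers = {"K": 1e3, "M": 1e6, "B": 1e9}
    if s[-1] in multipliers:
        return float(s[:-1]) * multipliers[s[-1]]
    return float(s)

print(data_string_to_float(value))  # 1230000000.0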

Both the preprocessing of price data and the parsing of keystats are included in parsing_keystats.py. Run the following in your terminal:

python parsing_keystats.py

You should see the file keystats.csv appear in your working directory. Now that we have the training data ready, we are ready to actually do some machine learning.

Backtesting

Backtesting is arguably the most important part of any quantitative strategy: you must have some way of testing the performance of your algorithm before you live trade it.

Despite its importance, I originally did not want to include backtesting code in this repository. The reasons were as follows:

  • Backtesting is messy and empirical. The code is not very pleasant to use, and in practice requires a lot of manual interaction.
  • Backtesting is very difficult to get right, and if you do it wrong, you will be deceiving yourself with high returns.
  • Developing and working with your backtest is probably the best way to learn about machine learning and stocks – you'll see what works, what doesn't, and what you don't understand.

Nevertheless, because of the importance of backtesting, I decided that I can't really call this a 'template machine learning stocks project' without backtesting. Thus, I have included a simplistic backtesting script. Please note that there is a fatal flaw with this backtesting implementation that will result in backtested returns far higher than anything achievable in reality. It is quite a subtle point, but I will let you figure that out :)
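For orientation, here is a stripped-down sketch of what such a backtest looks like. The column names (stock_p_change, SP500_p_change, etc.) are assumptions about keystats.csv rather than guaranteed names, and the sketch deliberately keeps the same naive design as the script:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("keystats.csv", index_col="Date").dropna()

# Assumed layout: fundamental features plus each stock's subsequent return
# and the index's return over the same period (in percentage points)
features = df.columns.drop(["Unix", "Ticker", "Price", "stock_p_change", "SP500", "SP500_p_change"])
y = (df["stock_p_change"] - df["SP500_p_change"] > 10).astype(int)

train_df, test_df, y_train, y_test = train_test_split(df, y, test_size=0.2)

clf = RandomForestClassifier(n_estimators=100)
clf.fit(train_df[features], y_train)
y_pred = clf.predict(test_df[features])

print("Accuracy score: ", accuracy_score(y_test, y_pred))
print("Precision score: ", precision_score(y_test, y_pred))

# Average return of the predicted 'buys' vs the market over the same periods
buys = test_df[y_pred == 1]
print("Average return for stock predictions: ", buys["stock_p_change"].mean())
print("Average market return in the same period: ", buys["SP500_p_change"].mean())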

Run the following in terminal:

python backtesting.py

You should get something like this:

Classifier performance
======================
Accuracy score:  0.81
Precision score:  0.75

Stock prediction performance report
===================================
Total Trades: 177
Average return for stock predictions:  37.8 %
Average market return in the same period:  9.2%
Compared to the index, our strategy earns  28.6 percentage points more

Again, the performance looks too good to be true and almost certainly is.

Current fundamental data

Now that we have trained and backtested a model on our data, we would like to generate actual predictions on current data.

As always, we can scrape the data from good old Yahoo Finance. My method is to literally just download the statistics page for each stock (here is the page for Apple), then to parse it using regex as before.

In fact, the regex should be almost identical, but because Yahoo has changed their UI a couple of times, there are some minor differences. This part of the project has to be fixed whenever Yahoo Finance changes their UI, so if you can't get the project to work, the problem is most likely here.

Run the following in terminal:

python current_data.py

The script will then begin downloading the HTML into the forward/ folder within your working directory, before parsing this data and outputting the file forward_sample.csv. You might see a few miscellaneous errors for certain tickers (e.g. 'Exceeded 30 redirects.'), but this is to be expected.

Stock prediction

Now that we have the training data and the current data, we can finally generate actual predictions. This part of the project is very simple: the only thing you have to decide is the value of the OUTPERFORMANCE parameter (the percentage by which a stock has to beat the S&P500 to be considered a 'buy'). I have set it to 10 by default, but it can easily be modified by changing the variable at the top of the file. Go ahead and run the script:

python stock_prediction.py

You should get something like this:

21 stocks predicted to outperform the S&P500 by more than 10%:
NOC FL SWK NFX LH NSC SCHL KSU DDS GWW AIZ ORLY R SFLY SHW GME DLX DIS AMP BBBY APD
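For reference, the logic inside stock_prediction.py is roughly the following. This is a sketch with assumed column names, not a drop-in replacement for the script:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

OUTPERFORMANCE = 10  # percentage points above the S&P500 required for a 'buy'

train = pd.read_csv("keystats.csv", index_col="Date").dropna()
current = pd.read_csv("forward_sample.csv", index_col="Date").dropna()

# Assumed: both files share the same fundamental feature columns
features = [c for c in current.columns if c not in ("Unix", "Ticker", "Price", "SP500")]
y_train = (train["stock_p_change"] - train["SP500_p_change"] > OUTPERFORMANCE).astype(int)

clf = RandomForestClassifier(n_estimators=100)
clf.fit(train[features], y_train)

buy_mask = clf.predict(current[features]) == 1
print(" ".join(current["Ticker"][buy_mask]))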

Unit testing

I have included a number of unit tests (in the tests/ folder) which serve to check that things are working properly. However, due to the nature of some of this project's functionality (downloading big datasets), you will have to run all the code once before running the tests. Otherwise, the tests themselves would have to download huge datasets (which I don't think is optimal).

I thus recommend that you run the tests after you have run all the other scripts (except, perhaps, stock_prediction.py).

To run the tests, simply enter the following into a terminal instance in the project directory:

pytest -v

Please note that it is not considered best practice to include an __init__.py file in the tests/ directory (see here for more), but I have done it anyway because it is uncomplicated and functional.

Where to go from here

I have stated that this project is extensible, so here are some ideas to get you started and possibly increase returns (no promises).

Data acquisition

My personal belief is that better quality data is THE factor that will ultimately determine your performance. Here are some ideas:

  • Explore the other subfolders in Sentdex's intraQuarter.zip.
  • Parse the annual reports that all companies submit to the SEC (have a look at the Edgar Database)
  • Try to find websites from which you can scrape fundamental data (this has been my solution).
  • Ditch US stocks and go global – perhaps better results may be found in less liquid markets. It'd be interesting to see whether the predictive power of features varies based on geography.
  • Buy Quandl data, or experiment with alternative data.

Data preprocessing

  • Build a more robust parser using BeautifulSoup
  • In this project, I have just ignored any rows with missing data, but this reduces the size of the dataset considerably. Are there any ways you can fill in some of this data?
    • hint: if the PE ratio is missing but you know the stock price and the earnings/share... (see the sketch after this list)
    • hint 2: how different is Apple's book value in March to its book value in June?
  • Some form of feature engineering
    • e.g. calculate Graham's number and use it as a feature
    • some of the features are probably redundant. Why not remove them to speed up training?
  • Speed up the construction of keystats.csv.
    • hint: don't keep appending to one growing dataframe! Split it into chunks
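As a concrete illustration of the first hint above (the column names are assumptions about keystats.csv): a missing trailing P/E can be backed out whenever the price and the diluted EPS are both known.

import pandas as pd

df = pd.read_csv("keystats.csv")  # column names below are assumptions

# P/E = price / earnings-per-share, so it can be reconstructed directly
fillable = df["Trailing P/E"].isna() & (df["Diluted EPS"] > 0)
df.loc[fillable, "Trailing P/E"] = df.loc[fillable, "Price"] / df.loc[fillable, "Diluted EPS"]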

Machine learning

Altering the machine learning stuff is probably the easiest and most fun to do.

  • The most important thing if you're serious about results is to find the problem with the current backtesting setup and fix it. This will likely be quite a sobering experience, but if your backtest is done right, it should mean that any observed outperformance on your test set can be traded on (again, do so at your own discretion).
  • Try a different classifier – there is plenty of research that advocates the use of SVMs, for example. Don't forget that other classifiers may require feature scaling etc.
  • Hyperparameter tuning: use gridsearch to find the optimal hyperparameters for your classifier (see the sketch after this list). But make sure you don't overfit!
  • Make it deep – experiment with neural networks (an easy way to start is with sklearn.neural_network).
  • Change the classification problem into a regression one: will we achieve better results if we try to predict the stock return rather than whether it outperformed?
  • Run the prediction multiple times (perhaps using different hyperparameters?) and select the k most common stocks to invest in. This is especially important if the algorithm is not deterministic (as is the case for Random Forest)
  • Experiment with different values of the outperformance parameter.
  • Should we really be trying to predict raw returns? What happens if a stock achieves a 20% return but does so by being highly volatile?
  • Try to plot the importance of different features to 'see what the machine sees'.
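For the hyperparameter tuning suggestion, a minimal grid search sketch might look like the following (the parameter grid is arbitrary, and train_df, features, and y_train are as in the backtest sketch earlier):

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [None, 5, 10],
    "min_samples_leaf": [1, 5],
}

# 5-fold cross-validated search, scored on precision: we care most about
# the stocks we actually buy being true positives
search = GridSearchCV(RandomForestClassifier(), param_grid, cv=5, scoring="precision")
search.fit(train_df[features], y_train)
print(search.best_params_, search.best_score_)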

Contributing

Feel free to fork, play around, and submit PRs. I would be very grateful for any bug fixes or more unit tests.

This project was originally based on Sentdex's excellent machine learning tutorial, but it has since evolved far beyond that and the code is almost completely different. The complete series is also on his website.


For more content like this, check out my academic blog at reasonabledeviations.com/.

Comments
  • I've got an error when I try to run stock_prediction.py


    Hello,

    I've got the following error when I try to run stock_prediction.py. I already tried on Linux CentOS 7 and Windows 10; my python version is 3.6.5, and I followed all the instructions step by step. The other files run fine.

    [root@customiseta MachineLearningStocks]# python3.6 stock_prediction.py
    Building dataset and predicting stocks...
    Traceback (most recent call last):
      File "stock_prediction.py", line 55, in <module>
        predict_stocks()
      File "stock_prediction.py", line 42, in predict_stocks
        y_pred = clf.predict(X_test)
      File "/usr/lib64/python3.6/site-packages/sklearn/ensemble/forest.py", line 538, in predict
        proba = self.predict_proba(X)
      File "/usr/lib64/python3.6/site-packages/sklearn/ensemble/forest.py", line 578, in predict_proba
        X = self._validate_X_predict(X)
      File "/usr/lib64/python3.6/site-packages/sklearn/ensemble/forest.py", line 357, in _validate_X_predict
        return self.estimators_[0]._validate_X_predict(X, check_input=True)
      File "/usr/lib64/python3.6/site-packages/sklearn/tree/tree.py", line 373, in _validate_X_predict
        X = check_array(X, dtype=DTYPE, accept_sparse="csr")
      File "/usr/lib64/python3.6/site-packages/sklearn/utils/validation.py", line 462, in check_array
        context))
    ValueError: Found array with 0 sample(s) (shape=(0, 41)) while a minimum of 1 is required.

    bug 
    opened by pnaves 26
  • backtest


    I would like to contribute to this project and have read through the readme in detail.

    I have noticed you speak about a fatal flaw in the backtest, what is it? I can work on this and submit a PR.

    opened by gibo 7
  • Adaptation request


    Hello, I luckily ended up on your project as I'm looking at scraping data from Yahoo Finance for a list of quotes (not only S&P500). I was wondering if there was a way to get a part of your script adapted to my needs? i.e. I've got a list of quotes available in a .txt file. I currently use the YahooFinancials python api but I realised that some key figures are missing, such as "Cash, Debt, Levered free cash flow...etc". So far, I'm collecting the data using that custom python script and then dump as a JSON file. Would you be able to help me? Thanks :)

    opened by DebugMeIfYouCan 6
  • Train - test split (already seen samples)


    Hello,

    First of all great work Robert.

    First of all, great work Robert. I found one big mistake (everyone does this) in backtesting.py, row 40: you are using shuffle=True (the default in train_test_split), so when doing i+1 or i+x the target data has already been seen during learning. Because of that, you always get a different result when running backtesting.py. If you change to shuffle=False you will get 45-50% fewer trades, and the accuracy score will drop to 0.6/0.65 max.

    Best

    opened by illUkc 2
  • keystats.csv


    Hello, when I run the command python parsing_keystats.py, keystats.csv is created, but it's empty (except for the Date, unix, ticker etc. columns). I know for sure that the stock prices are downloaded and up to date, as I changed the date to the present. Please can you help me, as I have tried everything to solve this?

    opened by boatmeup 2
  • Test Failure


    I get the error below when running pytest. I'm not sure why this is occurring.

    pytest -vv
    ============================== test session starts ==============================
    platform linux -- Python 3.6.5, pytest-3.4.1, py-1.5.3, pluggy-0.6.0 -- /home/chris/anaconda3/bin/python
    cachedir: .pytest_cache
    rootdir: /home/chris/Documents/Stocks/MachineLearningStocks, inifile:
    plugins: remotedata-0.2.1, openfiles-0.3.0, doctestplus-0.1.3, arraydiff-0.2
    collected 9 items

    tests/test_datasets.py::test_forward_sample_dimensions PASSED [ 11%]
    tests/test_datasets.py::test_forward_sample_data PASSED [ 22%]
    tests/test_datasets.py::test_stock_prices_dataset PASSED [ 33%]
    tests/test_datasets.py::test_stock_prediction_dataset PASSED [ 44%]
    tests/test_utils.py::test_status_calc PASSED [ 55%]
    tests/test_utils.py::test_data_string_to_float PASSED [ 66%]
    tests/test_variables.py::test_statspath PASSED [ 77%]
    tests/test_variables.py::test_features_same FAILED [ 88%]
    tests/test_variables.py::test_outperformance PASSED [100%]

    =================================== FAILURES ====================================
    ______________________________ test_features_same ______________________________

        def test_features_same():
            # There are only four differences (intentionally)
    >       assert set(parsing_keystats.features) - set(current_data.features) == {'Qtrly Revenue Growth', 'Qtrly Earnings Growth',
                                                                                   'Shares Short (as of', 'Net Income Avl to Common'}
    E       AssertionError: assert {'Net Income ...prior month)'} == {'Net Income A...Short (as of'}
    E       Extra items in the left set:
    E       'Shares Short (prior month)'

    tests/test_variables.py:17: AssertionError
    ==================== 1 failed, 8 passed in 15.02 seconds ====================

    bug 
    opened by TardisCoder 2
  • syntax error - download_historical_prices.py


    ubuntu 16.04

    ✘-1 ~/MachineLearningStocks [master|✚ 1…21968] 
    05:02 $ python download_historical_prices.py
      File "download_historical_prices.py", line 35
        print(f"{len(missing_tickers)} tickers are missing: \n {missing_tickers} ")
                                                                                 ^
    SyntaxError: invalid syntax
    
    
    opened by alphaaurigae 2
  • Bump numpy from 1.12.1 to 1.21.0

    An automated Dependabot pull request bumping numpy from 1.12.1 to 1.21.0.

    dependencies 
    opened by dependabot[bot] 1
  • Benchmark data


    Hi Robert,

    I was reviewing your most excellent work earlier and was wondering..

    What index did you use to generate the sp500_index.csv data?

    Was this the S&P 500 (^GSPC), and did you preprocess or scale this data?

    The reason I ask is that the data in the 200-207 range looks on the low side.

    Thanks!

    Fig

    opened by lefig 1
  • Backtesting issue


    Interested in this project and possibly working on it more. Just starting out with ML but I was curious to try and figure out the issue with the backtesting. From what I can tell it is that you are training the model on future data but then making predictions for stocks in the past...

    It seems like the solution would be to first randomly select the year you'd like to predict, and then ensure the split for both training and test is only run on years before that. Just wanted to check and see if I'm right about at least the issue. Feel free to drop me an email (on my profile) if you'd rather talk there, I know you said you want to let other people try and figure it out.

    opened by maxbeyer1 1
  • SET (help wanted - can't find label)


    Was really impressed with this file. Most just look at historical price, but love what you have done with including a deeper assessment of the company.

    I live in Bangkok and follow the Thai market. I can already get Thai historical prices, but how do I change the S&P500 to the SET? (In Yahoo I thought it would be SET.bk, as stocks are .bk, but that does not work.)

    Thanks

    opened by carlgoodier 1
  • new complementary tool


    My name is Luis, I'm a big-data machine-learning developer, I'm a fan of your work, and I usually check your updates.

    I was afraid that my savings would be eaten by inflation, so I have created a powerful tool based on past technical patterns (volatility, moving averages, statistics, trends, candlesticks, support and resistance, stock index indicators) – all the ones you know (RSI, MACD, STOCH, Bollinger Bands, SMA, DEMARK, Japanese candlesticks, ichimoku, fibonacci, williamsR, balance of power, murrey math, etc.) and more than 200 others.

    The tool creates prediction models of correct trading points (buy signal and sell signal, every stock is good traded in time and direction). For this I have used big data tools like pandas python, stock market libraries like: tablib, TAcharts ,pandas_ta... For data collection and calculation. And powerful machine-learning libraries such as: Sklearn.RandomForest , Sklearn.GradientBoosting, XGBoost, Google TensorFlow and Google TensorFlow LSTM.

    With the models trained with the selection of the best technical indicators, the tool is able to predict trading points (where to buy, where to sell) and send real-time alerts to Telegram or Mail. The points are calculated based on the learning of the correct trading points of the last 2 years (including the change to bear market after the rate hike).

    I think it could be useful to you. I would like to share it with you, and if you are interested in improving and collaborating I am also willing – and if not, file it in the box.

    opened by Leci37 1
  • Requests.get() no longer working with Yahoo Finance.


    For the past year, it seems that requests.get() has stopped working with Yahoo Finance.

    @robertmartin8 may you please guide us on how we can get around this error?

    Thanks

    opened by vedantk281007 0
  • Bump numpy from 1.12.1 to 1.22.0

    An automated Dependabot pull request bumping numpy from 1.12.1 to 1.22.0.

    dependencies 
    opened by dependabot[bot] 0
  • What is the best way to get more updated key statistics data?


    Hello,

    I tried using the script, but the data for key statistics provided by Sentdex is a bit outdated. Does anyone know an API or a URL where we can get fresher data? I am willing to submit a PR with this solution if someone provides me with enough info so I can implement it.

    Thanks, Aleksandar Serafimoski

    opened by mopkobot 5
  • Stuck downloading historical prices


    Hello, thank you for writing such an interesting repository. Could you assist me with an issue running the command python download_historical_prices.py? It appears to be stuck at 80% and not proceeding. Thank you!

    opened by potatoFry 2
  • Using yfinance


    Hello, thank you for all the great work done in this repo. I would suggest that you use the yfinance library, which gets the data from Yahoo Finance fairly easily and is much faster and more accommodating than pandas_datareader. It also has a load of other functions that might make your life easier. I would love to contribute to this repo.

    opened by ApurvShah007 0