Sigma Coding YouTube - This is a collection of all the code that can be found on my YouTube channel, Sigma Coding.

Overview

Sigma Coding

Tutorials & Resources

YouTube | Facebook

Support Sigma Coding

Patreon | GitHub Sponsor | Shop Amazon

Table of Contents

Overview

Howdy! My name is Alex, and if you're like me, you enjoy the world of programming. Or maybe you're like I was a few years ago and are taking your first step into this exciting world. The GitHub repository you're currently viewing contains almost all of the code you'll find on my YouTube channel Sigma Coding. Feel free to clone, download, or branch this repository so you can leverage the code I share on my channel.

Because I cover so many different languages on my YouTube channel, I dedicate a folder to each specific language. Right now, I cover the following languages on my channel:

This list is continuously changing, and I do my best to make tutorials engaging, exciting, and most importantly, easy to follow!

Topics

Now, I cover a lot of topics on my channel, and as much as I would like to list them all, I don't want to overload you with a bunch of information. Here is a list of some of my more popular topics:

  • Python:

    • Win32COM The Win32COM library allows us to control the VBA object model from Python.
    • TD Ameritrade API The TD Ameritrade API allows us to stream real-time quote data and execute trades from Python.
    • Interactive Brokers API The Interactive Brokers API allows us to stream real-time quote data and execute trades from Python.
    • Machine Learning I cover different machine learning models ranging from regression to classification.
    • Pythonnet Pythonnet is used to connect Python to something called the CLR (Common Language Runtime), which gives us access to more Windows-specific libraries.
  • VBA:

    • Access VBA In Access, we can store large amounts of data. With Access, we will see how to create tables, query existing tables, and even import and export data to and from Access.
    • Excel VBA In Excel, we do an awful lot, even working with non-standard libraries like ADODB.
    • Outlook VBA In Outlook, we work with email objects and account information.
    • PowerPoint VBA This series covers interacting with PowerPoint objects using VBA, topics like linking OLE objects and formatting slides.
    • Publisher VBA In Publisher, we explore how to create fliers and other media documents for advertising.
    • Word VBA With Word VBA, we see how to manipulate different documents and change their underlying formatting.
  • JavaScript:

    • Office API Learn how to use the new JavaScript API for Microsoft Office.
    • Excel API The Excel API focuses just on the API for Microsoft Excel and the object model associated with it.
    • Word API The Word API focuses just on the API for Microsoft Word and the object model associated with it.
  • TSQL:

    • APIs Learn how to make API requests from Microsoft SQL Server.
    • Excel Learn how to work with Excel Workbooks using T-SQL.
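To give a flavor of the machine-learning topic above, here is a minimal, dependency-free sketch of simple linear regression (ordinary least squares on one feature). The tutorials themselves use full libraries, so treat this as an illustration of the idea rather than the channel's actual code:

```python
# A minimal, dependency-free sketch of simple linear regression
# (ordinary least squares on a single feature).

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x and y, and variance of x, both unnormalized;
    # the normalization cancels in the ratio.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(slope, intercept)  # → 2.0 1.0
```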

Resources

If you ever have a question, would like to suggest a topic, found a mistake or just want some input for a project you can always email me at [email protected]. Additionally, you can find dedicated folders in the repository for resources like documentation.

Links To Other Repositories

Some of my projects are so large that they have dedicated repositories for them. Here is a list of my other repositories:

Support the Channel

Patreon: If you like what you see, then help support the channel and future projects by donating to my Patreon page. I'm always looking to add more content for individuals like yourself; unfortunately, some of the APIs I cover require me to pay monthly fees.

Hire Me: If you have a project you think I can help you with, feel free to reach out at [email protected] or fill out the contract request form.

Disclosures: I am a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to Amazon.com. Full Disclosure: I will earn a commission if you purchase from the Shop Amazon link, more details are provided below.

Comments
  • TD Standard API.Py : issue with refresh token


    Alex:

    Thank you for the great YouTube tutorial about how to access TD Ameritrade accounts via the API. I have a question about some code I have been playing with.

    The authentication returns two tokens: one that expires in 30 minutes and one that expires in 3 months (the refresh_token).

    Your code shows how to access the functionality by using the access token. I saved my access token and refresh token in an environment file.

    After 30 minutes, the access token expires. How do I access the functionality with the refresh_token?

    Do I have to modify this? headers = {'Authorization': "Bearer {}".format(access_token)}

    The documentation states this about refresh_tokens: To request a new access token, make a Post Access Token request with your refresh token using the following parameter values:

    grant_type: refresh_token
    refresh_token: {REFRESH TOKEN}
    client_id: {Consumer Key}
    

    Not sure how to implement this. Thank you!
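    For what it's worth, the documentation parameters quoted above map onto a small sketch like the following. The token endpoint URL is an assumption based on TD Ameritrade's public OAuth2 docs, and the helper names are hypothetical; verify against the current documentation before relying on it:

```python
# Hypothetical sketch of the refresh flow described above. The token
# endpoint URL follows TD Ameritrade's public OAuth2 documentation.
import requests

TOKEN_URL = "https://api.tdameritrade.com/v1/oauth2/token"

def build_refresh_payload(refresh_token, client_id):
    # The three parameters quoted from the documentation above.
    return {
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
    }

def refresh_access_token(refresh_token, client_id):
    # POST the form-encoded payload; the JSON response carries a new
    # access_token good for another 30 minutes.
    response = requests.post(
        TOKEN_URL, data=build_refresh_payload(refresh_token, client_id)
    )
    response.raise_for_status()
    return response.json()["access_token"]
```

    The existing header line itself does not change; you simply rebuild it with the fresh token: headers = {'Authorization': "Bearer {}".format(new_access_token)}.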

    opened by lmsanch 2
  • Endpoint for Positions


    On the TD Ameritrade developer page documentation, it's referred to as the accounts endpoint. I can't seem to find where to access it in your library. Any help is appreciated. Thanks!

    opened by dsrDevelopment 1
  • Task exception was never retrieved for Data Streaming.


    Hi Alex,

    I ran your code as you have written it. When I run the code at the top, I always see the print message "Connection established. Client correctly connected". When I run the code, I might see a response for LevelOne_futures_options, Active_Nasdaq, Quote, or all of them combined, followed by the statements "Connection with server closed" and "Task exception was never retrieved", and then an error message: RuntimeError: cannot call recv while another coroutine is already waiting for the next message. I have attached screenshots of the error messages.

    Other times, the code works continuously as intended without interruption.

    Would you know what this issue can be attributed to (my internet connection, a bug on the server, or a bug in the websockets) and, if possible, how to get around it?

    Screenshots Screen Shot 2020-12-08 at 6 50 22 PM Screen Shot 2020-12-08 at 6 49 17 PM Screen Shot 2020-12-08 at 6 49 50 PM

    • OS: macOS
    • Browser: Chrome
    • Version: [e.g. 22]
    opened by AlphaAmongBetas 1
  • is file_htm an xml or htm file?


    I think the code mistakenly tries to parse an HTML file. Here are a few lines from the raw code:

    # Define the file path.
    file_htm = sec_directory.joinpath('fb-09302019x10q.htm').resolve()
    file_cal = sec_directory.joinpath('fb-20190930_cal.xml').resolve()
    file_lab = sec_directory.joinpath('fb-20190930_lab.xml').resolve()
    file_def = sec_directory.joinpath('fb-20190930_def.xml').resolve()
    
    

    The first file is the path for an HTML file, but I think the parser is configured for XML files. Perhaps that is why the code gives me the full structure in the CSV file, but no values!
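    One way to make the parser match the file type is to key it off the extension. This is a hypothetical sketch (the parser names are BeautifulSoup's standard ones), not the repository's actual code:

```python
# Hypothetical sketch: choose a BeautifulSoup parser based on the file
# extension, so the .htm filing is parsed as HTML and the _cal/_lab/_def
# linkbases are parsed as XML.
from pathlib import Path

def parser_for(path):
    # XBRL linkbases (.xml) need an XML parser; parsing the .htm filing
    # with an XML parser can yield the document structure with no values,
    # as described above.
    return "lxml-xml" if Path(path).suffix == ".xml" else "lxml"

print(parser_for("fb-09302019x10q.htm"))  # → lxml
print(parser_for("fb-20190930_cal.xml"))  # → lxml-xml
```

    You would then call, for example, BeautifulSoup(file_htm.read_text(), parser_for(file_htm)).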

    opened by snayeri 1
  • Cannot run the tdameritrade code; getting an error


    I installed ChromeDriver 81.0.4044.69 and I am using Chrome 81.0.4044.92 (Official Build) (64-bit). I also installed splinter 13.0 and added chromedriver to the environment path. I can find chromedriver in cmd mode.

    But when I run the code, I get an error:

        Traceback (most recent call last):
          File "C:/Users/XXX/AppData/Roaming/JetBrains/PyCharm2020.1/scratches/Test.py", line 16, in <module>
            browser = Browser('chrome', **executable_path, headless=False)
          File "C:\Users\XXX\AppData\Roaming\Python\Python36\site-packages\splinter\browser.py", line 90, in Browser
            return get_driver(driver, *args, **kwargs)
          File "C:\Users\XXX\AppData\Roaming\Python\Python36\site-packages\splinter\browser.py", line 68, in get_driver
            raise e
        UnboundLocalError: local variable 'e' referenced before assignment

    Do you know what I am missing?

    opened by baomx888 1
  • Web Scraping SEC - EDGAR Queries.ipynb


    Hi

    This part of the code triggers an error: IndexError: list index out of range

    • Web Scraping SEC - EDGAR Queries.ipynb
    • Section Two: Parse the Response for the Document Details -In [63]:

    filing_date = cols[3].text.strip()
    filing_numb = cols[4].text.strip()

    does this happen for anyone else as well?

    thx and amazing job!!!
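    The IndexError usually means the table contains rows with fewer cells than expected (header or spacer rows). As a hedged sketch, guarding on the column count before indexing skips those rows; the helper name is hypothetical, not from the notebook:

```python
# Hypothetical sketch: skip table rows that don't have enough cells to
# carry filing data before indexing positions 3 and 4.
def parse_row(cell_texts):
    # cell_texts is the list of cell strings for one table row; rows
    # with fewer than five cells are header/spacer rows and are skipped.
    if len(cell_texts) < 5:
        return None
    return {
        "filing_date": cell_texts[3].strip(),
        "filing_numb": cell_texts[4].strip(),
    }

print(parse_row(["a", "b", "c", " 2020-01-02 ", " 001-123 "]))
print(parse_row(["spacer"]))  # → None
```

    In the notebook itself, you would apply the same length check to the cols list before calling cols[3].text and cols[4].text.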

    opened by PSause 1
  • Not able to scrape page contents in a loop


    I need to scrape the contents of around 250 10-K filings from 2019. When I run the code while looping through the list of 250 URLs, it works only for the first URL; for the subsequent ones, it throws an AttributeError on the description method. Any help would be appreciated!!

    opened by ajitsingh98 0
  • How do I change the connection

    How do I change the connection "Connection:="TEXT;C:\Users\Alex\Desktop\SalesData.txt"," to a different module?

    https://github.com/areed1192/sigma_coding_youtube/blob/a4e8df97856ad224664254fa726ac56cbee4dc57/vba/vba-excel/data-imports/Import%20Text%20File%20-%20Non%20Power%20Query.bas#L6

    opened by OzBlake 1
  • 403 Forbidden


    Describe the bug

    Need user agent as explained in https://github.com/jadchaar/sec-edgar-downloader/issues/77.

    headers = {"User-Agent": "Company Name [email protected]"}
    response = requests.get(TEXT_URL, headers=headers)

    if response.status_code == 200:
        content_html = response.content.decode("utf-8")
        soup = BeautifulSoup(content_html, 'lxml')
    else:
        print(f"HTML from {TEXT_URL} failed with status {response.status_code}")
    
    opened by oonisim 0
  • Error in SEC Scraper


    Describe the bug Encounter an error when grabbing the Filing XML Summary (referred to as the "Second Block").

    To Reproduce Steps to reproduce the behavior:

    1. First Block

    # Import our libraries.
    import requests
    import pandas as pd
    from bs4 import BeautifulSoup

    2. Second Block

    # Define the base url needed to create the file url.
    base_url = r"https://www.sec.gov"

    # Convert a normal url to a document url.
    normal_url = r"https://www.sec.gov/Archives/edgar/data/106040/000010604020000024/0000106040-20-000024.txt"
    normal_url = normal_url.replace('-','').replace('.txt','/index.json')

    # Define a url that leads to a 10k document landing page.
    documents_url = r"https://www.sec.gov/Archives/edgar/data/106040/000010604020000024/index.json"

    # Request the url and decode it.
    content = requests.get(documents_url).json()

    for file in content['directory']['item']:

        # Grab the filing summary and create a new url leading to the file so we can download it.
        if file['name'] == 'FilingSummary.xml':

            xml_summary = base_url + content['directory']['name'] + "/" + file['name']

            print('-' * 100)
            print('File Name: ' + file['name'])
            print('File Path: ' + xml_summary)
    
    3. See error

    JSONDecodeError                           Traceback (most recent call last)
    in
         10
         11 # request the url and decode it.
    ---> 12 content = requests.get(documents_url).json()
         13
         14 for file in content['directory']['item']:

    C:\ProgramData\Miniconda2\envs\tensorflow\lib\site-packages\requests\models.py in json(self, **kwargs)
        898     # used.
        899     pass
    --> 900     return complexjson.loads(self.text, **kwargs)
        901
        902     @property

    C:\ProgramData\Miniconda2\envs\tensorflow\lib\json\__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
        352     parse_int is None and parse_float is None and
        353     parse_constant is None and object_pairs_hook is None and not kw):
    --> 354     return _default_decoder.decode(s)
        355     if cls is None:
        356     cls = JSONDecoder

    C:\ProgramData\Miniconda2\envs\tensorflow\lib\json\decoder.py in decode(self, s, _w)
        337
        338     """
    --> 339     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
        340     end = _w(s, end).end()
        341     if end != len(s):

    C:\ProgramData\Miniconda2\envs\tensorflow\lib\json\decoder.py in raw_decode(self, s, idx)
        355     obj, end = self.scan_once(s, idx)
        356     except StopIteration as err:
    --> 357     raise JSONDecodeError("Expecting value", s, err.value) from None
        358     return obj, end

    JSONDecodeError: Expecting value: line 1 column 1 (char 0)

    Expected behavior

    File Name: FilingSummary.xml File Path: https://www.sec.gov/Archives/edgar/data/106040/000010604020000024/FilingSummary.xml

    Screenshots Not Applicable.

    Additional context For context, it's 50/50 whether it works. Sometimes when I run it, it successfully returns the File Name and File Path; other times I get the JSON decode error and have to restart the kernel and run it all again. By the way, I am a big fan. Are you working on any projects recently?
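    The intermittent failure is consistent with SEC EDGAR throttling, which returns an HTML error page that then fails JSON decoding. A hedged workaround is to retry with a pause (and send a User-Agent header, which EDGAR requires); the helper below is a hypothetical sketch, not code from the repository:

```python
import time

def get_json_with_retry(fetch, attempts=3, delay=1.0):
    # fetch is any zero-argument callable returning parsed JSON, e.g.
    #   lambda: requests.get(documents_url, headers=headers).json()
    # On a decode error (json.JSONDecodeError subclasses ValueError),
    # wait a little longer each time and try again.
    last_err = None
    for attempt in range(attempts):
        try:
            return fetch()
        except ValueError as err:
            last_err = err
            time.sleep(delay * (attempt + 1))
    raise last_err
```

    With requests, you would pass something like lambda: requests.get(documents_url, headers={'User-Agent': 'Sample Company name@example.com'}).json(), where the company name and email are placeholders for your own.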

    opened by imakemoneymoves 0
  • 'NoneType' object has no attribute 'find_all'


    Describe the bug


    AttributeError                            Traceback (most recent call last)
    in
         13
         14 # loop through each report in the 'myreports' tag but avoid the last one as this will cause an error.
    ---> 15 for report in reports.find_all('report')[:-1]:
         16
         17     # let's create a dictionary to store all the different parts we need.

    AttributeError: 'NoneType' object has no attribute 'find_all'

    Expected behavior Returns the report dictionary.

    Side Note Also, generally, when I run the scraper in Jupyter Notebook it is very buggy, and I have to run the "Grab the Filing XML Summary" block multiple times. Do you think this could be due to the SEC throttling our requests?
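    Since reports comes from a find(...) lookup, it is None whenever that tag wasn't located, again consistent with a throttled or failed download. A hypothetical guard (not the repository's actual code) avoids the crash so the caller can retry instead:

```python
# Hypothetical sketch: return an empty list instead of crashing when
# the 'myreports' tag lookup came back as None.
def extract_reports(reports_tag):
    if reports_tag is None:
        # The FilingSummary request likely failed or was throttled;
        # the caller can retry the download instead of crashing here.
        return []
    # Skip the last report, as in the original loop.
    return reports_tag.find_all("report")[:-1]
```

    In the notebook, the equivalent change is checking reports is not None before entering the for loop.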

    opened by imakemoneymoves 0
Owner
Alex Reed
My background is in finance, but I stumbled upon programming and have been doing it ever since.