chainladder - Property and Casualty Loss Reserving in Python

Overview


This package draws inspiration from the popular R ChainLadder package.

This package strives to keep its own API to a minimum: think pandas for data manipulation and scikit-learn for model construction. An actuary already versed in these tools will pick up this package with ease. Save your mental energy for actuarial work.
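The pandas analogy can be made concrete with a toy example. This is plain pandas on hypothetical data, not the library's Triangle class: a run-off triangle is, in spirit, just a pivot of long-format loss data.

```python
import pandas as pd

# Long-format loss data: origin year, development age, cumulative paid
long_df = pd.DataFrame({
    "origin": [2020, 2020, 2020, 2021, 2021, 2022],
    "dev":    [12, 24, 36, 12, 24, 12],
    "paid":   [100, 150, 165, 110, 160, 120],
})

# The familiar run-off triangle is just a pivot -- the kind of pandas
# thinking the package builds on
tri = long_df.pivot(index="origin", columns="dev", values="paid")
print(tri)
```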

Documentation

Please visit the Documentation page for examples, how-tos, and source code documentation.

Available Estimators

chainladder has an ever-growing list of estimators that work seamlessly together:

| Loss Development    | Tail Factors | IBNR Models         | Adjustments        | Workflow          |
|---------------------|--------------|---------------------|--------------------|-------------------|
| Development         | TailCurve    | Chainladder         | BootstrapODPSample | VotingChainladder |
| DevelopmentConstant | TailConstant | MackChainladder     | BerquistSherman    | Pipeline          |
| MunichAdjustment    | TailBondy    | BornhuetterFerguson | ParallelogramOLF   | GridSearch        |
| ClarkLDF            | TailClark    | Benktander          | Trend              |                   |
| IncrementalAdditive |              | CapeCod             |                    |                   |
| CaseOutstanding     |              |                     |                    |                   |
| TweedieGLM          |              |                     |                    |                   |
| DevelopmentML       |              |                     |                    |                   |
| BarnettZehnwirth    |              |                     |                    |                   |

Getting Started Tutorials

Tutorial notebooks are available for download here.

Installation

To install using pip: pip install chainladder

To install using conda: conda install -c conda-forge chainladder

Alternatively, for pre-release functionality, install directly from GitHub: pip install git+https://github.com/casact/chainladder-python/

Note: This package requires Python>=3.5, pandas>=0.23.0, sparse>=0.9, and scikit-learn>=0.23.0.

Questions or Ideas?

Join in on the GitHub discussions. Your question is more likely to get answered here than on Stack Overflow. We're always happy to answer any usage questions or hear ideas on how to make chainladder better.

Want to contribute?

Check out our contributing guidelines.

Comments
  • Independence tests for Chain Ladder

    Independence tests for Chain Ladder

    It is important that some critical assumptions around chain ladder are tested before applying the method. Thomas Mack suggested some tests, for example in "Measuring the variability of Chain Ladder reserve estimates", 1997. Below are implementations for tests of correlation among development factors and for the impact of calendar years. I don't feel confident enough to modify the package code directly but hopefully this can be a useful template.

    import numpy as np
    import pandas as pd

    def developFactorsCorrelation(df):
        # Mack (1997) test for correlation between subsequent development factors
        # The result should lie between -.67 and +.67 standard errors, otherwise there is too much correlation
        m1 = df.rank()  # rank the development factors by column
        m2 = df.to_numpy(copy=True)  # same ranking, but ignoring the anti-diagonal
        np.fill_diagonal(np.fliplr(m2), np.nan)
        m2 = pd.DataFrame(np.roll(m2, 1), columns=m1.columns, index=m1.index).iloc[:, 1:]  # leave out the first column
        m2 = m2.rank()
        numerator = ((m1 - m2) ** 2).sum(axis=0)
        SpearmanFactor = pd.DataFrame(range(1, len(m1.columns) + 1), index=m1.columns, columns=['colNo'])
        I = SpearmanFactor['colNo'].max() + 1
        SpearmanFactor['divisor'] = (I - SpearmanFactor['colNo']) ** 3 - I + SpearmanFactor['colNo']  # (I-k)^3 - I + k
        SpearmanFactor['value'] = 1 - 6 * numerator.T / SpearmanFactor['divisor']
        SpearmanFactor['weighted'] = SpearmanFactor['value'] * (I - SpearmanFactor['colNo'] - 1) / (SpearmanFactor[1:-1]['colNo'] - 1).sum()  # weight sum excludes 1 and I
        SpearmanCorr = SpearmanFactor['weighted'].iloc[1:-1].sum()  # excluding first and last elements as not significant
        SpearmanCorrVar = 2 / ((I - 2) * (I - 3))
        return SpearmanCorr, SpearmanCorrVar
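    As a sanity check on the per-column statistic above, Mack's Spearman coefficient T = 1 - 6·Σ(r-s)²/(n³-n) can be computed directly for one pair of adjacent factor columns. The values below are hypothetical, chosen only to exercise the formula:

    ```python
    import pandas as pd

    # Development factors from two adjacent ages, same origin years
    # (hypothetical values, restricted to origins where both factors exist)
    f1 = pd.Series([1.6, 1.3, 2.0, 1.1, 1.8])
    f2 = pd.Series([1.2, 1.5, 1.1, 1.4, 1.3])

    r = f1.rank()  # ranks of the earlier-age factors
    s = f2.rank()  # ranks of the later-age factors
    n = len(f1)

    # Mack's per-column Spearman statistic
    T = 1 - 6 * ((r - s) ** 2).sum() / (n ** 3 - n)
    print(T)  # -0.8 for these ranks
    ```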
    
    from scipy.stats import binom

    def calendarCorrelation(df, pCritical=.1):
        # Mack (1997) test for calendar year effect
        # A calendar period has impact across developments if the probability of the number of small (or large)
        # development factors in that period occurring randomly is less than pCritical
        # df should have period as the row index, on the assumption that the first anti-diagonal relates to the same period (development=0)
        m1 = df.rank()  # rank the development factors by column
        med = m1.median(axis=0)  # find the median value for each column
        m1large = m1.apply(lambda r: r > med, axis=1)  # True for elements in each column that are large (above the median rank)
        m1small = m1.apply(lambda r: r < med, axis=1)
        m2large = m1large.to_numpy(copy=True)
        m2small = m1small.to_numpy(copy=True)
        S = [np.diag(m2small[:, ::-1], k).sum() for k in range(min(m2small.shape), -1, -1)]  # number of small elements in each anti-diagonal (calendar year)
        L = [np.diag(m2large[:, ::-1], k).sum() for k in range(min(m2large.shape), -1, -1)]  # number of large elements in each anti-diagonal (calendar year)
        probs = [binom.pmf(S[i], S[i] + L[i], 0.5) + binom.pmf(L[i], S[i] + L[i], 0.5) for i in range(len(S))]  # probability of observing this many small or large factors in an anti-diagonal by chance
        return pd.Series([p < pCritical for p in probs[1:]], index=df.index)
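    The per-diagonal probability in calendarCorrelation can be checked by hand. This mirrors the probs computation above for a single hypothetical anti-diagonal, using math.comb in place of scipy so it stands alone:

    ```python
    from math import comb

    def binom_pmf(k, n, p=0.5):
        # Binomial probability mass function
        return comb(n, k) * p ** k * (1 - p) ** (n - k)

    # A hypothetical calendar-year anti-diagonal with 1 small and 5 large factors
    S, L = 1, 5
    prob = binom_pmf(S, S + L) + binom_pmf(L, S + L)
    print(prob)  # 12/64 = 0.1875, above pCritical=0.1, so no calendar effect flagged
    ```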
    

    And using the example in the paper

    MackEx='''1,2,3,4,5,6,7,8,9
    1.6,1.32,1.08,1.15,1.2,1.1,1.033,1,1.01
    40.4,1.26,1.98,1.29,1.13,.99,1.043,1.03,
    2.6,1.54,1.16,1.16,1.19,1.03,1.026,,
    2,1.36,1.35,1.1,1.11,1.04,,,
    8.8,1.66,1.4,1.17,1.01,,,,
    4.3,1.82,1.11,1.23,,,,,
    7.2,2.72,1.12,,,,,,
    5.1,1.89,,,,,,,
    1.7,,,,,,,,
    '''
    from io import StringIO
    df = pd.read_csv(StringIO(MackEx), header=0)
    df.index = df.index+1 #reindex rows from 1
    dfCorr,dfCorrVar = developFactorsCorrelation(df)
    print('Development factors correlation is {:.2%}'.format(dfCorr))
    print('Factor independence if correlation is in range [{:.2%} to {:.2%}]'.format(-.67*np.sqrt(dfCorrVar),  .67*np.sqrt(dfCorrVar)))
    print('Dependence on calendar year:')
    print(calendarCorrelation(df))
    

    Result is

    Development factors correlation is 6.96%
    Factor independence if correlation is in range [-12.66% to 12.66%]
    Dependence on calendar year:
    1    False
    2    False
    3    False
    4    False
    5    False
    6    False
    7    False
    8    False
    9    False
    dtype: bool
    
    Enhancement 
    opened by gig67 26
  • Requirement - Dropping Link Ratios conditionally (based on any input value)

    Requirement - Dropping Link Ratios conditionally (based on any input value)

    Hi! I hope that you all are fine.

    There are several arguments for dropping/discarding problematic link ratios as well as excluding whole valuation periods or highs and lows. Any combination of the ‘drop’ arguments is permissible.

    I would like to ask you that is there any provision/function (in chainladder-python library) to drop link ratios conditionally (based on any input value), please guide me in this regards. Please go through the example below (which we need):

    Example: This is sample link ratio:

    ### Source - Link Ratio (before applying any dropping condition)

    |        | 1      | 2      | 3      | 4      | 5      | 6      | 7      |
    |--------|--------|--------|--------|--------|--------|--------|--------|
    | 2017Q1 | 1.7566 | 1.1415 | 1.0917 | 1.0091 | 1.0062 | 1.0007 | 1.0014 |
    | 2017Q2 | 1.9576 | 1.2615 | 1.0640 | 0.8965 | 1.0008 | 1.0029 |        |
    | 2017Q3 | 2.4820 | 0.9930 | 1.0588 | 1.0491 | 1.0121 |        |        |
    | 2017Q4 | 1.8434 | 1.0787 | 1.0773 | 0.9654 |        |        |        |
    | 2018Q1 | 1.4951 | 0.9120 | 1.0428 |        |        |        |        |
    | 2018Q2 | 2.3292 | 1.1350 |        |        |        |        |        |
    | 2018Q3 | 1.1851 |        |        |        |        |        |        |

    Link Ratio-Cutoff (input value): 0.9999

    Our requirement is: if any value of the above 'Source - Link Ratio' is less than the 'Link Ratio-Cutoff (input value)', that particular value should be dropped, as shown in the 'Target - Link Ratio' below.

    ### Target - Link Ratio (after applying the condition based on the above input value)

    |        | 1      | 2      | 3      | 4      | 5      | 6      | 7      |
    |--------|--------|--------|--------|--------|--------|--------|--------|
    | 2017Q1 | 1.7566 | 1.1415 | 1.0917 | 1.0091 | 1.0062 | 1.0007 | 1.0014 |
    | 2017Q2 | 1.9576 | 1.2615 | 1.0640 |        | 1.0008 | 1.0029 |        |
    | 2017Q3 | 2.4820 |        | 1.0588 | 1.0491 | 1.0121 |        |        |
    | 2017Q4 | 1.8434 | 1.0787 | 1.0773 |        |        |        |        |
    | 2018Q1 | 1.4951 |        | 1.0428 |        |        |        |        |
    | 2018Q2 | 2.3292 | 1.1350 |        |        |        |        |        |
    | 2018Q3 | 1.1851 |        |        |        |        |        |        |
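    The source-to-target masking described above can be sketched in plain pandas. This is a hypothetical workaround on a small slice of the link-ratio frame, not a built-in library feature:

    ```python
    import numpy as np
    import pandas as pd

    # Hypothetical slice of the link-ratio table (rows = origin quarters, columns = ages)
    lr = pd.DataFrame(
        {1: [1.7566, 1.9576], 2: [1.1415, 1.2615], 4: [1.0091, 0.8965]},
        index=["2017Q1", "2017Q2"],
    )

    cutoff = 0.9999
    masked = lr.where(lr >= cutoff)  # set any link ratio below the cutoff to NaN

    # The dropped cells can also be collected as (origin, development) pairs
    dropped = [(lr.index[i], lr.columns[j]) for i, j in zip(*np.where(lr.values < cutoff))]
    print(dropped)
    ```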

    I have tried many times to find such a conditional dropping/omitting link-ratio feature in the chainladder-python documentation, but was unable to. If the feature is already available, kindly guide me in this regard.

    If it is not available and if it is possible, please provide this conditional link-ratio dropping feature at your earliest convenience.

    Best Regards, Muhammad Shamsher Khan

    Enhancement 
    opened by muhammadshamsherkhan2002 24
  • Having some trouble with origin periods

    Having some trouble with origin periods

    Trying to bring in premium data for BF. For some reason the origin type gets set to Q-JUN (the loss triangle was Q-DEC with no problem).

    prem = [1000,1200,1300,1400,1500,1600,1700,1800]
    dates = ["2015-07-01","2015-10-01","2016-01-01","2016-04-01","2016-07-01","2016-10-01","2017-01-01","2017-04-01"]
    premium_df = pd.DataFrame(data={"Premium":prem,"Date":dates})
    premium_df["val"] = "2017-12-31"
    premium_tri = cl.Triangle(
        data=premium_df,
        origin="Date",
        development="val",
        columns="Premium",
        cumulative=False,
    )
    premium_tri.origin
    
    Bug 
    opened by henrydingliu 16
  • Unpredictable behaviour of drop_high

    Unpredictable behaviour of drop_high

    drop_high does some weird stuff sometimes

    prism_df = pd.read_csv(
        "https://raw.githubusercontent.com/casact/chainladder-python/master/chainladder/utils/data/prism.csv"
    )
    prism_df["AccidentDate"] = pd.to_datetime(prism_df["AccidentDate"])
    prism_df["PaymentDate"] = pd.to_datetime(prism_df["PaymentDate"])
    prism_df.loc[(prism_df["AccidentDate"].dt.year > 2008) & (prism_df["PaymentDate"].dt.year > 2012),"Incurred"] = 0
    prism = cl.Triangle(
        data=prism_df,
        origin="AccidentDate",
        development="PaymentDate",
        columns="Incurred",
        cumulative=False,
    )
    prism = prism.grain("OYDY").incr_to_cum()
    prism_dev = cl.Development(drop_high = 1, drop_low = 1).fit_transform(prism)
    print(prism.age_to_age)
    print(prism_dev.age_to_age)
    

    drop_high seems to not work on 12-24, 24-36, 36-48, and 48-60.

    Bug 
    opened by henrydingliu 15
  • Drop High/Drop Low are not working correctly

    Drop High/Drop Low are not working correctly

    I get a different answer depending on whether I fit just the work comp LOB vs all LOBs and then slice work comp. These approaches should yield the same answer, but they don't:

    import chainladder as cl
    clrd = cl.load_sample('clrd').groupby('LOB')['CumPaidLoss'].sum()
    # Passes
    assert cl.Development(n_periods=5).fit(clrd).ldf_.loc['wkcomp'] == cl.Development(n_periods=5).fit(clrd.loc['wkcomp']).ldf_
    # Fails
    assert cl.Development(drop_low=2).fit(clrd).ldf_.loc['wkcomp'] == cl.Development(drop_low=2).fit(clrd.loc['wkcomp']).ldf_
    

    I don't think this ever worked. The reason is that we only have one 2D (origin x development) matrix of the entries to be dropped, and this matrix is shared across all triangles. When dropping using n_periods, drop_valuation, drop_origin, etc., this is fine, but when dropping based on link-ratio values it is not.
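    The diagnosis above can be illustrated with a small numpy sketch (hypothetical numbers): the cell that drop_low=1 should remove sits in a different origin row for each triangle, so one shared 2D mask is necessarily wrong for at least one of them.

    ```python
    import numpy as np

    # Two hypothetical triangles of link ratios (index axis first):
    # 2 LOBs x 2 origins x 2 development columns
    ratios = np.array([
        [[1.9, 1.1],
         [1.5, 1.2]],   # LOB A
        [[1.2, 1.3],
         [1.8, 1.1]],   # LOB B
    ])

    # With drop_low=1, the origin holding the lowest ratio differs by LOB
    # and by development column
    low_origin = ratios.argmin(axis=1)
    print(low_origin)  # differs between the two LOBs

    # A mask carried on the full array flags the right cell per triangle
    mask = ratios == ratios.min(axis=1, keepdims=True)
    ```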

    Bug 
    opened by jbogaardt 14
  • Issue on page /tutorials/data-tutorial.html

    Issue on page /tutorials/data-tutorial.html

    error at the bottom of the page

    plt.plot(
        (model_cl.ultimate_["Paid"]).to_frame().index.astype("string"),
        (model_cl.ultimate_["Paid"] / model_cl.ultimate_["reportedCount"]).to_frame(),
    )
    plt.xticks(np.arange(0, 10 * 12, 12), np.arange(2008, 2018, 1))

    Documentation 
    opened by henrydingliu 14
  • Expected Loss Method bug

    Expected Loss Method bug

    using the example provided in https://chainladder-python.readthedocs.io/en/latest/tutorials/deterministic-tutorial.html#expected-loss-method

    EL_model.full_triangle_ shows negatives in the next diagonal. The bottom triangle should be filled in by allocating the selected IBNR using conditional distributions from the selected dev pattern, rather than doing implied LDF calcs.

    Might also be an issue with Benktander. Will investigate and report back.

    opened by henrydingliu 13
  • Address origin_as_datetime bug

    Address origin_as_datetime bug

    Using try/except to check if origin_as_datetime can be converted

    I am not sure if I am preserving the *args or **kwargs correctly. Can you take a look?

    The following test code works:

    import chainladder as cl
    raa = cl.load_sample("raa")
    cl.Chainladder().fit(raa).ultimate_.to_frame(origin_as_datetime=False)
    cl.Chainladder().fit(raa).ultimate_.to_frame(origin_as_datetime=True) # no warning thrown
    cl.Chainladder().fit(raa).cdf_.to_frame(origin_as_datetime=False)
    cl.Chainladder().fit(raa).cdf_.to_frame(origin_as_datetime=True) # warning thrown
    
    Bug 
    opened by kennethshsu 13
  • Bug in to_frame.origin_as_datetime argument

    Bug in to_frame.origin_as_datetime argument

    Will try to create a new ticket so we can better keep track for patch notes going forward.

    Is there a standardized warning message to use? I was thinking Deprecation warning: in an upcoming version of the package, origin_as_datetime will be defaulted to True in to_frame(...), use origin_as_datetime=False to preserve current setting.

    Bug 
    opened by kennethshsu 13
  • full_triangle_ for BF method

    full_triangle_ for BF method

    BF method doesn't seem to be creating the runoff triangle correctly. In my data, accident months can have volatile reported losses as of the latest valuation date. Using an expected IBNR method can therefore produce volatile LDF's (implied, using ultimate_ / latest.diagonal ). In my particular case, when reported losses are an outlier on the low end, the implied LDF is very high. When I extract the first diagonal from the runoff triangle and divide that into my ultimate*, the resulting LDF's are much more homogeneous than they were the previous months. Assuming my development pattern is relatively benign, there shouldn't be this much smoothing (development) in a one month projection. For example, if the percent of ultimate for month n is estimated to be 90% and for n+1 it's 95%, the LDF shouldn't drop from 3.0 to 1.2, it should drop to 1.6 (only 3.3% of ultimate has been reported for the month in question as of month n). The projected ultimate is only 13.3% and an expectation of 5% being reported next month would yield an expected reported loss of 8.3% and an implied ldf of 1.6.

    Use case here is daily interpolation between month start and month end (i.e., subsequent) LDF's. It's better to interpolate to the expected month-end ldf than the subsequent accident month's ldf, as the latter could produce large swings in your projected ultimate losses during the month.

    *First diagonal from runoff triangle:

    bf_table2 = bf_table.full_triangle_.dev_to_val()
    bf_table3 = bf_table2[bf_table2.valuation == (triangle.valuation_date + relativedelta(months=+1))]
    next_mo_ldf = bf_table.ultimate_ / bf_table3

    for relativedelta, use "from dateutil.relativedelta import relativedelta"
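    The interpolation target can be made concrete with hypothetical figures (not the ones quoted in the report above): the expected month-end LDF is the ultimate divided by the expected reported amount, which sits well below the current implied LDF.

    ```python
    # Hypothetical figures illustrating the expected month-end LDF idea
    ultimate = 100.0   # BF ultimate
    reported = 33.3    # reported to date -> implied LDF of roughly 3.0
    pct_next = 0.05    # expect 5% of ultimate to be reported next month

    implied_ldf = ultimate / reported
    expected_reported = reported + pct_next * ultimate
    # Interpolate toward this, not toward the subsequent accident month's LDF
    next_mo_ldf = ultimate / expected_reported
    print(implied_ldf, next_mo_ldf)
    ```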

    opened by mlbugs 13
  • Requirement - Provision required for Inputting whole Link Ratio

    Requirement - Provision required for Inputting whole Link Ratio

    I hope that you are doing well.

    It is good to see that your team has provided many features to accommodate actuarial teams in the python-chainladder library.

    We keep exploring features from your documentation as and when needed and, fortunately, most of the things are available in the API.

    Nowadays, we are exploring a particular feature in the API but have been unable to find it so far. Our requirement is a provision for inputting a whole modified link ratio for re-processing and generating the subsequent ldf, cdf, and so on. Please go through the example below:

    -Primarily, we will process triangle, link ratio, ldf, cdf and so on as usual based on the input raw data while using python-chainladder library methods:

    for example:

    -In 1st attempt

    1. actuarial_df = pd.read_csv("c:/actuarial-data/actuarialData.csv")
    2. triangle = cl.Triangle(actuarial_df,origin="origin",development="development",columns="values",cumulative=True)
    3. triangle.link_ratio
    4. cl.Development(average="volume").fit(triangle).ldf_
    5. and so on

    -In 2nd attempt or during re-processing (after necessary modification in the link ratio)

    1. input link ratio (like a triangle) linkRatio_df = pd.read_csv("c:/actuarial-data/linkRatioData.csv")

    2. Method required for taking complete link ratio as input link_ratio = cl.Triangle(linkRatio_df,origin="origin",development="development")

    3. and ldf, cdf and so on...

    You are kindly requested to guide me in this regard if the feature is already available. If it is not, and if it is possible, please provide this requirement in your API. We will be very grateful to you.

    Best Regards, Muhammad Shamsher Khan

    opened by muhammadshamsherkhan2002 11
  • [BUG]

    [BUG]

    Describe the bug: applying a cl estimator to another triangle doesn't always work

    To Reproduce

    import chainladder as cl
    clrd_total = cl.load_sample('clrd').sum()[["CumPaidLoss"]]
    clrd = cl.load_sample('clrd').groupby("LOB").sum()[["CumPaidLoss"]]
    est = cl.Chainladder().fit(clrd_total)
    try:
        est_other = est.predict(clrd[clrd["LOB"]=="comauto"])
    except:
        print("this doesn't work")
    est = cl.Chainladder().fit(clrd[clrd["LOB"]=="comauto"])
    est_other = est.predict(clrd_total)
    print(est_other.ultimate_)
    print("but this works")
    

    Expected behavior: the code inside the try should work

    Desktop (please complete the following information):

    • Numpy Version 1.21.6
    • Pandas Version 1.4.4
    • Chainladder Version 0.8.14
    Bug 
    opened by henrydingliu 1
  • [BUG] changing grain with an off-cycle close.

    [BUG] changing grain with an off-cycle close.

    Describe the bug: The Triangle class supports trailing quarters, semesters, and years. However, moving from a trailing triangle grain to a more aggregate grain produces inaccurate results.

    To Reproduce: Steps to reproduce the behavior. Code should be self-contained and runnable against publicly available data. For example:

    import chainladder as cl
    
    prism = cl.load_sample('prism')['Paid'].sum()
    prism = prism[prism.valuation<prism.valuation_date].incr_to_cum() # (limit data to November)
    prism = prism.grain('OQDQ', trailing=True) # OK
    print(prism.origin_close) # OK - latest origin is complete quarter cut-off at November
    prism.grain('OYDY') # Not OK. 
    

    Expected behavior: prism.grain('OYDY') should inherit the trailing property and produce ages 12, 24, 36, etc.

    Bug 
    opened by jbogaardt 0
  • [BUG] origin_as_datetime does nothing

    [BUG] origin_as_datetime does nothing

    Describe the bug: origin_as_datetime does nothing

    To Reproduce: This assertion should not fail

    import chainladder as cl
    import pandas as pd
    assert isinstance(cl.load_sample('raa').to_frame(origin_as_datetime=True).index, pd.DatetimeIndex)
    
    Bug 
    opened by jbogaardt 0
  • [BUG] Zero Loss values in Expected Loss Methods.

    [BUG] Zero Loss values in Expected Loss Methods.

    Describe the bug: I am still seeing zeros in CapeCod IBNR when I shouldn't. This code is against the unreleased master branch.

    To Reproduce: Even though our predicted loss triangle is zero for all entries, we should have some IBNR from the non-zero exposure

    import chainladder as cl
    wc = cl.load_sample('clrd').groupby('LOB')[['CumPaidLoss', 'EarnedPremDIR']].sum().loc['wkcomp']
    
    model = cl.CapeCod().fit(wc['CumPaidLoss'], sample_weight=wc['EarnedPremDIR'].latest_diagonal)
    
    model.predict(
        wc['CumPaidLoss'] * 0,
        sample_weight=wc['EarnedPremDIR'].latest_diagonal
    ).ibnr_
    

    Manually pushing the CapeCod.apriori_ into the BornhuetterFerguson doesn't resolve the issue:

    cl.BornhuetterFerguson().fit(wc['CumPaidLoss']*0, sample_weight=model.apriori_ * wc['EarnedPremDIR'].latest_diagonal).ibnr_
    

    Expected behavior: I expect there to be non-zero IBNR for this workflow.

    Bug 
    opened by jbogaardt 0
  • [BUG] CapeCod predict at different index grain

    [BUG] CapeCod predict at different index grain

    Describe the bug: The chainladder estimators are supposed to broadcast LDFs and aprioris down to a lower grain. For example, one might fit countrywide patterns, but predict at a state level. For BornhuetterFerguson and Chainladder this works.

    import chainladder as cl
    prism = cl.load_sample('prism')[['reportedCount', 'Paid']]
    
    bf_pipe = cl.Pipeline(
        [('dev', cl.Development()),
         ('model', cl.BornhuetterFerguson())]
    )
    # Fit at Line level
    bf_pipe.fit(
        X=prism.groupby('Line')['Paid'].sum(), 
        sample_weight=prism.groupby('Line')['reportedCount'].sum().sum('development'))
    
    # Predict at claim level
    bf_pipe.predict(
        X=prism['Paid'], 
        sample_weight=prism['reportedCount'].sum('development')
    )
    

    To Reproduce: The same code errors out when using the CapeCod estimator:

    import chainladder as cl
    prism = cl.load_sample('prism')[['reportedCount', 'Paid']]
    # CapeCod Fails
    cc_pipe = cl.Pipeline(
        [('dev', cl.Development()),
         ('model', cl.CapeCod())]
    )
    # Fit at Line level
    cc_pipe.fit(
        X=prism.groupby('Line')['Paid'].sum(), 
        sample_weight=prism.groupby('Line')['reportedCount'].sum().sum('development'))
    # Predict at claim level
    cc_pipe.predict(
        X=prism['Paid'], 
        sample_weight=prism['reportedCount'].sum('development'))
    

    Expected behavior: I expect the CapeCod code not to crash.

    Bug 
    opened by jbogaardt 3
  • [BUG] Chainladder doesn't coerce valuation triangle correctly

    [BUG] Chainladder doesn't coerce valuation triangle correctly

    Describe the bug: See offending code below.

    To Reproduce: Steps to reproduce the behavior. Code should be self-contained and runnable against publicly available data. For example:

    import chainladder as cl
    raa = cl.load_sample('raa')
    model = cl.Chainladder().fit(raa)
    # works 
    assert model.predict(raa.latest_diagonal.val_to_dev()).ultimate_ == model.ultimate_
    # doesn't work
    assert model.predict(raa.latest_diagonal).ultimate_ == model.ultimate_
    
    Bug 
    opened by jbogaardt 1
Releases(v0.8.14)
  • v0.8.14(Nov 25, 2022)

    New Contributors

    A big thank you to our new contributors this release

    • @henrydingliu made their first contribution in https://github.com/casact/chainladder-python/pull/349
    • @wleescor made their first contribution in https://github.com/casact/chainladder-python/pull/384

    Bug Fixes

    • Updating drop high logic by @henrydingliu in https://github.com/casact/chainladder-python/pull/349
    • Adding std_residuals_ to the transformer by @henrydingliu in https://github.com/casact/chainladder-python/pull/352
    • add semi-annual key by @wleescor in https://github.com/casact/chainladder-python/pull/384
    • addressed #321 by @kennethshsu in https://github.com/casact/chainladder-python/pull/368
    • Adding 4D support to drop_hi/lo by @henrydingliu in https://github.com/casact/chainladder-python/pull/370
    • Corrected number of days in a year to 365.25 by @kennethshsu in https://github.com/casact/chainladder-python/pull/371
    • #294 and #313 are fixed here by @kennethshsu in https://github.com/casact/chainladder-python/pull/373
    • adding 4D support for drop_above/below by @henrydingliu in https://github.com/casact/chainladder-python/pull/375
    • QoL improvement by @henrydingliu in https://github.com/casact/chainladder-python/pull/381
    • #330 Fix for full triangle per suggestion by @henrydingliu
    • #367 Triangle instantiation issues @jbogaardt

    Documentation

    • Documentation by @kennethshsu in https://github.com/casact/chainladder-python/pull/311
    • Annual meeting prep by @kennethshsu in https://github.com/casact/chainladder-python/pull/335
    • Annual meeting prep by @kennethshsu in https://github.com/casact/chainladder-python/pull/336
    • Annual meeting prep by @kennethshsu in https://github.com/casact/chainladder-python/pull/337
    • addressed #319 and #333 by @kennethshsu in https://github.com/casact/chainladder-python/pull/338
    • Annual meeting prep by @kennethshsu in https://github.com/casact/chainladder-python/pull/339
    • Annual meeting prep by @kennethshsu in https://github.com/casact/chainladder-python/pull/350
    • Fixed paths by @kennethshsu in https://github.com/casact/chainladder-python/pull/351
    • Annual meeting prep by @kennethshsu in https://github.com/casact/chainladder-python/pull/355
    • Annual meeting prep by @kennethshsu in https://github.com/casact/chainladder-python/pull/358
    • Annual meeting prep by @kennethshsu in https://github.com/casact/chainladder-python/pull/360
    • Found some typos by @kennethshsu in https://github.com/casact/chainladder-python/pull/361
    • Corrected some more typos by @kennethshsu in https://github.com/casact/chainladder-python/pull/363
    • Fixed readthedocs build #374 @jbogaardt

    Enhancements

    • Mack correlation test enhancements by @genedan in https://github.com/casact/chainladder-python/pull/342
    • Refine Mack DevelopmentCorrelation class by @genedan in https://github.com/casact/chainladder-python/pull/344
    • Adding Friedland & Mack 97 data sets by @genedan in https://github.com/casact/chainladder-python/pull/347
    • Add Friedland WC Self-Insurer data set. by @genedan in https://github.com/casact/chainladder-python/pull/348
    • Add Friedland gl and xyz disposal rate data sets. by @genedan in https://github.com/casact/chainladder-python/pull/353
    • Add Friedland US Industry Auto - case outstanding technique data set. by @genedan in https://github.com/casact/chainladder-python/pull/357
    • Add Friedland XYZ Insurer - case outstanding technique data set. by @genedan in https://github.com/casact/chainladder-python/pull/359
    • Add 3 datasets by @genedan in https://github.com/casact/chainladder-python/pull/362
    • Add 3 datasets: by @genedan in https://github.com/casact/chainladder-python/pull/366

    Full Changelog: https://github.com/casact/chainladder-python/compare/v0.8.13...v0.8.14

  • v0.8.13(Jun 27, 2022)

    The majority of this release was written by @kennethshsu and contains a first-time contribution from @genedan. Thank you!

    Bug Fixes:

    • #270 Fixed an issue affecting triangle instantiation
    • #251 Fixed a bug in predict method that failed to honor development ages appropriately
    • #274 Addressed a pandas deprecation notice in the heatmap method.
    • #258 Addressed silent error when predicting on a Triangle that is larger than the underlying model.
    • #288 Addressed Bug in to_frame method
    • #283 Addressed Bug in the calculation of LDFs in CaseOutstanding

    Enhancements

    • #293 Expanded functionality drop_high and drop_low arguments of Development estimator.
    • #298, #299, #304, #305, #308 Documentation clean-up.
  • v0.8.12(Mar 8, 2022)

    Bug Fixes:

    • #254 Fixed an undesired mutation when using cl.concat
    • #257 to_frame bug fix on empty triangles
    • #248 to_frame bug and deprecation notice in origin_as_datetime argument.
    • #250 Bug fix in triangle initialization
    • #258 Addressed silent error when predicting on a Triangle that is larger than the underlying model.
    • #261 Addressed a pandas>1.4.0 bug

    Enhancements

    • Introduced ExpectedLoss method

    • #242 Added threshold based dropping of link ratios to Development estimator
    • #158 Triangles can now be instantiated with only one data point.
    • #250 python 3.10 support
    • #260 fillna method added to Triangle class that supports filling with constants or other Triangles.
    • Support for 'H' or 'S' when using half-year grain
  • v0.8.11-beta(Mar 7, 2022)

  • v0.8.11-alpha(Mar 7, 2022)

    Bug Fixes:

    • #239 val_to_dev bug fixed
    • #254 Fixed an undesired mutation when using cl.concat
    • #257 to_frame bug fix on empty triangles
    • #248 to_frame bug and deprecation notice in origin_as_datetime argument.
    • #250 Bug fix in triangle initialization
    • #258 Addressed silent error when predicting on a Triangle that is larger than the underlying model.
    • #261 Addressed a pandas>1.4.0 bug

    Enhancements

    • Introduced ExpectedLoss method
    • #242 Added threshold based dropping of link ratios to Development estimator
    • #158 Triangles can now be instantiated with only one data point.
    • #250 python 3.10 support
    • #260 fillna method added to Triangle class that supports filling with constants or other Triangles.
  • v0.8.10(Dec 4, 2021)

    Enhancements

    • #225 - Added support for mid-year triangles
    • Remove xlcompose as a dependency
    • #233 and #219 - Added more explicit warning when is_cumulative property of triangle is not set.
    • #192 Expanded drop_high and drop_low functionality to include integers and variable length lists. PR courtesy of @kennethshsu.

    Bug fixes

    • #234 Coerce MackChainladder to zero variance when ibnr_ and cdf_ are 0.
    • Fix bug in BarnettZehnwirth.transform that was previously not applying the log-transform correctly
  • v0.8.9(Oct 24, 2021)

    Enhancements

    • #198 Added projection_period to all Tail estimators. This allows for analyzing run-off beyond a one year time horizon.
    • #214 A refreshed docs theme using jupyter-book!
    • #200 Added more flexibility to TailCurve.fit_period to allow boolean lists - similar to Development.drop_high
    • @kennethshsu further improved tutorials

    Bug Fixes

    • #210 Fixed regression in triangle instantiation where grain=='S' is being interpreted as seconds and not semesters.
    • #213 Fixed an unintended Triangle mutation when reassigning columns on a sparse backend.
    • #221 Fixed origin/development broadcasting issue that was causing silent bugs in calculations on malformed triangles.
  • v0.8.8(Sep 13, 2021)

    Enhancements

    • #140 Improved test coverage
    • #126 Relaxed BootstrapODPSample column restriction
    • #207 Improved tutorials

    Bug Fixes

    • #204 Fixed regression in grain
    • #203 Fixed regression in slice
    • Fixed regression in Benktander index broadcasting
    • #205 Fixed CI/CD build
  • v0.8.7(Aug 29, 2021)

    Minor release to fix some chainladder==0.8.6 regressions.

    Bug Fixes

    • Fixed #190 - 0 values getting into heatmap
    • Fixed #191 regression that didn't support earlier versions of pandas
    • Fixed #197 Index broadcasting edge case
    • Fixed valuation_date regression when instantiating Triangles as vectors

    Enhancements

    • #99 Added Semester as an additional grain
    • Added parallel support to GridSearch using n_jobs argument
    • More tutorial improvements from @kennethshsu
  • v0.8.6(Aug 18, 2021)

    Enhancements

    • @kennethshsu improved heatmap shading
    • Support for numpy reductions, e.g. np.sum(cl.load_sample('raa'))
    • @kennethshsu further improved tutorials

    Bug fixes

    • #186 Fixed a bug that disallowed instantiating a full triangle
    • #180 Fixed a bug that mutated the original DataFrame when instantiating a Triangle
    • #182 Eliminated new deprecation warnings from pandas>=1.3
    • #183 Better alignment with pandas on index broadcasting
    • #179 Fixed nan behavior on val_to_dev
    • Implemented TriangleGroupby.__getitem__ to support column selection on groupby operations, aligning with pandas, e.g. cl.load_sample('clrd').groupby('LOB')['CumPaidLoss']
  • v0.8.5(Jul 11, 2021)

    Enhancements

    • #154 - Added a groupby hyperparameter to several more estimators, including the widely used Development estimator. This allows fitting development patterns at a higher grain than the Triangle, all within the estimator or Pipeline.
    • Improved index broadcasting for sparse arrays. Prior to 0.8.5, code like the following would consume an inappropriate amount of memory:
    prism = cl.load_sample('prism') 
    prism / prism.groupby('Line').sum()
    
    • Arithmetic label matching improved for all axes to align more with pandas
    • Added model_diagnostics utility function to be used on fitted estimators.
    • Initial support for dask arrays. Current support is basic, but will eventually allow for distributed computations on massive triangles.
    • Added numpy array protocol support to the Triangle class. Numpy functions can now be called on Triangles. For example:
    np.sin(cl.load_sample('raa'))
    
    • #169 - Made the Triangle tutorial more beginner friendly - courtesy of @kennethshsu

    Bug fixes

    • Better Estimator pickling support when callables are included in estimators.
    • Minor bug fix in grain estimator.
  • v0.8.4(May 9, 2021)

    Enhancements

    • #153 - Introduced a CapeCod groupby parameter that allows for apriori computation at a user-specified grain for granular triangles
    • #154 - Introduced groupby support in key development estimators

    Bug fixes

    • #152 - CapeCod apriori was not handling nans effectively
    • #157 - cum_to_incr() not working as expected on full_triangle_
    • #155 - full_triangle_ cdf broadcasting bug
    • #156 - Unable to inspect sigma_ or std_err_ properties after tail fit
  • v0.8.3(Apr 25, 2021)

    Enhancements

    • #135 - Added .at and .iat slicing and value assignment

    Bug Fixes

    • #144 - Eliminated error when trying to assign a column from a different array backend.
    • #134 - Clearer error handling when attempting to instantiate a triangle with ages instead of date-likes
    • #143 - Reworked full_triangle_ run-off for expected loss methods.
  • v0.8.2(Mar 27, 2021)

    Enhancements

    • #131 - Added a BarnettZehnwirth development estimator
    • #117 - VotingChainladder enhancements to allow for more flexibility in defining weights (courtesy of @cbalona)

    Bug Fixes

    • #130 - Fixed bug in triangle aggregation
    • #134 - Fixed a bug in triangle broadcasting
    • #137 - Fixed valuation_date bug occurring when a partial-year Triangle is instantiated as a vector.
    • #138 - Introduced fix for incompatibility with sparse>=0.12.0
    • #122 - Implemented nightly continuous integration to identify new bugs associated with version bumps of dependencies.
  • v0.8.1(Mar 1, 2021)

    Enhancements

    • Included a truncation_age in the TailClark estimator to replicate examples from the paper
    • #129 Included new development estimators TweedieGLM and DevelopmentML to take advantage of a broader set of (sklearn-compliant) regression frameworks.
    • Included a helper Transformer PatsyFormula to make working with ML algorithms easier.

    Bug fixes

    • #128 Made IncrementalAdditive method comply with the rest of the package API
  • v0.8.0(Feb 16, 2021)

    Enhancements

    • #112 Advanced groupby support.
    • #113 Added reg_threshold argument to TailCurve for finer control of fit. Huge thanks to @brian-13 for the contribution.
    • #115 Introducing the VotingChainladder workflow estimator to allow for averaging multiple IBNR estimators. Huge thanks to @cbalona for the contribution.
    • Introduced CaseOutstanding development estimator
    • #124 Standardized CapeCod functionality to that of Benktander and BornhuetterFerguson

    Bug Fixes

    • #83 Fixed grain issues when using the trailing argument
    • #119 Fixed pickle compatibility issue introduced in v0.7.12
    • #120 Fixed Trend estimator API
    • #121 Fixed numpy==1.20.1 compatibility issue
    • #123 Fixed BerquistSherman estimator
    • #125 Fixed various issues with compatibility of estimators in a Pipeline

    Deprecations

    • #118 Deprecation warnings on xlcompose. It will be removed as a dependency in v0.9.0.
  • v0.7.12(Jan 20, 2021)

  • v0.7.11a(Jan 19, 2021)

    Enhancements

    • #110 Added virtual_column functionality

    Bug fixes

    • Minor bug fix on sort_index not accepting kwargs
    • Minor bug fix in DevelopmentConstant when using a callable

    Misc

    • Bringing release up to parity with docs.
  • v0.7.11(Jan 19, 2021)

    Enhancements

    • #110 Added virtual_column functionality

    Bug fixes

    • Minor bug fix on sort_index not accepting kwargs
    • Minor bug fix in DevelopmentConstant when using a callable

    Misc

    • Bringing release up to parity with docs.
  • v0.7.10(Jan 16, 2021)

    Bug fixes

    • #108 - sample_weight error handling on predict - thank you @cbalona
    • #107 - Latest diagonal of empty triangle now resolves
    • #106 - Improved loc consistency with pandas
    • #105 - Addressed broadcasting error during triangle arithmetic
    • #103 - Fixed Index alignment with triangle arithmetic consistent with pd.Series
  • v0.7.9(Nov 6, 2020)

    Bug fixes

    • #101 Bug where LDF labels were not aligned with underlying LDF array

    Enhancements

    • #66 Allow for on-leveling with the new ParallelogramOLF transformer
    • #98 Allow for more complex trends in estimators with the Trend transformer. Refer to this example on how to apply the new estimators.
  • v0.7.8(Oct 23, 2020)

    Bug Fixes

    • Resolved #87 val_to_dev with malformed triangle

    Enhancements

    • Major overhaul of Triangle internals for better code clarity and more efficiency
    • Made sparse operations more efficient for larger triangles
    • to_frame now works on Triangles that are 3D or 4D. For example clrd.to_frame()
    • Advanced groupby operations supported. For (trivial) example:
    clrd = cl.load_sample('clrd')
    # Split companies with names less than 15 characters vs those above: 
    clrd.groupby(clrd.index['GRNAME'].str.len()<15).sum()
    
  • v0.7.7(Sep 13, 2020)

    Enhancements

    • #97, loc and iloc now support Ellipsis
    • Development can now take a float value for averaging. When a float is used, it corresponds to the weight exponent (delta in Barnett/Zehnwirth). Previously, only the special cases {"regression": 0.0, "volume": 1.0, "simple": 2.0} were supported.
    • Major improvements in slicing performance.

    Bug fixes

    • #96, Fix for TailBase transform
    • #94, n_periods with asymmetric triangles fixed
  • v0.7.6(Aug 27, 2020)

    Enhancements

    • Four Dimensional slicing is now supported.
    clrd = cl.load_sample('clrd')
    clrd.iloc[[0,10, 3], 1:8, :5, :]
    clrd.loc[:'Aegis Grp', 'CumPaidLoss':, '1990':'1994', :48]
    
    • #92 to_frame() now takes optional origin_as_datetime argument for better compatibility with various plotting libraries (Thank you @johalnes )
    tri.to_frame(origin_as_datetime=True)
    

    Bug Fixes

    • Patches to the interaction between sparse and numpy arrays to accommodate more scenarios.
    • Patches to multi-index broadcasting
    • Improved performance of latest_diagonal for sparse backends
    • #91 Bug fix to MackChainladder which errored on asymmetric triangles (Thank you @johalnes for reporting)
  • v0.7.5(Aug 15, 2020)

    Enhancements

    • Enabled multi-index broadcasting.
    clrd = cl.load_sample('clrd')
    clrd / clrd.groupby('LOB').sum()  # LOB alignment works now instead of throwing error
    
    • Added sparse representation of triangles which substantially increases the size limit of in-memory triangles. Check out the new Large Datasets tutorial for details

    Bug fixes

    • Fixed cupy backend which had previously been neglected
    • Fixed xlcompose issue where Period failed when included as a column header
  • v0.7.4(Jul 26, 2020)

    Tiny release.

    Bug Fixes:

    • Fixed a bug where Triangle did not support full accident dates at creation
    • Fixed an inappropriate mutation of the Triangle index

    Enhancements

    • Added head and tail methods to Triangle
    • Prepped Triangle class to support sparse backend
    • Added prism sample dataset for sparse demonstrations and unit tests
  • v0.7.3(Jul 12, 2020)

    Enhancements

    • Improved performance of valuation axis
    • Improved performance of groupby
    • Added sort_index method to Triangle consistent with pandas
    • Allow for fit_predict to be called on a Pipeline estimator

    Bug fixes

    • Fixed issue with Bootstrap process variance where it was being applied more than once
    • Fixed bug where Triangle.index did not ingest numeric columns appropriately.
  • v0.7.2(Jul 2, 2020)

    Bug fixes:

    • #84 - Fixed index slicing that was not compatible with pandas
    • #68 - Fixed arithmetic failures via a substantial reworking of how arithmetic works.
    • JSON IO on sub-triangles now works
    • predict and fit_predict methods added to all IBNR models and now function as expected

    Enhancements:

    • Allow DevelopmentConstant to take on more than one set of patterns by passing in a callable
    • MunichAdjustment does not work when P/I or I/P ratios cannot be calculated. You can now optionally back-fill zero values with the expectation from a simple chainladder so that the Munich adjustment can be performed on sparser triangles.

    Refactors:

    • Performance optimized several triangle functions including slicing and val_to_dev
    • Reduced footprint of ldf_, sigma_, and std_err_ triangles
    • Standardized IBNR model methods
    • Changed cdf_, full_triangle_, full_expectation_, ibnr_ to function-based properties instead of in-memory objects to reduce memory footprint
  • v0.7.1(Jun 22, 2020)

    Enhancements

    • Added heatmap method to Triangle - allows for conditionally formatting a 2D triangle. Useful for detecting link_ratio outliers
    • Introduced BerquistSherman estimator
    • Better error messaging when triangle columns are non-numeric
    • Broadened the functionality of Triangle.trend
    • Allow for nested estimators in to_json. Required addition for the new BerquistSherman method
    • Docs, docs, and more docs.

    Bug Fixes

    • Fixed an inappropriate mutation in MunichAdjustment.transform
    • Triangle column slicing now supports pd.Index objects instead of just lists

    Other

    • Moved BootstrapODPSample to workflow section as it is not a development estimator.
  • v0.7.0(Jun 2, 2020)

    Bug fixes

    • TailBondy now works with multiple (4D) triangles
    • TailBondy computes correctly when earliest_age is selected
    • Sub-triangles now honor index and column slicing of the parent.
    • fit_transform for all tail estimators now correctly propagate all estimator attributes
    • Bondy decay now uses the generalized Bondy formula instead of exponential decay

    Enhancements

    • Every tail estimator now has a tail_ attribute representing the point estimate of the tail
    • Every tail estimator now has an attachment_age parameter to allow for attachment before the end of the triangle
    • TailCurve now has slope_ and intercept_ attributes for diagnostics of the estimator.
    • TailBondy now has an earliest_ldf_ attribute to allow for diagnostics of the estimator.
    • Substantial improvement to the documents on Tails.
    • Introduced the deterministic components of ClarkLDF and TailClark estimators to allow for growth curve selection of development patterns.
Owner
Casualty Actuarial Society
This will serve as a platform for development of tools that prove useful to non-life actuaries.
Casualty Actuarial Society