An MNIST-like fashion product database, with benchmarks below.


Fashion-MNIST



Fashion-MNIST is a dataset of Zalando's article images—consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.

Here's an example of how the data looks (each class takes three rows):

Why we made Fashion-MNIST

The original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. "If it doesn't work on MNIST, it won't work at all", they said. "Well, if it does work on MNIST, it may still fail on others."

To Serious Machine Learning Researchers

Seriously, we are talking about replacing MNIST. Here are some good reasons:

Get the Data

Many ML libraries already include Fashion-MNIST data/API, give it a try!

You can use direct links to download the dataset. The data is stored in the same format as the original MNIST data.

| Name | Content | Examples | Size | Link | MD5 Checksum |
| --- | --- | --- | --- | --- | --- |
| train-images-idx3-ubyte.gz | training set images | 60,000 | 26 MBytes | Download | 8d4fb7e6c68d591d4c3dfef9ec88bf0d |
| train-labels-idx1-ubyte.gz | training set labels | 60,000 | 29 KBytes | Download | 25c81989df183df01b3e8a0aad5dffbe |
| t10k-images-idx3-ubyte.gz | test set images | 10,000 | 4.3 MBytes | Download | bef4ecab320f06d8554ea6380940ec79 |
| t10k-labels-idx1-ubyte.gz | test set labels | 10,000 | 5.1 KBytes | Download | bb300cfdad3c16e7a12a480ee83cd310 |
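
After downloading, it is worth verifying that the files arrived intact by comparing their MD5 checksums against the table above. A minimal sketch in Python (md5sum here is an illustrative helper, and the path assumes the files sit under data/fashion):

import hashlib

def md5sum(path):
    # stream the file in 1 MB chunks and return its hex MD5 digest
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1 << 20), b''):
            h.update(chunk)
    return h.hexdigest()

print(md5sum('data/fashion/train-images-idx3-ubyte.gz'))
# should print 8d4fb7e6c68d591d4c3dfef9ec88bf0d if the download is intact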

Alternatively, you can clone this GitHub repository; the dataset appears under data/fashion. This repo also contains some scripts for benchmark and visualization.

git clone git@github.com:zalandoresearch/fashion-mnist.git

Labels

Each training and test example is assigned to one of the following labels:

| Label | Description |
| --- | --- |
| 0 | T-shirt/top |
| 1 | Trouser |
| 2 | Pullover |
| 3 | Dress |
| 4 | Coat |
| 5 | Sandal |
| 6 | Shirt |
| 7 | Sneaker |
| 8 | Bag |
| 9 | Ankle boot |
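
For convenience in downstream code, the mapping above can be kept as a plain Python list indexed by label (class_names is an illustrative name, not something shipped with the dataset):

# index i gives the human-readable name for label i
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
print(class_names[9])  # Ankle boot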

Usage

Loading data with Python (requires NumPy)

Use utils/mnist_reader in this repo:

import mnist_reader
X_train, y_train = mnist_reader.load_mnist('data/fashion', kind='train')
X_test, y_test = mnist_reader.load_mnist('data/fashion', kind='t10k')
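
The reader returns each image as a flattened row of 784 (28x28) pixel values. As a quick sanity check after running the snippet above, the shapes should come out roughly as follows (a sketch, assuming the files in data/fashion are the ones listed in the table):

# X_train: 60,000 flattened 28x28 images; y_train: integer labels 0-9
print(X_train.shape, y_train.shape)  # expected: (60000, 784) (60000,)
print(X_test.shape, y_test.shape)    # expected: (10000, 784) (10000,)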

Loading data with Tensorflow

Make sure you have downloaded the data and placed it in data/fashion. Otherwise, Tensorflow will download and use the original MNIST.

from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/fashion')

data.train.next_batch(BATCH_SIZE)

Note that Tensorflow supports passing a source URL to read_data_sets. You may use:

data = input_data.read_data_sets('data/fashion', source_url='http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/')

Also, an official Tensorflow tutorial on using tf.keras, a high-level API, to train Fashion-MNIST can be found here.
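
With a recent TensorFlow installation, tf.keras can also load Fashion-MNIST directly as a built-in dataset; a minimal sketch (note that this downloads its own copy rather than reading data/fashion):

import tensorflow as tf

# returns uint8 arrays: images of shape (N, 28, 28) and labels of shape (N,)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]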

Loading data with other machine learning libraries

To date, the following libraries have included Fashion-MNIST as a built-in dataset. Therefore, you don't need to download Fashion-MNIST by yourself. Just follow their API and you are ready to go.

You are welcome to make pull requests to other open-source machine learning packages to improve their support for the Fashion-MNIST dataset.

Loading data with other languages

As one of the Machine Learning community's most popular datasets, MNIST has inspired people to implement loaders in many different languages. You can use these loaders with the Fashion-MNIST dataset as well. (Note: may require decompressing first.) To date, we haven't yet tested all of these loaders with Fashion-MNIST.
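
If a loader expects raw IDX files, the gzip archives can also be parsed directly; a minimal Python sketch (read_idx_images and read_idx_labels are illustrative helpers that rely on the IDX layout: a 16-byte header for image files and an 8-byte header for label files):

import gzip
import numpy as np

def read_idx_images(path):
    # skip the 16-byte IDX header, then read the 28x28 uint8 images
    with gzip.open(path, 'rb') as f:
        return np.frombuffer(f.read(), np.uint8, offset=16).reshape(-1, 28, 28)

def read_idx_labels(path):
    # skip the 8-byte IDX header, then read the uint8 labels
    with gzip.open(path, 'rb') as f:
        return np.frombuffer(f.read(), np.uint8, offset=8)

X_test = read_idx_images('data/fashion/t10k-images-idx3-ubyte.gz')
y_test = read_idx_labels('data/fashion/t10k-labels-idx1-ubyte.gz')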

Benchmark

We built an automatic benchmarking system based on scikit-learn that covers 129 classifiers (but no deep learning) with different parameters. Find the results here.

You can reproduce the results by running benchmark/runner.py. We recommend building and deploying this Dockerfile.
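
For a sense of what one entry in such a sweep looks like, here is a rough scikit-learn sketch (an illustration only; the actual classifier grid and parameters are defined in benchmark/runner.py, and mnist_reader is assumed to be importable from utils/):

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
import mnist_reader  # utils/mnist_reader from this repo

X_train, y_train = mnist_reader.load_mnist('data/fashion', kind='train')
X_test, y_test = mnist_reader.load_mnist('data/fashion', kind='t10k')

clf = RandomForestClassifier(n_estimators=100)  # one of many classifiers the runner sweeps over
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))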

You are welcome to submit your benchmark; simply create a new issue and we'll list your results here. Before doing that, please make sure it does not already appear in this list. Visit our contributor guidelines for additional details.

The table below collects the submitted benchmarks. Note that we haven't yet tested these results. You are welcome to validate the results using the code provided by the submitter. Test accuracy may differ due to the number of epochs, batch size, etc. To correct this table, please create a new issue.

| Classifier | Preprocessing | Fashion test accuracy | MNIST test accuracy | Submitter | Code |
| --- | --- | --- | --- | --- | --- |
| 2 Conv+pooling | None | 0.876 | - | Kashif Rasul | 🔗 |
| 2 Conv+pooling | None | 0.916 | - | Tensorflow's doc | 🔗 |
| 2 Conv+pooling+ELU activation (PyTorch) | None | 0.903 | - | @AbhirajHinge | 🔗 |
| 2 Conv | Normalization, random horizontal flip, random vertical flip, random translation, random rotation. | 0.919 | 0.971 | Kyriakos Efthymiadis | 🔗 |
| 2 Conv <100K parameters | None | 0.925 | 0.992 | @hardmaru | 🔗 |
| 2 Conv ~113K parameters | Normalization | 0.922 | 0.993 | Abel G. | 🔗 |
| 2 Conv+3 FC ~1.8M parameters | Normalization | 0.932 | 0.994 | @Xfan1025 | 🔗 |
| 2 Conv+3 FC ~500K parameters | Augmentation, batch normalization | 0.934 | 0.994 | @cmasch | 🔗 |
| 2 Conv+pooling+BN | None | 0.934 | - | @khanguyen1207 | 🔗 |
| 2 Conv+2 FC | Random horizontal flips | 0.939 | - | @ashmeet13 | 🔗 |
| 3 Conv+2 FC | None | 0.907 | - | @Cenk Bircanoğlu | 🔗 |
| 3 Conv+pooling+BN | None | 0.903 | 0.994 | @meghanabhange | 🔗 |
| 3 Conv+pooling+2 FC+dropout | None | 0.926 | - | @Umberto Griffo | 🔗 |
| 3 Conv+BN+pooling | None | 0.921 | 0.992 | @gchhablani | 🔗 |
| 5 Conv+BN+pooling | None | 0.931 | - | @Noumanmufc1 | 🔗 |
| CNN with optional shortcuts, dense-like connectivity | standardization+augmentation+random erasing | 0.947 | - | @kennivich | 🔗 |
| GRU+SVM | None | 0.888 | 0.965 | @AFAgarap | 🔗 |
| GRU+SVM with dropout | None | 0.897 | 0.988 | @AFAgarap | 🔗 |
| WRN40-4 8.9M params | standard preprocessing (mean/std subtraction/division) and augmentation (random crops/horizontal flips) | 0.967 | - | @ajbrock | 🔗 🔗 |
| DenseNet-BC 768K params | standard preprocessing (mean/std subtraction/division) and augmentation (random crops/horizontal flips) | 0.954 | - | @ajbrock | 🔗 🔗 |
| MobileNet | augmentation (horizontal flips) | 0.950 | - | @苏剑林 | 🔗 |
| ResNet18 | Normalization, random horizontal flip, random vertical flip, random translation, random rotation. | 0.949 | 0.979 | Kyriakos Efthymiadis | 🔗 |
| GoogleNet with cross-entropy loss | None | 0.937 | - | @Cenk Bircanoğlu | 🔗 |
| AlexNet with Triplet loss | None | 0.899 | - | @Cenk Bircanoğlu | 🔗 |
| SqueezeNet with cyclical learning rate, 200 epochs | None | 0.900 | - | @snakers4 | 🔗 |
| Dual path network with wide resnet 28-10 | standard preprocessing (mean/std subtraction/division) and augmentation (random crops/horizontal flips) | 0.957 | - | @Queequeg | 🔗 |
| MLP 256-128-100 | None | 0.8833 | - | @heitorrapela | 🔗 |
| VGG16 26M parameters | None | 0.935 | - | @QuantumLiu | 🔗 🔗 |
| WRN-28-10 | standard preprocessing (mean/std subtraction/division) and augmentation (random crops/horizontal flips) | 0.959 | - | @zhunzhong07 | 🔗 |
| WRN-28-10 + Random Erasing | standard preprocessing (mean/std subtraction/division) and augmentation (random crops/horizontal flips) | 0.963 | - | @zhunzhong07 | 🔗 |
| Human Performance | Crowd-sourced evaluation of human (with no fashion expertise) performance. 1000 randomly sampled test images, 3 labels per image, majority labelling. | 0.835 | - | Leo | - |
| Capsule Network 8M parameters | Normalization, shift of at most 2 pixels, and horizontal flip | 0.936 | - | @XifengGuo | 🔗 |
| HOG+SVM | HOG | 0.926 | - | @subalde | 🔗 |
| XgBoost | scaling the pixel values to mean=0.0 and var=1.0 | 0.898 | 0.958 | @anktplwl91 | 🔗 |
| DENSER | - | 0.953 | 0.997 | @fillassuncao | 🔗 🔗 |
| Dyra-Net | Rescale to unit interval | 0.906 | - | @Dirk Schäfer | 🔗 🔗 |
| Google AutoML | 24 compute hours (higher quality) | 0.939 | - | @Sebastian Heinz | 🔗 |
| Fastai | Resnet50+Fine-tuning+Softmax on last layer's activations | 0.9312 | - | @Sayak | 🔗 |

Other Explorations of Fashion-MNIST

Fashion-MNIST: Year in Review

Fashion-MNIST on Google Scholar

Generative adversarial networks (GANs)

Clustering

Video Tutorial

Machine Learning Meets Fashion by Yufeng G @ Google Cloud


Introduction to Kaggle Kernels by Yufeng G @ Google Cloud


动手学深度学习 (Dive into Deep Learning) by Mu Li @ Amazon AI


Apache MXNet으로 배워보는 딥러닝 (Learning Deep Learning with Apache MXNet) - 김무현 (AWS Solutions Architect)


Visualization

t-SNE on Fashion-MNIST (left) and original MNIST (right)

PCA on Fashion-MNIST (left) and original MNIST (right)

UMAP on Fashion-MNIST (left) and original MNIST (right)

PyMDE on Fashion-MNIST (left) and original MNIST (right)
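
Similar 2-D embeddings can be produced with off-the-shelf tools; a small t-SNE sketch on a test-set subset (illustrative settings, not necessarily those behind the figures above, and assuming utils/mnist_reader is importable):

import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
import mnist_reader  # utils/mnist_reader from this repo

X, y = mnist_reader.load_mnist('data/fashion', kind='t10k')
emb = TSNE(n_components=2, init='pca').fit_transform(X[:2000] / 255.0)  # embed 2,000 test images in 2-D
plt.scatter(emb[:, 0], emb[:, 1], c=y[:2000], cmap='tab10', s=3)
plt.show()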

Contributing

Thanks for your interest in contributing! There are many ways to get involved; start with our contributor guidelines and then check these open issues for specific tasks.

Contact

To discuss the dataset, please use Gitter.

Citing Fashion-MNIST

If you use Fashion-MNIST in a scientific publication, we would appreciate references to the following paper:

Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms. Han Xiao, Kashif Rasul, Roland Vollgraf. arXiv:1708.07747

Biblatex entry:

@online{xiao2017/online,
  author       = {Han Xiao and Kashif Rasul and Roland Vollgraf},
  title        = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms},
  date         = {2017-08-28},
  year         = {2017},
  eprintclass  = {cs.LG},
  eprinttype   = {arXiv},
  eprint       = {cs.LG/1708.07747},
}

Who is citing Fashion-MNIST?

License

The MIT License (MIT) Copyright © [2017] Zalando SE, https://tech.zalando.com

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Comments
  • GRU+SVM+DROPOUT+LR-DECAY


    GRU+SVM+DROPOUT+LR-DECAY in TF

    https://github.com/mpekalski/zalando/blob/master/GRU%2BSVM%2BDROPOUT%2BLR-DECAY.ipynb

    100 epochs, test accuracy: 0.9841001033782959

    No preprocessing.

    opened by mpekalski 11
  • The download link of t10k-images-idx3-ubyte.gz is wrong


    I downloaded t10k-images-idx3-ubyte.gz from the download link provided in the README, but its md5sum is 9fb629c4189551a2d022fa330f9573f3, which is not the same as the value given in the README. I deleted it and re-downloaded it five times, and it is still wrong.

    wontfix 
    opened by leftthomas 10
  • Suggestion: Rename the repository from MNIST to something else


    Someone commented about this issue on Reddit (pasted below) and I think you should seriously consider changing the name of the benchmark to something else while it's still early on.

    MNIST stands for "Modified National Institute of Standards and Technology", and the "National Institute of Standards and Technology" might not be too happy with their name being used. Call it something else, especially since it's an entirely new dataset and not a modification/extension of the original NIST dataset.

    opened by rhiever 10
  • Benchmark: 2 conv avg pool + 1 fc


    No preprocessing. See source code for exact network config.

    Fashion-MNIST test accuracy: 97.39%. Digit-MNIST test accuracy: 99.13%.

    Source code: https://github.com/rfratila/Vulcan/blob/master/train_mnist_conv.py

    Built with Lasagne and Theano

    opened by rfratila 7
  • Mislabeled Instances Found


    Hi Everybody,

    In a recent publication of mine, we surveyed popular datasets and looked into finding mislabeled instances. For Fashion-MNIST, we found a large number of mislabeled / incorrect instances (i.e. where the automated cutting failed). See Table 7 on the last page. It contains 64 instances, but we found a lot more. While this may or may not be dramatic for training, it may be disadvantageous for evaluation (since it skews accuracy scores).

    If you're interested in fixing / looking into this, let me know.

    Best, Nicolas

    I added some examples (taken directly from the paper). The heading indicates the label (which is incorrect as far as I can see) and the instance number in the training set.

    45592 16691 28264 33982 40513 42018

    opened by mueller91 5
  • Validation Performance


    Hello there, regarding these MLP results: the author's README says that 90% is the validation performance, not the test performance.

    MLP 256-128-64 | None | 0.900 | - | @lianghong

    opened by heitorrapela 5
  • MNIST-Fashion-CNN


    https://github.com/abelusha/MNIST-Fashion-CNN/blob/master/MNIST_Fashon_CNN_using_Keras.ipynb

    Preprocessing: normalization. The results and the Keras-based architecture (model summary) are shown in the notebook linked above.

    benchmark 
    opened by abelusha 5
  • Plain 9 layers CNN for Benchmark


    I have several experimental results with different activation functions and learning rates, using a CNN architecture like this: C(3,32)-C(3,32)-P2-C(3,64)-C(3,64)-P2-FC64-FC64-S10

    | Activation | Learning Rate | MNIST | Fashion-MNIST |
    | --- | --- | --- | --- |
    | RELU | 0.01 | 0.9874 | 0.9883 |
    | RELU | 0.001 | 0.9388 | 0.9368 |
    | SELU | 0.01 | 0.9871 | 0.9819 |
    | SELU | 0.001 | 0.9490 | 0.8202 |

    For now, my best results are 98.74% and 98.83% for MNIST and Fashion-MNIST, respectively. The train-val curves can be found in my repository: https://github.com/JMingKuo/fashion-mnist

    benchmark 
    opened by JMingKuo 5
  • BenchMark: CNN with 2 Conv Layers. Accuracy on FashionMNIST Dataset: 99.2% and on MNIST dataset: 99.1%


    The model details are as follows:

    Preprocessing by calculating mean and std beforehand
    Trained using Cross Entropy Loss and Adam Optimizer.
    
    The layers in sequence are:
    
    Convolutional layer with 6 feature maps of size 5 x 5
    BatchNorm layer followed by ReLU activation.
    Average Pooling layer of size 2 x 2.
    Convolutional layer with 16 feature maps of size 5 x 5
    BatchNorm layer followed by ReLU activation.
    Average Pooling layer of size 2 x 2.
    Fully connected layer of input size 400 and output size 120 followed by ReLU activation.
    Fully connected layer of input size 120 and output size 84 followed by ReLU activation.
    Fully connected layer of input size 84 and output size 10.
    

    Accuracy achieved on the Fashion-MNIST test dataset is 99.2%; accuracy achieved on the MNIST test dataset is 99.1%.

    This network has been implemented in PyTorch. The code can be found here
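
    For reference, a PyTorch sketch of the layer sequence described above (the padding of the first convolution is an assumption chosen so the flattened size works out to 400; the submitter's actual code is in the linked repository):

    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(1, 6, kernel_size=5, padding=2),   # 6 feature maps, 5x5 kernels; padding assumed
        nn.BatchNorm2d(6), nn.ReLU(),
        nn.AvgPool2d(2),                             # 28x28 -> 14x14
        nn.Conv2d(6, 16, kernel_size=5),             # 16 feature maps, 5x5 kernels
        nn.BatchNorm2d(16), nn.ReLU(),
        nn.AvgPool2d(2),                             # 10x10 -> 5x5
        nn.Flatten(),                                # 16 * 5 * 5 = 400
        nn.Linear(400, 120), nn.ReLU(),
        nn.Linear(120, 84), nn.ReLU(),
        nn.Linear(84, 10),
    )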

    opened by nouman-10 4
  • Clustering performance


    @hanxiao I wonder what's the clustering performance of the state-of-the-art clustering algorithms on fashion-mnist. I tested my algorithm and got an accuracy of 0.59 and NMI of 0.63. Have you collected other clustering results? Thanks.

    benchmark 
    opened by XifengGuo 4
  • Benchmark: Wide ResNet 28-10 + Random Erasing, Top-1 accuracy: 96.35%


    Hi, we achieve 95.99% top-1 accuracy using WRN-28-10 on Fashion-MNIST with standard preprocessing (mean/std subtraction/division) and augmentation (random crops/horizontal flips).

    When using WRN-28-10 + Random Erasing, it gives 96.35% top-1 accuracy.

    The code will be available soon on github.

    benchmark 
    opened by zhunzhong07 4
  • Incremental learning


    Hi everyone! I have a project using Fashion-MNIST, and I want to use incremental learning with it. Could you please tell me whether this is possible or not? Thank you!

    opened by Frogleim 1
  • Convolutional network reaches a mean accuracy of 0.9765


    import argparse
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.optim as optim
    from torchvision import datasets, transforms

    class ConvBlocck(nn.Module):
        def __init__(self, inchannel, outchannel, kernel_size=3, stride=1):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(inchannel, outchannel, 1, 1),
                nn.BatchNorm2d(outchannel),
                nn.GELU(),
            )
            self.conv1 = nn.Sequential(
                nn.Conv2d(outchannel, outchannel, kernel_size=kernel_size,
                          padding=kernel_size // 2, stride=stride, groups=outchannel),
                nn.BatchNorm2d(outchannel),
                nn.GELU(),
            )
            self.kernel_size = kernel_size
            self.stride = stride

        def forward(self, x):
            out = self.conv(x)
            out = out + self.conv1(out)
            return out

    class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.conv1 = ConvBlocck(1, 20, 5, 1)
            self.conv2 = ConvBlocck(20, 50, 5, 1)
            self.conv3 = nn.Sequential(
                ConvBlocck(50, 100, 5, 1),
                ConvBlocck(100, 100, 7, 1),
            )
            self.conv4 = nn.Sequential(
                ConvBlocck(100, 200, 5, 1),
                ConvBlocck(200, 200, 5, 1),
            )
            self.fc1 = nn.Linear(200, 100)
            self.fc2 = nn.Linear(100, 10)

        def forward(self, x):
            x = self.conv1(x)
            x = F.max_pool2d(x, 2, 2)
            x = self.conv2(x)
            x = F.max_pool2d(x, 2, 2)
            x = self.conv3(x)
            x = self.conv4(x)
            x = F.avg_pool2d(x, kernel_size=7, stride=1, padding=0)
            x = x.view(-1, 200)
            x = F.relu(self.fc1(x))
            x = self.fc2(x)
            return F.log_softmax(x, dim=1)

    def train(args, model, device, train_loader, optimizer, epoch):
        model.train()
        for batch_idx, (data, target) in enumerate(train_loader):
            data, target = data.to(device), target.to(device)
            optimizer.zero_grad()
            output = model(data)
            loss = F.nll_loss(output, target)
            loss.backward()
            optimizer.step()
            if batch_idx % args.log_interval == 0:
                print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                    epoch, batch_idx * len(data), len(train_loader.dataset),
                    100. * batch_idx / len(train_loader), loss.item()))

    def test(args, model, device, test_loader):
        model.eval()
        test_loss = 0
        correct = 0
        with torch.no_grad():
            for data, target in test_loader:
                data, target = data.to(device), target.to(device)
                output = model(data)
                test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up batch loss
                pred = output.argmax(dim=1, keepdim=True)  # get the index of the max log-probability
                correct += pred.eq(target.view_as(pred)).sum().item()

        test_loss /= len(test_loader.dataset)

        print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
            test_loss, correct, len(test_loader.dataset),
            100. * correct / len(test_loader.dataset)))

    def main():
        # Training settings
        parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
        parser.add_argument('--batch-size', type=int, default=128, metavar='N',
                            help='input batch size for training (default: 64)')
        parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
                            help='input batch size for testing (default: 1000)')
        parser.add_argument('--epochs', type=int, default=140, metavar='N',
                            help='number of epochs to train (default: 10)')
        parser.add_argument('--lr', type=float, default=0.1, metavar='LR',
                            help='learning rate (default: 0.01)')
        parser.add_argument('--momentum', type=float, default=0.9, metavar='M',
                            help='SGD momentum (default: 0.5)')
        parser.add_argument('--no-cuda', action='store_true', default=False,
                            help='disables CUDA training')
        parser.add_argument('--seed', type=int, default=1, metavar='S',
                            help='random seed (default: 1)')
        parser.add_argument('--log-interval', type=int, default=10, metavar='N',
                            help='how many batches to wait before logging training status')
        parser.add_argument('--save-model', action='store_true', default=False,
                            help='For Saving the current Model')
        args = parser.parse_args()

        use_cuda = not args.no_cuda and torch.cuda.is_available()
        torch.manual_seed(args.seed)
        device = torch.device("cuda" if use_cuda else "cpu")
        kwargs = {'num_workers': 4, 'pin_memory': True} if use_cuda else {}

        train_loader = torch.utils.data.DataLoader(
            datasets.FashionMNIST('./fashionmnist_data/', train=True, download=False,
                                  transform=transforms.Compose([
                                      transforms.RandomCrop(28, padding=4),
                                      transforms.RandomHorizontalFlip(),
                                      transforms.ToTensor(),
                                      transforms.Normalize((0.1307,), (0.3081,)),
                                      # transforms.RandomErasing(p=0.5, scale=(0, 0.4), ratio=(0.5, 2), value='random'),
                                  ])),
            batch_size=args.batch_size, shuffle=True, **kwargs)
        test_loader = torch.utils.data.DataLoader(
            datasets.FashionMNIST('./fashionmnist_data/', train=True,  # note: this evaluates on the training split
                                  transform=transforms.Compose([
                                      transforms.ToTensor(),
                                      transforms.Normalize((0.1307,), (0.3081,))
                                  ])),
            batch_size=args.test_batch_size, shuffle=True, **kwargs)

        model = Net().to(device)
        optimizer = optim.AdamW(model.parameters(), eps=1e-8, betas=(0.9, 0.99), lr=5e-4, weight_decay=5e-2)
        scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[50, 120], gamma=0.1)
        for epoch in range(1, args.epochs + 1):
            train(args, model, device, train_loader, optimizer, epoch)
            test(args, model, device, test_loader)

        if args.save_model:
            torch.save(model.state_dict(), "mnist_cnn.pt")

    if __name__ == '__main__':
        main()


    Just run it with python -u main.py.

    Some log output:

    Test set: Average loss: 0.0810, Accuracy: 58119/60000 (97%)

    Train Epoch: 138 [0/60000 (0%)] Loss: 0.103322 Train Epoch: 138 [1280/60000 (2%)] Loss: 0.124087 Train Epoch: 138 [2560/60000 (4%)] Loss: 0.105898 Train Epoch: 138 [3840/60000 (6%)] Loss: 0.114831 Train Epoch: 138 [5120/60000 (9%)] Loss: 0.055228 Train Epoch: 138 [6400/60000 (11%)] Loss: 0.057790 Train Epoch: 138 [7680/60000 (13%)] Loss: 0.077030 Train Epoch: 138 [8960/60000 (15%)] Loss: 0.104552 Train Epoch: 138 [10240/60000 (17%)] Loss: 0.098626 Train Epoch: 138 [11520/60000 (19%)] Loss: 0.095885 Train Epoch: 138 [12800/60000 (21%)] Loss: 0.066495 Train Epoch: 138 [14080/60000 (23%)] Loss: 0.053589 Train Epoch: 138 [15360/60000 (26%)] Loss: 0.092867 Train Epoch: 138 [16640/60000 (28%)] Loss: 0.116169 Train Epoch: 138 [17920/60000 (30%)] Loss: 0.107934 Train Epoch: 138 [19200/60000 (32%)] Loss: 0.116899 Train Epoch: 138 [20480/60000 (34%)] Loss: 0.095697 Train Epoch: 138 [21760/60000 (36%)] Loss: 0.112671 Train Epoch: 138 [23040/60000 (38%)] Loss: 0.075007 Train Epoch: 138 [24320/60000 (41%)] Loss: 0.083380 Train Epoch: 138 [25600/60000 (43%)] Loss: 0.136541 Train Epoch: 138 [26880/60000 (45%)] Loss: 0.098393 Train Epoch: 138 [28160/60000 (47%)] Loss: 0.156382 Train Epoch: 138 [29440/60000 (49%)] Loss: 0.120168 Train Epoch: 138 [30720/60000 (51%)] Loss: 0.102728 Train Epoch: 138 [32000/60000 (53%)] Loss: 0.093192 Train Epoch: 138 [33280/60000 (55%)] Loss: 0.067673 Train Epoch: 138 [34560/60000 (58%)] Loss: 0.118263 Train Epoch: 138 [35840/60000 (60%)] Loss: 0.063559 Train Epoch: 138 [37120/60000 (62%)] Loss: 0.107007 Train Epoch: 138 [38400/60000 (64%)] Loss: 0.097562 Train Epoch: 138 [39680/60000 (66%)] Loss: 0.067643 Train Epoch: 138 [40960/60000 (68%)] Loss: 0.119229 Train Epoch: 138 [42240/60000 (70%)] Loss: 0.153711 Train Epoch: 138 [43520/60000 (72%)] Loss: 0.103719 Train Epoch: 138 [44800/60000 (75%)] Loss: 0.120675 Train Epoch: 138 [46080/60000 (77%)] Loss: 0.092273 Train Epoch: 138 [47360/60000 (79%)] Loss: 0.148049 Train Epoch: 138 [48640/60000 (81%)] Loss: 0.096311 Train Epoch: 138 [49920/60000 (83%)] Loss: 0.067373 Train Epoch: 138 [51200/60000 (85%)] Loss: 0.084663 Train Epoch: 138 [52480/60000 (87%)] Loss: 0.149150 Train Epoch: 138 [53760/60000 (90%)] Loss: 0.069273 Train Epoch: 138 [55040/60000 (92%)] Loss: 0.050591 Train Epoch: 138 [56320/60000 (94%)] Loss: 0.059370 Train Epoch: 138 [57600/60000 (96%)] Loss: 0.132310 Train Epoch: 138 [58880/60000 (98%)] Loss: 0.084755

    Test set: Average loss: 0.0648, Accuracy: 58591/60000 (98%)

    opened by liubo0902 0
  • Loading the dataset from the local path using tensorflow 2.0


    I downloaded the dataset from a source and placed it in an arbitrary path, and I found some people having trouble loading the dataset from a local path using tensorflow 2.0. The API tf.keras.datasets.fashion_mnist.load_data() does not seem to support loading data locally.

    I wrote a new function that may help to solve this issue, and I hope it can help somebody in need. I don't know whether the issue is big enough for a pull request, so I am opening an issue and posting my code here. I hope I won't cause any inconvenience.

    The code of the new function:

    import os
    import numpy as np
    import gzip
    def load_data_fromlocalpath(input_path):
      """Loads the Fashion-MNIST dataset.
      Modified by Henry Huang in 2020/12/24.
      We assume that the input_path should in a correct path address format.
      We also assume that potential users put all the four files in the path.
    
      Load local data from path 'input_path'.
    
      Returns:
          Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.
      """
      files = [
          'train-labels-idx1-ubyte.gz', 'train-images-idx3-ubyte.gz',
          't10k-labels-idx1-ubyte.gz', 't10k-images-idx3-ubyte.gz'
      ]
    
      paths = []
      for fname in files:
        paths.append(os.path.join(input_path, fname))  # The location of the dataset.
    
    
      with gzip.open(paths[0], 'rb') as lbpath:
        y_train = np.frombuffer(lbpath.read(), np.uint8, offset=8)
    
      with gzip.open(paths[1], 'rb') as imgpath:
        x_train = np.frombuffer(
            imgpath.read(), np.uint8, offset=16).reshape(len(y_train), 28, 28)
    
      with gzip.open(paths[2], 'rb') as lbpath:
        y_test = np.frombuffer(lbpath.read(), np.uint8, offset=8)
    
      with gzip.open(paths[3], 'rb') as imgpath:
        x_test = np.frombuffer(
            imgpath.read(), np.uint8, offset=16).reshape(len(y_test), 28, 28)
    
      return (x_train, y_train), (x_test, y_test)
    

    When calling this function:

    (x_train,y_train),(x_test,y_test)=load_data_fromlocalpath('Your path')
    
    opened by henryhuanghenry 3
  • Interactive Visualization of the dataset


    Hi team, thanks for making this wonderful dataset.

    Just want to share that I made a web-based 3D interactive explorer of all 70k images. It uses UMAP to embed the images into a 3D space where users can freely navigate, or jump directly to a specific image by id.

    Here is the link to the demo. I am wondering whether this would be a nice addition to the current exploration examples, and I am happy to make a PR if appropriate.

    Best

    opened by stwind 0
  • Benchmark: E2E-3M Accuracy Result of 95.92% for Fashion-MNIST


    Dear Fashion-MNIST Creator,

    Thank you for your great contribution to our research society.

    With E2E-3M, we have achieved a competitive result of 95.92% accuracy. More details can be found at the following links:

    https://arxiv.org/abs/2007.15161 https://github.com/leonlha/e2e-3m

    Cheers! Phong

    opened by leonlha 0