datasets

OpenMLDataFeature

Data Feature (a.k.a. Attribute) object.

Parameters:

Name Type Description Default
index int

The index of this feature

required
name str

Name of the feature

required
data_type str

can be nominal, numeric, string, date (corresponds to arff)

required
nominal_values list(str)

list of the possible values, in case of nominal attribute

required
number_missing_values int

Number of rows that have a missing value for this feature.

required
ontologies list(str)

list of ontologies attached to this feature. An ontology describes the concepts that are described by the feature. An ontology is defined by a URL where the information is provided.

None
Source code in openml/datasets/data_feature.py
class OpenMLDataFeature:
    """
    Data Feature (a.k.a. Attribute) object.

    Parameters
    ----------
    index : int
        The index of this feature
    name : str
        Name of the feature
    data_type : str
        can be nominal, numeric, string, date (corresponds to arff)
    nominal_values : list(str)
        list of the possible values, in case of nominal attribute
    number_missing_values : int
        Number of rows that have a missing value for this feature.
    ontologies : list(str)
        list of ontologies attached to this feature. An ontology describes the
        concepts that are described by the feature. An ontology is defined by a
        URL where the information is provided.
    """

    LEGAL_DATA_TYPES: ClassVar[Sequence[str]] = ["nominal", "numeric", "string", "date"]

    def __init__(  # noqa: PLR0913
        self,
        index: int,
        name: str,
        data_type: str,
        nominal_values: list[str],
        number_missing_values: int,
        ontologies: list[str] | None = None,
    ):
        if not isinstance(index, int):
            raise TypeError(f"Index must be `int` but is {type(index)}")

        if data_type not in self.LEGAL_DATA_TYPES:
            raise ValueError(
                f"data type should be in {self.LEGAL_DATA_TYPES!s}, found: {data_type}",
            )

        if data_type == "nominal":
            if nominal_values is None:
                raise TypeError(
                    "Dataset features require attribute `nominal_values` for nominal "
                    "feature type.",
                )

            if not isinstance(nominal_values, list):
                raise TypeError(
                    "Argument `nominal_values` is of wrong datatype, should be list, "
                    f"but is {type(nominal_values)}",
                )
        elif nominal_values is not None:
            raise TypeError("Argument `nominal_values` must be None for non-nominal feature.")

        if not isinstance(number_missing_values, int):
            msg = f"number_missing_values must be int but is {type(number_missing_values)}"
            raise TypeError(msg)

        self.index = index
        self.name = str(name)
        self.data_type = str(data_type)
        self.nominal_values = nominal_values
        self.number_missing_values = number_missing_values
        self.ontologies = ontologies

    def __repr__(self) -> str:
        return "[%d - %s (%s)]" % (self.index, self.name, self.data_type)

    def __eq__(self, other: Any) -> bool:
        return isinstance(other, OpenMLDataFeature) and self.__dict__ == other.__dict__

    def _repr_pretty_(self, pp: pretty.PrettyPrinter, cycle: bool) -> None:  # noqa: FBT001, ARG002
        pp.text(str(self))
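
For illustration, a minimal sketch of constructing a feature by hand; in practice these objects are built by openml-python from the server's feature metadata, and the values below are made up:

from openml.datasets.data_feature import OpenMLDataFeature

# Hypothetical nominal feature with three possible values and no missing entries.
outlook = OpenMLDataFeature(
    index=0,
    name="outlook",
    data_type="nominal",
    nominal_values=["sunny", "overcast", "rainy"],
    number_missing_values=0,
)
print(outlook)  # [0 - outlook (nominal)]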

OpenMLDataset

Bases: OpenMLBase

Dataset object.

Allows fetching and uploading datasets to OpenML.

Parameters:

Name Type Description Default
name str

Name of the dataset.

required
description str

Description of the dataset.

required
data_format str

Format of the dataset which can be either 'arff' or 'sparse_arff'.

'arff'
cache_format str

Format for caching the dataset which can be either 'feather' or 'pickle'.

'pickle'
dataset_id int

Id autogenerated by the server.

None
version int

Version of this dataset. '1' for original version. Auto-incremented by server.

None
creator str

The person who created the dataset.

None
contributor str

People who contributed to the current version of the dataset.

None
collection_date str

The date the data was originally collected, given by the uploader.

None
upload_date str

The date-time when the dataset was uploaded, generated by server.

None
language str

Language in which the data is represented. Starts with 1 upper case letter, rest lower case, e.g. 'English'.

None
licence str

License of the data.

None
url str

Valid URL, points to actual data file. The file can be on the OpenML server or another dataset repository.

None
default_target_attribute str

The default target attribute, if it exists. Can have multiple values, comma separated.

None
row_id_attribute str

The attribute that represents the row-id column, if present in the dataset.

None
ignore_attribute str | list

Attributes that should be excluded in modelling, such as identifiers and indexes.

None
version_label str

Version label provided by user. Can be a date, hash, or some other type of id.

None
citation str

Reference(s) that should be cited when building on this data.

None
tag str

Tags, describing the algorithms.

None
visibility str

Who can see the dataset. Typical values: 'Everyone','All my friends','Only me'. Can also be any of the user's circles.

None
original_data_url str

For derived data, the url to the original dataset.

None
paper_url str

Link to a paper describing the dataset.

None
update_comment str

An explanation for when the dataset is uploaded.

None
md5_checksum str

MD5 checksum to check if the dataset is downloaded without corruption.

None
data_file str

Path to where the dataset is located.

None
features_file dict

A dictionary of dataset features, which maps a feature index to an OpenMLDataFeature.

None
qualities_file dict

A dictionary of dataset qualities, which maps a quality name to a quality value.

None
dataset str | None

Serialized arff dataset string.

None
parquet_url str | None

This is the URL to the storage location where the dataset files are hosted. This can be a MinIO bucket URL. If specified, the data will be accessed from this URL when reading the files.

None
parquet_file str | None

Path to the local file.

None
Source code in openml/datasets/dataset.py
class OpenMLDataset(OpenMLBase):
    """Dataset object.

    Allows fetching and uploading datasets to OpenML.

    Parameters
    ----------
    name : str
        Name of the dataset.
    description : str
        Description of the dataset.
    data_format : str
        Format of the dataset which can be either 'arff' or 'sparse_arff'.
    cache_format : str
        Format for caching the dataset which can be either 'feather' or 'pickle'.
    dataset_id : int, optional
        Id autogenerated by the server.
    version : int, optional
        Version of this dataset. '1' for original version.
        Auto-incremented by server.
    creator : str, optional
        The person who created the dataset.
    contributor : str, optional
        People who contributed to the current version of the dataset.
    collection_date : str, optional
        The date the data was originally collected, given by the uploader.
    upload_date : str, optional
        The date-time when the dataset was uploaded, generated by server.
    language : str, optional
        Language in which the data is represented.
        Starts with 1 upper case letter, rest lower case, e.g. 'English'.
    licence : str, optional
        License of the data.
    url : str, optional
        Valid URL, points to actual data file.
        The file can be on the OpenML server or another dataset repository.
    default_target_attribute : str, optional
        The default target attribute, if it exists.
        Can have multiple values, comma separated.
    row_id_attribute : str, optional
        The attribute that represents the row-id column,
        if present in the dataset.
    ignore_attribute : str | list, optional
        Attributes that should be excluded in modelling,
        such as identifiers and indexes.
    version_label : str, optional
        Version label provided by user.
        Can be a date, hash, or some other type of id.
    citation : str, optional
        Reference(s) that should be cited when building on this data.
    tag : str, optional
        Tags, describing the algorithms.
    visibility : str, optional
        Who can see the dataset.
        Typical values: 'Everyone','All my friends','Only me'.
        Can also be any of the user's circles.
    original_data_url : str, optional
        For derived data, the url to the original dataset.
    paper_url : str, optional
        Link to a paper describing the dataset.
    update_comment : str, optional
        An explanation for when the dataset is uploaded.
    md5_checksum : str, optional
        MD5 checksum to check if the dataset is downloaded without corruption.
    data_file : str, optional
        Path to where the dataset is located.
    features_file : dict, optional
        A dictionary of dataset features,
        which maps a feature index to an OpenMLDataFeature.
    qualities_file : dict, optional
        A dictionary of dataset qualities,
        which maps a quality name to a quality value.
    dataset: string, optional
        Serialized arff dataset string.
    parquet_url: string, optional
        This is the URL to the storage location where the dataset files are hosted.
        This can be a MinIO bucket URL. If specified, the data will be accessed
        from this URL when reading the files.
    parquet_file: string, optional
        Path to the local file.
    """

    def __init__(  # noqa: C901, PLR0912, PLR0913, PLR0915
        self,
        name: str,
        description: str | None,
        data_format: Literal["arff", "sparse_arff"] = "arff",
        cache_format: Literal["feather", "pickle"] = "pickle",
        dataset_id: int | None = None,
        version: int | None = None,
        creator: str | None = None,
        contributor: str | None = None,
        collection_date: str | None = None,
        upload_date: str | None = None,
        language: str | None = None,
        licence: str | None = None,
        url: str | None = None,
        default_target_attribute: str | None = None,
        row_id_attribute: str | None = None,
        ignore_attribute: str | list[str] | None = None,
        version_label: str | None = None,
        citation: str | None = None,
        tag: str | None = None,
        visibility: str | None = None,
        original_data_url: str | None = None,
        paper_url: str | None = None,
        update_comment: str | None = None,
        md5_checksum: str | None = None,
        data_file: str | None = None,
        features_file: str | None = None,
        qualities_file: str | None = None,
        dataset: str | None = None,
        parquet_url: str | None = None,
        parquet_file: str | None = None,
    ):
        if cache_format not in ["feather", "pickle"]:
            raise ValueError(
                "cache_format must be one of 'feather' or 'pickle. "
                f"Invalid format specified: {cache_format}",
            )

        def find_invalid_characters(string: str, pattern: str) -> str:
            invalid_chars = set()
            regex = re.compile(pattern)
            for char in string:
                if not regex.match(char):
                    invalid_chars.add(char)
            return ",".join(
                [f"'{char}'" if char != "'" else f'"{char}"' for char in invalid_chars],
            )

        if dataset_id is None:
            pattern = "^[\x00-\x7f]*$"
            if description and not re.match(pattern, description):
                # not basiclatin (XSD complains)
                invalid_characters = find_invalid_characters(description, pattern)
                raise ValueError(
                    f"Invalid symbols {invalid_characters} in description: {description}",
                )
            pattern = "^[\x00-\x7f]*$"
            if citation and not re.match(pattern, citation):
                # not basiclatin (XSD complains)
                invalid_characters = find_invalid_characters(citation, pattern)
                raise ValueError(
                    f"Invalid symbols {invalid_characters} in citation: {citation}",
                )
            pattern = "^[a-zA-Z0-9_\\-\\.\\(\\),]+$"
            if not re.match(pattern, name):
                # regex given by server in error message
                invalid_characters = find_invalid_characters(name, pattern)
                raise ValueError(f"Invalid symbols {invalid_characters} in name: {name}")

        self.ignore_attribute: list[str] | None = None
        if isinstance(ignore_attribute, str):
            self.ignore_attribute = [ignore_attribute]
        elif isinstance(ignore_attribute, list) or ignore_attribute is None:
            self.ignore_attribute = ignore_attribute
        else:
            raise ValueError("Wrong data type for ignore_attribute. Should be list.")

        # TODO add function to check if the name is casual_string128
        # Attributes received by querying the RESTful API
        self.dataset_id = int(dataset_id) if dataset_id is not None else None
        self.name = name
        self.version = int(version) if version is not None else None
        self.description = description
        self.cache_format = cache_format
        # Has to be called format, otherwise there will be an XML upload error
        self.format = data_format
        self.creator = creator
        self.contributor = contributor
        self.collection_date = collection_date
        self.upload_date = upload_date
        self.language = language
        self.licence = licence
        self.url = url
        self.default_target_attribute = default_target_attribute
        self.row_id_attribute = row_id_attribute

        self.version_label = version_label
        self.citation = citation
        self.tag = tag
        self.visibility = visibility
        self.original_data_url = original_data_url
        self.paper_url = paper_url
        self.update_comment = update_comment
        self.md5_checksum = md5_checksum
        self.data_file = data_file
        self.parquet_file = parquet_file
        self._dataset = dataset
        self._parquet_url = parquet_url

        self._features: dict[int, OpenMLDataFeature] | None = None
        self._qualities: dict[str, float] | None = None
        self._no_qualities_found = False

        if features_file is not None:
            self._features = _read_features(Path(features_file))

        # "" was the old default value by `get_dataset` and maybe still used by some
        if qualities_file == "":
            # TODO(0.15): to switch to "qualities_file is not None" below and remove warning
            warnings.warn(
                "Starting from Version 0.15 `qualities_file` must be None and not an empty string "
                "to avoid reading the qualities from file. Set `qualities_file` to None to avoid "
                "this warning.",
                FutureWarning,
                stacklevel=2,
            )
            qualities_file = None

        if qualities_file is not None:
            self._qualities = _read_qualities(Path(qualities_file))

        if data_file is not None:
            data_pickle, data_feather, feather_attribute = self._compressed_cache_file_paths(
                Path(data_file)
            )
            self.data_pickle_file = data_pickle if Path(data_pickle).exists() else None
            self.data_feather_file = data_feather if Path(data_feather).exists() else None
            self.feather_attribute_file = (
                feather_attribute if Path(feather_attribute).exists() else None
            )
        else:
            self.data_pickle_file = None
            self.data_feather_file = None
            self.feather_attribute_file = None

    @property
    def features(self) -> dict[int, OpenMLDataFeature]:
        """Get the features of this dataset."""
        if self._features is None:
            # TODO(eddiebergman): These should return a value so we can set it to be not None
            self._load_features()

        assert self._features is not None
        return self._features

    @property
    def qualities(self) -> dict[str, float] | None:
        """Get the qualities of this dataset."""
        # TODO(eddiebergman): Better docstring, I don't know what qualities means

        # We have to check `_no_qualities_found` as there might not be qualities for a dataset
        if self._qualities is None and (not self._no_qualities_found):
            self._load_qualities()

        return self._qualities

    @property
    def id(self) -> int | None:
        """Get the dataset numeric id."""
        return self.dataset_id

    def _get_repr_body_fields(self) -> Sequence[tuple[str, str | int | None]]:
        """Collect all information to display in the __repr__ body."""
        # Obtain number of features in accordance with lazy loading.
        n_features: int | None = None
        if self._qualities is not None and self._qualities["NumberOfFeatures"] is not None:
            n_features = int(self._qualities["NumberOfFeatures"])
        elif self._features is not None:
            n_features = len(self._features)

        fields: dict[str, int | str | None] = {
            "Name": self.name,
            "Version": self.version,
            "Format": self.format,
            "Licence": self.licence,
            "Download URL": self.url,
            "Data file": str(self.data_file) if self.data_file is not None else None,
            "Pickle file": (
                str(self.data_pickle_file) if self.data_pickle_file is not None else None
            ),
            "# of features": n_features,
        }
        if self.upload_date is not None:
            fields["Upload Date"] = self.upload_date.replace("T", " ")
        if self.dataset_id is not None:
            fields["OpenML URL"] = self.openml_url
        if self._qualities is not None and self._qualities["NumberOfInstances"] is not None:
            fields["# of instances"] = int(self._qualities["NumberOfInstances"])

        # determines the order in which the information will be printed
        order = [
            "Name",
            "Version",
            "Format",
            "Upload Date",
            "Licence",
            "Download URL",
            "OpenML URL",
            "Data File",
            "Pickle File",
            "# of features",
            "# of instances",
        ]
        return [(key, fields[key]) for key in order if key in fields]

    def __eq__(self, other: Any) -> bool:
        if not isinstance(other, OpenMLDataset):
            return False

        server_fields = {
            "dataset_id",
            "version",
            "upload_date",
            "url",
            "_parquet_url",
            "dataset",
            "data_file",
            "format",
            "cache_format",
        }

        cache_fields = {
            "_dataset",
            "data_file",
            "data_pickle_file",
            "data_feather_file",
            "feather_attribute_file",
            "parquet_file",
        }

        # check that common keys and values are identical
        ignore_fields = server_fields | cache_fields
        self_keys = set(self.__dict__.keys()) - ignore_fields
        other_keys = set(other.__dict__.keys()) - ignore_fields
        return self_keys == other_keys and all(
            self.__dict__[key] == other.__dict__[key] for key in self_keys
        )

    def _download_data(self) -> None:
        """Download ARFF data file to standard cache directory. Set `self.data_file`."""
        # import required here to avoid circular import.
        from .functions import _get_dataset_arff, _get_dataset_parquet

        skip_parquet = os.environ.get(OPENML_SKIP_PARQUET_ENV_VAR, "false").casefold() == "true"
        if self._parquet_url is not None and not skip_parquet:
            parquet_file = _get_dataset_parquet(self)
            self.parquet_file = None if parquet_file is None else str(parquet_file)
        if self.parquet_file is None:
            self.data_file = str(_get_dataset_arff(self))

    def _get_arff(self, format: str) -> dict:  # noqa: A002
        """Read ARFF file and return decoded arff.

        Reads the file referenced in self.data_file.

        Parameters
        ----------
        format : str
            Format of the ARFF file.
            Must be one of 'arff' or 'sparse_arff' or a string that will be either of those
            when converted to lower case.



        Returns
        -------
        dict
            Decoded arff.

        """
        # TODO: add a partial read method which only returns the attribute
        # headers of the corresponding .arff file!
        import struct

        filename = self.data_file
        assert filename is not None
        filepath = Path(filename)

        bits = 8 * struct.calcsize("P")

        # Files can be considered too large on a 32-bit system,
        # if it exceeds 120mb (slightly more than covtype dataset size)
        # This number is somewhat arbitrary.
        if bits != 64:
            MB_120 = 120_000_000
            file_size = filepath.stat().st_size
            if file_size > MB_120:
                raise NotImplementedError(
                    f"File {filename} too big for {file_size}-bit system ({bits} bytes).",
                )

        if format.lower() == "arff":
            return_type = arff.DENSE
        elif format.lower() == "sparse_arff":
            return_type = arff.COO
        else:
            raise ValueError(f"Unknown data format {format}")

        def decode_arff(fh: Any) -> dict:
            decoder = arff.ArffDecoder()
            return decoder.decode(fh, encode_nominal=True, return_type=return_type)  # type: ignore

        if filepath.suffix.endswith(".gz"):
            with gzip.open(filename) as zipfile:
                return decode_arff(zipfile)
        else:
            with filepath.open(encoding="utf8") as fh:
                return decode_arff(fh)

    def _parse_data_from_arff(  # noqa: C901, PLR0912, PLR0915
        self,
        arff_file_path: Path,
    ) -> tuple[pd.DataFrame | scipy.sparse.csr_matrix, list[bool], list[str]]:
        """Parse all required data from arff file.

        Parameters
        ----------
        arff_file_path : str
            Path to the file on disk.

        Returns
        -------
        Tuple[Union[pd.DataFrame, scipy.sparse.csr_matrix], List[bool], List[str]]
            DataFrame or csr_matrix: dataset
            List[bool]: List indicating which columns contain categorical variables.
            List[str]: List of column names.
        """
        try:
            data = self._get_arff(self.format)
        except OSError as e:
            logger.critical(
                f"Please check that the data file {arff_file_path} is there and can be read.",
            )
            raise e

        ARFF_DTYPES_TO_PD_DTYPE = {
            "INTEGER": "integer",
            "REAL": "floating",
            "NUMERIC": "floating",
            "STRING": "string",
        }
        attribute_dtype = {}
        attribute_names = []
        categories_names = {}
        categorical = []
        for name, type_ in data["attributes"]:
            # if the feature is nominal and a sparse matrix is
            # requested, the categories need to be numeric
            if isinstance(type_, list) and self.format.lower() == "sparse_arff":
                try:
                    # checks if the strings which should be the class labels
                    # can be encoded into integers
                    pd.factorize(type_)[0]
                except ValueError as e:
                    raise ValueError(
                        "Categorical data needs to be numeric when using sparse ARFF."
                    ) from e

            # string can only be supported with pandas DataFrame
            elif type_ == "STRING" and self.format.lower() == "sparse_arff":
                raise ValueError("Dataset containing strings is not supported with sparse ARFF.")

            # infer the dtype from the ARFF header
            if isinstance(type_, list):
                categorical.append(True)
                categories_names[name] = type_
                if len(type_) == 2:
                    type_norm = [cat.lower().capitalize() for cat in type_]
                    if {"True", "False"} == set(type_norm):
                        categories_names[name] = [cat == "True" for cat in type_norm]
                        attribute_dtype[name] = "boolean"
                    else:
                        attribute_dtype[name] = "categorical"
                else:
                    attribute_dtype[name] = "categorical"
            else:
                categorical.append(False)
                attribute_dtype[name] = ARFF_DTYPES_TO_PD_DTYPE[type_]
            attribute_names.append(name)

        if self.format.lower() == "sparse_arff":
            X = data["data"]
            X_shape = (max(X[1]) + 1, max(X[2]) + 1)
            X = scipy.sparse.coo_matrix((X[0], (X[1], X[2])), shape=X_shape, dtype=np.float32)
            X = X.tocsr()
        elif self.format.lower() == "arff":
            X = pd.DataFrame(data["data"], columns=attribute_names)

            col = []
            for column_name in X.columns:
                if attribute_dtype[column_name] in ("categorical", "boolean"):
                    categories = self._unpack_categories(
                        X[column_name],  # type: ignore
                        categories_names[column_name],
                    )
                    col.append(categories)
                elif attribute_dtype[column_name] in ("floating", "integer"):
                    X_col = X[column_name]
                    if X_col.min() >= 0 and X_col.max() <= 255:
                        try:
                            X_col_uint = X_col.astype("uint8")
                            if (X_col == X_col_uint).all():
                                col.append(X_col_uint)
                                continue
                        except ValueError:
                            pass
                    col.append(X[column_name])
                else:
                    col.append(X[column_name])
            X = pd.concat(col, axis=1)
        else:
            raise ValueError(f"Dataset format '{self.format}' is not a valid format.")

        return X, categorical, attribute_names  # type: ignore

    def _compressed_cache_file_paths(self, data_file: Path) -> tuple[Path, Path, Path]:
        data_pickle_file = data_file.with_suffix(".pkl.py3")
        data_feather_file = data_file.with_suffix(".feather")
        feather_attribute_file = data_file.with_suffix(".feather.attributes.pkl.py3")
        return data_pickle_file, data_feather_file, feather_attribute_file

    def _cache_compressed_file_from_file(
        self,
        data_file: Path,
    ) -> tuple[pd.DataFrame | scipy.sparse.csr_matrix, list[bool], list[str]]:
        """Store data from the local file in compressed format.

        If a local parquet file is present it will be used instead of the arff file.
        Sets cache_format to 'pickle' if data is sparse.
        """
        (
            data_pickle_file,
            data_feather_file,
            feather_attribute_file,
        ) = self._compressed_cache_file_paths(data_file)

        attribute_names, categorical, data = self._parse_data_from_file(data_file)

        # Feather format does not work for sparse datasets, so we use pickle for sparse datasets
        if scipy.sparse.issparse(data):
            self.cache_format = "pickle"

        logger.info(f"{self.cache_format} write {self.name}")
        if self.cache_format == "feather":
            assert isinstance(data, pd.DataFrame)

            data.to_feather(data_feather_file)
            with open(feather_attribute_file, "wb") as fh:  # noqa: PTH123
                pickle.dump((categorical, attribute_names), fh, pickle.HIGHEST_PROTOCOL)
            self.data_feather_file = data_feather_file
            self.feather_attribute_file = feather_attribute_file

        else:
            with open(data_pickle_file, "wb") as fh:  # noqa: PTH123
                pickle.dump((data, categorical, attribute_names), fh, pickle.HIGHEST_PROTOCOL)
            self.data_pickle_file = data_pickle_file

        data_file = data_pickle_file if self.cache_format == "pickle" else data_feather_file
        logger.debug(f"Saved dataset {int(self.dataset_id or -1)}: {self.name} to file {data_file}")

        return data, categorical, attribute_names

    def _parse_data_from_file(
        self,
        data_file: Path,
    ) -> tuple[list[str], list[bool], pd.DataFrame | scipy.sparse.csr_matrix]:
        if data_file.suffix == ".arff":
            data, categorical, attribute_names = self._parse_data_from_arff(data_file)
        elif data_file.suffix == ".pq":
            attribute_names, categorical, data = self._parse_data_from_pq(data_file)
        else:
            raise ValueError(f"Unknown file type for file '{data_file}'.")

        return attribute_names, categorical, data

    def _parse_data_from_pq(self, data_file: Path) -> tuple[list[str], list[bool], pd.DataFrame]:
        try:
            data = pd.read_parquet(data_file)
        except Exception as e:
            raise Exception(f"File: {data_file}") from e
        categorical = [data[c].dtype.name == "category" for c in data.columns]
        attribute_names = list(data.columns)
        return attribute_names, categorical, data

    def _load_data(self) -> tuple[pd.DataFrame, list[bool], list[str]]:  # noqa: PLR0912, C901, PLR0915
        """Load data from compressed format or arff. Download data if not present on disk."""
        need_to_create_pickle = self.cache_format == "pickle" and self.data_pickle_file is None
        need_to_create_feather = self.cache_format == "feather" and self.data_feather_file is None

        if need_to_create_pickle or need_to_create_feather:
            if self.data_file is None:
                self._download_data()

            file_to_load = self.data_file if self.parquet_file is None else self.parquet_file
            assert file_to_load is not None
            data, cats, attrs = self._cache_compressed_file_from_file(Path(file_to_load))
            return _ensure_dataframe(data, attrs), cats, attrs

        # helper variable to help identify where errors occur
        fpath = self.data_feather_file if self.cache_format == "feather" else self.data_pickle_file
        logger.info(f"{self.cache_format} load data {self.name}")
        try:
            if self.cache_format == "feather":
                assert self.data_feather_file is not None
                assert self.feather_attribute_file is not None

                data = pd.read_feather(self.data_feather_file)
                fpath = self.feather_attribute_file
                with self.feather_attribute_file.open("rb") as fh:
                    categorical, attribute_names = pickle.load(fh)  # noqa: S301
            else:
                assert self.data_pickle_file is not None
                with self.data_pickle_file.open("rb") as fh:
                    data, categorical, attribute_names = pickle.load(fh)  # noqa: S301

        except FileNotFoundError as e:
            raise ValueError(
                f"Cannot find file for dataset {self.name} at location '{fpath}'."
            ) from e
        except (EOFError, ModuleNotFoundError, ValueError, AttributeError) as e:
            error_message = getattr(e, "message", e.args[0])
            hint = ""

            if isinstance(e, EOFError):
                readable_error = "Detected a corrupt cache file"
            elif isinstance(e, (ModuleNotFoundError, AttributeError)):
                readable_error = "Detected likely dependency issues"
                hint = (
                    "This can happen if the cache was constructed with a different pandas version "
                    "than the one that is used to load the data. See also "
                )
                if isinstance(e, ModuleNotFoundError):
                    hint += "https://github.com/openml/openml-python/issues/918. "
                elif isinstance(e, AttributeError):
                    hint += "https://github.com/openml/openml-python/pull/1121. "

            elif isinstance(e, ValueError) and "unsupported pickle protocol" in e.args[0]:
                readable_error = "Encountered unsupported pickle protocol"
            else:
                raise e

            logger.warning(
                f"{readable_error} when loading dataset {self.id} from '{fpath}'. "
                f"{hint}"
                f"Error message was: {error_message}. "
                "We will continue loading data from the arff-file, "
                "but this will be much slower for big datasets. "
                "Please manually delete the cache file if you want OpenML-Python "
                "to attempt to reconstruct it.",
            )
            file_to_load = self.data_file if self.parquet_file is None else self.parquet_file
            assert file_to_load is not None
            attr, cat, df = self._parse_data_from_file(Path(file_to_load))
            return _ensure_dataframe(df), cat, attr

        data_up_to_date = isinstance(data, pd.DataFrame) or scipy.sparse.issparse(data)
        if self.cache_format == "pickle" and not data_up_to_date:
            logger.info("Updating outdated pickle file.")
            file_to_load = self.data_file if self.parquet_file is None else self.parquet_file
            assert file_to_load is not None

            data, cats, attrs = self._cache_compressed_file_from_file(Path(file_to_load))

        return _ensure_dataframe(data, attribute_names), categorical, attribute_names

    @staticmethod
    def _unpack_categories(series: pd.Series, categories: list) -> pd.Series:
        # nan-likes can not be explicitly specified as a category
        def valid_category(cat: Any) -> bool:
            return isinstance(cat, str) or (cat is not None and not np.isnan(cat))

        filtered_categories = [c for c in categories if valid_category(c)]
        col = []
        for x in series:
            try:
                col.append(categories[int(x)])
            except (TypeError, ValueError):
                col.append(np.nan)

        # We require two lines to create a series of categories as detailed here:
        # https://pandas.pydata.org/pandas-docs/version/0.24/user_guide/categorical.html#series-creation
        raw_cat = pd.Categorical(col, ordered=True, categories=filtered_categories)
        return pd.Series(raw_cat, index=series.index, name=series.name)

    def get_data(  # noqa: C901
        self,
        target: list[str] | str | None = None,
        include_row_id: bool = False,  # noqa: FBT001, FBT002
        include_ignore_attribute: bool = False,  # noqa: FBT001, FBT002
    ) -> tuple[pd.DataFrame, pd.Series | None, list[bool], list[str]]:
        """Returns dataset content as dataframes.

        Parameters
        ----------
        target : string, List[str] or None (default=None)
            Name of target column to separate from the data.
            Splitting multiple columns is currently not supported.
        include_row_id : boolean (default=False)
            Whether to include row ids in the returned dataset.
        include_ignore_attribute : boolean (default=False)
            Whether to include columns that are marked as "ignore"
            on the server in the dataset.


        Returns
        -------
        X : dataframe, shape (n_samples, n_columns)
            Dataset, may have sparse dtypes in the columns if required.
        y : pd.Series, shape (n_samples, ) or None
            Target column
        categorical_indicator : list[bool]
            Mask that indicates categorical features.
        attribute_names : list[str]
            List of attribute names.
        """
        data, categorical_mask, attribute_names = self._load_data()

        to_exclude = []
        if not include_row_id and self.row_id_attribute is not None:
            if isinstance(self.row_id_attribute, str):
                to_exclude.append(self.row_id_attribute)
            elif isinstance(self.row_id_attribute, Iterable):
                to_exclude.extend(self.row_id_attribute)

        if not include_ignore_attribute and self.ignore_attribute is not None:
            if isinstance(self.ignore_attribute, str):
                to_exclude.append(self.ignore_attribute)
            elif isinstance(self.ignore_attribute, Iterable):
                to_exclude.extend(self.ignore_attribute)

        if len(to_exclude) > 0:
            logger.info(f"Going to remove the following attributes: {to_exclude}")
            keep = np.array([column not in to_exclude for column in attribute_names])
            data = data.drop(columns=to_exclude)
            categorical_mask = [cat for cat, k in zip(categorical_mask, keep) if k]
            attribute_names = [att for att, k in zip(attribute_names, keep) if k]

        if target is None:
            return data, None, categorical_mask, attribute_names

        if isinstance(target, str):
            target_names = target.split(",") if "," in target else [target]
        else:
            target_names = target

        # All the assumptions below for the target are dependent on the number of targets being 1
        n_targets = len(target_names)
        if n_targets > 1:
            raise NotImplementedError(f"Number of targets {n_targets} not implemented.")

        target_name = target_names[0]
        x = data.drop(columns=[target_name])
        y = data[target_name].squeeze()

        # Finally, remove the target from the list of attributes and categorical mask
        target_index = attribute_names.index(target_name)
        categorical_mask.pop(target_index)
        attribute_names.remove(target_name)

        assert isinstance(y, pd.Series)
        return x, y, categorical_mask, attribute_names

    def _load_features(self) -> None:
        """Load the features metadata from the server and store it in the dataset object."""
        # Delayed Import to avoid circular imports or having to import all of dataset.functions to
        # import OpenMLDataset.
        from openml.datasets.functions import _get_dataset_features_file

        if self.dataset_id is None:
            raise ValueError(
                "No dataset id specified. Please set the dataset id. Otherwise we cannot load "
                "metadata.",
            )

        features_file = _get_dataset_features_file(None, self.dataset_id)
        self._features = _read_features(features_file)

    def _load_qualities(self) -> None:
        """Load qualities information from the server and store it in the dataset object."""
        # same reason as above for _load_features
        from openml.datasets.functions import _get_dataset_qualities_file

        if self.dataset_id is None:
            raise ValueError(
                "No dataset id specified. Please set the dataset id. Otherwise we cannot load "
                "metadata.",
            )

        qualities_file = _get_dataset_qualities_file(None, self.dataset_id)

        if qualities_file is None:
            self._no_qualities_found = True
        else:
            self._qualities = _read_qualities(qualities_file)

    def retrieve_class_labels(self, target_name: str = "class") -> None | list[str]:
        """Reads the datasets arff to determine the class-labels.

        If the task has no class labels (for example a regression problem)
        it returns None. Necessary because the data returned by get_data
        only contains the indices of the classes, while OpenML needs the real
        class name when uploading the results of a run.

        Parameters
        ----------
        target_name : str
            Name of the target attribute

        Returns
        -------
        list
        """
        for feature in self.features.values():
            if feature.name == target_name:
                if feature.data_type == "nominal":
                    return feature.nominal_values

                if feature.data_type == "string":
                    # Rel.: #1311
                    # The target is invalid for a classification task if the feature type is string
                # and not nominal. For such misconfigured tasks, we silently fix it here as
                # we can safely interpret string as nominal.
                    df, *_ = self.get_data()
                    return list(df[feature.name].unique())

        return None

    def get_features_by_type(  # noqa: C901
        self,
        data_type: str,
        exclude: list[str] | None = None,
        exclude_ignore_attribute: bool = True,  # noqa: FBT002, FBT001
        exclude_row_id_attribute: bool = True,  # noqa: FBT002, FBT001
    ) -> list[int]:
        """
        Return indices of features of a given type, e.g. all nominal features.
        Optional parameters to exclude various features by index or ontology.

        Parameters
        ----------
        data_type : str
            The data type to return (e.g., nominal, numeric, date, string)
        exclude : list(str)
            List of columns to exclude from the return value
        exclude_ignore_attribute : bool
            Whether to exclude the defined ignore attributes (and adapt the
            return values as if these indices are not present)
        exclude_row_id_attribute : bool
            Whether to exclude the defined row id attributes (and adapt the
            return values as if these indices are not present)

        Returns
        -------
        result : list
            a list of indices that have the specified data type
        """
        if data_type not in OpenMLDataFeature.LEGAL_DATA_TYPES:
            raise TypeError("Illegal feature type requested")
        if self.ignore_attribute is not None and not isinstance(self.ignore_attribute, list):
            raise TypeError("ignore_attribute should be a list")
        if self.row_id_attribute is not None and not isinstance(self.row_id_attribute, str):
            raise TypeError("row id attribute should be a str")
        if exclude is not None and not isinstance(exclude, list):
            raise TypeError("Exclude should be a list")
            # assert all(isinstance(elem, str) for elem in exclude),
            #            "Exclude should be a list of strings"
        to_exclude = []
        if exclude is not None:
            to_exclude.extend(exclude)
        if exclude_ignore_attribute and self.ignore_attribute is not None:
            to_exclude.extend(self.ignore_attribute)
        if exclude_row_id_attribute and self.row_id_attribute is not None:
            to_exclude.append(self.row_id_attribute)

        result = []
        offset = 0
        # this function assumes that everything in to_exclude will
        # be 'excluded' from the dataset (hence the offset)
        for idx in self.features:
            name = self.features[idx].name
            if name in to_exclude:
                offset += 1
            elif self.features[idx].data_type == data_type:
                result.append(idx - offset)
        return result

    def _get_file_elements(self) -> dict:
        """Adds the 'dataset' to file elements."""
        file_elements: dict = {}
        path = None if self.data_file is None else Path(self.data_file).absolute()

        if self._dataset is not None:
            file_elements["dataset"] = self._dataset
        elif path is not None and path.exists():
            with path.open("rb") as fp:
                file_elements["dataset"] = fp.read()

            try:
                dataset_utf8 = str(file_elements["dataset"], encoding="utf8")
                arff.ArffDecoder().decode(dataset_utf8, encode_nominal=True)
            except arff.ArffException as e:
                raise ValueError("The file you have provided is not a valid arff file.") from e

        elif self.url is None:
            raise ValueError("No valid url/path to the data file was given.")
        return file_elements

    def _parse_publish_response(self, xml_response: dict) -> None:
        """Parse the id from the xml_response and assign it to self."""
        self.dataset_id = int(xml_response["oml:upload_data_set"]["oml:id"])

    def _to_dict(self) -> dict[str, dict]:
        """Creates a dictionary representation of self."""
        props = [
            "id",
            "name",
            "version",
            "description",
            "format",
            "creator",
            "contributor",
            "collection_date",
            "upload_date",
            "language",
            "licence",
            "url",
            "default_target_attribute",
            "row_id_attribute",
            "ignore_attribute",
            "version_label",
            "citation",
            "tag",
            "visibility",
            "original_data_url",
            "paper_url",
            "update_comment",
            "md5_checksum",
        ]

        prop_values = {}
        for prop in props:
            content = getattr(self, prop, None)
            if content is not None:
                prop_values["oml:" + prop] = content

        return {
            "oml:data_set_description": {
                "@xmlns:oml": "http://openml.org/openml",
                **prop_values,
            }
        }

features: dict[int, OpenMLDataFeature] property

Get the features of this dataset.

id: int | None property

Get the dataset numeric id.

qualities: dict[str, float] | None property

Get the qualities of this dataset.
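
A short usage sketch of these properties, assuming dataset id 61 (the public 'iris' dataset) is reachable on the OpenML server:

import openml

dataset = openml.datasets.get_dataset(61)  # assumed example id
print(dataset.id)             # 61
print(len(dataset.features))  # feature metadata is fetched lazily from the server
if dataset.qualities is not None:  # qualities may be missing for some datasets
    print(dataset.qualities.get("NumberOfInstances"))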

get_data(target=None, include_row_id=False, include_ignore_attribute=False)

Returns dataset content as dataframes.

Parameters:

Name Type Description Default
target (string, List[str] or None(default=None))

Name of target column to separate from the data. Splitting multiple columns is currently not supported.

None
include_row_id boolean(default=False)

Whether to include row ids in the returned dataset.

False
include_ignore_attribute boolean(default=False)

Whether to include columns that are marked as "ignore" on the server in the dataset.

False

Returns:

Name Type Description
X (dataframe, shape(n_samples, n_columns))

Dataset, may have sparse dtypes in the columns if required.

y (Series, shape(n_samples) or None)

Target column

categorical_indicator list[bool]

Mask that indicates categorical features.

attribute_names list[str]

List of attribute names.

Source code in openml/datasets/dataset.py
def get_data(  # noqa: C901
    self,
    target: list[str] | str | None = None,
    include_row_id: bool = False,  # noqa: FBT001, FBT002
    include_ignore_attribute: bool = False,  # noqa: FBT001, FBT002
) -> tuple[pd.DataFrame, pd.Series | None, list[bool], list[str]]:
    """Returns dataset content as dataframes.

    Parameters
    ----------
    target : string, List[str] or None (default=None)
        Name of target column to separate from the data.
        Splitting multiple columns is currently not supported.
    include_row_id : boolean (default=False)
        Whether to include row ids in the returned dataset.
    include_ignore_attribute : boolean (default=False)
        Whether to include columns that are marked as "ignore"
        on the server in the dataset.


    Returns
    -------
    X : dataframe, shape (n_samples, n_columns)
        Dataset, may have sparse dtypes in the columns if required.
    y : pd.Series, shape (n_samples, ) or None
        Target column
    categorical_indicator : list[bool]
        Mask that indicates categorical features.
    attribute_names : list[str]
        List of attribute names.
    """
    data, categorical_mask, attribute_names = self._load_data()

    to_exclude = []
    if not include_row_id and self.row_id_attribute is not None:
        if isinstance(self.row_id_attribute, str):
            to_exclude.append(self.row_id_attribute)
        elif isinstance(self.row_id_attribute, Iterable):
            to_exclude.extend(self.row_id_attribute)

    if not include_ignore_attribute and self.ignore_attribute is not None:
        if isinstance(self.ignore_attribute, str):
            to_exclude.append(self.ignore_attribute)
        elif isinstance(self.ignore_attribute, Iterable):
            to_exclude.extend(self.ignore_attribute)

    if len(to_exclude) > 0:
        logger.info(f"Going to remove the following attributes: {to_exclude}")
        keep = np.array([column not in to_exclude for column in attribute_names])
        data = data.drop(columns=to_exclude)
        categorical_mask = [cat for cat, k in zip(categorical_mask, keep) if k]
        attribute_names = [att for att, k in zip(attribute_names, keep) if k]

    if target is None:
        return data, None, categorical_mask, attribute_names

    if isinstance(target, str):
        target_names = target.split(",") if "," in target else [target]
    else:
        target_names = target

    # All the assumptions below for the target are dependent on the number of targets being 1
    n_targets = len(target_names)
    if n_targets > 1:
        raise NotImplementedError(f"Number of targets {n_targets} not implemented.")

    target_name = target_names[0]
    x = data.drop(columns=[target_name])
    y = data[target_name].squeeze()

    # Finally, remove the target from the list of attributes and categorical mask
    target_index = attribute_names.index(target_name)
    categorical_mask.pop(target_index)
    attribute_names.remove(target_name)

    assert isinstance(y, pd.Series)
    return x, y, categorical_mask, attribute_names
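
A minimal sketch of separating the default target column, assuming dataset id 61 ('iris') is available on the public server:

import openml

dataset = openml.datasets.get_dataset(61)  # assumed example id
X, y, categorical_indicator, attribute_names = dataset.get_data(
    target=dataset.default_target_attribute,
)
print(X.shape, y.name, sum(categorical_indicator), attribute_names[:2])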

get_features_by_type(data_type, exclude=None, exclude_ignore_attribute=True, exclude_row_id_attribute=True)

Return indices of features of a given type, e.g. all nominal features. Optional parameters to exclude various features by index or ontology.

Parameters:

Name Type Description Default
data_type str

The data type to return (e.g., nominal, numeric, date, string)

required
exclude list(str)

List of columns to exclude from the return value

None
exclude_ignore_attribute bool

Whether to exclude the defined ignore attributes (and adapt the return values as if these indices are not present)

True
exclude_row_id_attribute bool

Whether to exclude the defined row id attributes (and adapt the return values as if these indices are not present)

True

Returns:

Name Type Description
result list

a list of indices that have the specified data type

Source code in openml/datasets/dataset.py
def get_features_by_type(  # noqa: C901
    self,
    data_type: str,
    exclude: list[str] | None = None,
    exclude_ignore_attribute: bool = True,  # noqa: FBT002, FBT001
    exclude_row_id_attribute: bool = True,  # noqa: FBT002, FBT001
) -> list[int]:
    """
    Return indices of features of a given type, e.g. all nominal features.
    Optional parameters to exclude various features by index or ontology.

    Parameters
    ----------
    data_type : str
        The data type to return (e.g., nominal, numeric, date, string)
    exclude : list(str)
        List of columns to exclude from the return value
    exclude_ignore_attribute : bool
        Whether to exclude the defined ignore attributes (and adapt the
        return values as if these indices are not present)
    exclude_row_id_attribute : bool
        Whether to exclude the defined row id attributes (and adapt the
        return values as if these indices are not present)

    Returns
    -------
    result : list
        a list of indices that have the specified data type
    """
    if data_type not in OpenMLDataFeature.LEGAL_DATA_TYPES:
        raise TypeError("Illegal feature type requested")
    if self.ignore_attribute is not None and not isinstance(self.ignore_attribute, list):
        raise TypeError("ignore_attribute should be a list")
    if self.row_id_attribute is not None and not isinstance(self.row_id_attribute, str):
        raise TypeError("row id attribute should be a str")
    if exclude is not None and not isinstance(exclude, list):
        raise TypeError("Exclude should be a list")
        # assert all(isinstance(elem, str) for elem in exclude),
        #            "Exclude should be a list of strings"
    to_exclude = []
    if exclude is not None:
        to_exclude.extend(exclude)
    if exclude_ignore_attribute and self.ignore_attribute is not None:
        to_exclude.extend(self.ignore_attribute)
    if exclude_row_id_attribute and self.row_id_attribute is not None:
        to_exclude.append(self.row_id_attribute)

    result = []
    offset = 0
    # this function assumes that everything in to_exclude will
    # be 'excluded' from the dataset (hence the offset)
    for idx in self.features:
        name = self.features[idx].name
        if name in to_exclude:
            offset += 1
        elif self.features[idx].data_type == data_type:
            result.append(idx - offset)
    return result
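
A minimal usage sketch (not part of the library documentation; the dataset id 61 and the column name "sepallength" are assumptions about the iris dataset, used purely for illustration):

import openml

# Fetch the dataset description together with its feature metadata
dataset = openml.datasets.get_dataset(61, download_features_meta_data=True)

# Indices of all nominal features; ignore and row-id attributes are skipped by default
nominal_idx = dataset.get_features_by_type("nominal")

# Indices of numeric features, additionally excluding one column by name
numeric_idx = dataset.get_features_by_type("numeric", exclude=["sepallength"])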

retrieve_class_labels(target_name='class')

Reads the dataset's ARFF to determine the class labels.

If the task has no class labels (for example a regression problem) it returns None. This is necessary because the data returned by get_data only contains the indices of the classes, while OpenML needs the real class name when uploading the results of a run.

Parameters:

Name Type Description Default
target_name str

Name of the target attribute

'class'

Returns:

Type Description
list
Source code in openml/datasets/dataset.py
def retrieve_class_labels(self, target_name: str = "class") -> None | list[str]:
    """Reads the datasets arff to determine the class-labels.

    If the task has no class labels (for example a regression problem)
    it returns None. Necessary because the data returned by get_data
    only contains the indices of the classes, while OpenML needs the real
    classname when uploading the results of a run.

    Parameters
    ----------
    target_name : str
        Name of the target attribute

    Returns
    -------
    list
    """
    for feature in self.features.values():
        if feature.name == target_name:
            if feature.data_type == "nominal":
                return feature.nominal_values

            if feature.data_type == "string":
                # Rel.: #1311
                # The target is invalid for a classification task if the feature type is string
                # and not nominal. For such misconfigured tasks, we silently fix it here, as
                # we can safely interpret string as nominal.
                df, *_ = self.get_data()
                return list(df[feature.name].unique())

    return None
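
A short, hedged sketch of how this method is typically combined with get_dataset (the dataset id 61 and the target name "class" are illustrative assumptions):

import openml

dataset = openml.datasets.get_dataset(61, download_features_meta_data=True)

# Returns the list of nominal values of the target, or None (e.g. for regression data)
labels = dataset.retrieve_class_labels(target_name="class")
print(labels)  # expected for iris: ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']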

attributes_arff_from_df(df)

Describe attributes of the dataframe according to ARFF specification.

Parameters:

Name Type Description Default
df (DataFrame, shape(n_samples, n_features))

The dataframe containing the data set.

required

Returns:

Name Type Description
attributes_arff list[str]

The data set attributes as required by the ARFF format.

Source code in openml/datasets/functions.py
def attributes_arff_from_df(df: pd.DataFrame) -> list[tuple[str, list[str] | str]]:
    """Describe attributes of the dataframe according to ARFF specification.

    Parameters
    ----------
    df : DataFrame, shape (n_samples, n_features)
        The dataframe containing the data set.

    Returns
    -------
    attributes_arff : list[str]
        The data set attributes as required by the ARFF format.
    """
    PD_DTYPES_TO_ARFF_DTYPE = {"integer": "INTEGER", "floating": "REAL", "string": "STRING"}
    attributes_arff: list[tuple[str, list[str] | str]] = []

    if not all(isinstance(column_name, str) for column_name in df.columns):
        logger.warning("Converting non-str column names to str.")
        df.columns = [str(column_name) for column_name in df.columns]

    for column_name in df:
        # skipna=True does not properly infer the dtype. The NA values are
        # dropped before the inference instead.
        column_dtype = pd.api.types.infer_dtype(df[column_name].dropna(), skipna=False)

        if column_dtype == "categorical":
            # for a categorical feature, arff expects a list of strings. However, a
            # categorical column can contain mixed types and should therefore
            # raise an error asking to convert all entries to string.
            categories = df[column_name].cat.categories
            categories_dtype = pd.api.types.infer_dtype(categories)
            if categories_dtype not in ("string", "unicode"):
                raise ValueError(
                    f"The column '{column_name}' of the dataframe is of "
                    "'category' dtype. Therefore, all values in "
                    "this columns should be string. Please "
                    "convert the entries which are not string. "
                    f"Got {categories_dtype} dtype in this column.",
                )
            attributes_arff.append((column_name, categories.tolist()))
        elif column_dtype == "boolean":
            # boolean are encoded as categorical.
            attributes_arff.append((column_name, ["True", "False"]))
        elif column_dtype in PD_DTYPES_TO_ARFF_DTYPE:
            attributes_arff.append((column_name, PD_DTYPES_TO_ARFF_DTYPE[column_dtype]))
        else:
            raise ValueError(
                f"The dtype '{column_dtype}' of the column '{column_name}' is not "
                "currently supported by liac-arff. Supported "
                "dtypes are categorical, string, integer, "
                "floating, and boolean.",
            )
    return attributes_arff
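
A self-contained sketch of the mapping this function performs (the dataframe is made up for illustration, and the import path assumes the function is exported from openml.datasets as documented here):

import pandas as pd
from openml.datasets import attributes_arff_from_df

df = pd.DataFrame(
    {
        "age": [23, 35, 41],                  # inferred as integer  -> "INTEGER"
        "height": [1.71, 1.80, 1.65],         # inferred as floating -> "REAL"
        "city": pd.Series(["NY", "LA", "NY"], dtype="category"),  # -> list of categories
        "member": [True, False, True],        # boolean -> ["True", "False"]
    }
)

print(attributes_arff_from_df(df))
# Roughly: [('age', 'INTEGER'), ('height', 'REAL'),
#           ('city', ['LA', 'NY']), ('member', ['True', 'False'])]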

check_datasets_active(dataset_ids, raise_error_if_not_exist=True)

Check if the dataset ids provided are active.

Raises an error if a dataset_id in the given list of dataset_ids does not exist on the server and raise_error_if_not_exist is set to True (default).

Parameters:

Name Type Description Default
dataset_ids List[int]

A list of integers representing dataset ids.

required
raise_error_if_not_exist bool(default=True)

Flag that if activated can raise an error, if one or more of the given dataset ids do not exist on the server.

True

Returns:

Type Description
dict

A dictionary with items {did: bool}

Source code in openml/datasets/functions.py
def check_datasets_active(
    dataset_ids: list[int],
    raise_error_if_not_exist: bool = True,  # noqa: FBT001, FBT002
) -> dict[int, bool]:
    """
    Check if the dataset ids provided are active.

    Raises an error if a dataset_id in the given list
    of dataset_ids does not exist on the server and
    `raise_error_if_not_exist` is set to True (default).

    Parameters
    ----------
    dataset_ids : List[int]
        A list of integers representing dataset ids.
    raise_error_if_not_exist : bool (default=True)
        Flag that if activated can raise an error, if one or more of the
        given dataset ids do not exist on the server.

    Returns
    -------
    dict
        A dictionary with items {did: bool}
    """
    datasets = list_datasets(status="all", data_id=dataset_ids)
    missing = set(dataset_ids) - set(datasets.index)
    if raise_error_if_not_exist and missing:
        missing_str = ", ".join(str(did) for did in missing)
        raise ValueError(f"Could not find dataset(s) {missing_str} in OpenML dataset list.")
    mask = datasets["status"] == "active"
    return dict(mask)
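
A brief sketch (the dataset ids are arbitrary examples):

import openml

# Returns a dict mapping dataset id -> whether that dataset is 'active'
active = openml.datasets.check_datasets_active([2, 61])

# With raise_error_if_not_exist=False, ids unknown to the server are simply
# absent from the returned dictionary instead of raising a ValueError
active = openml.datasets.check_datasets_active(
    [61, 999999999],
    raise_error_if_not_exist=False,
)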

create_dataset(name, description, creator, contributor, collection_date, language, licence, attributes, data, default_target_attribute, ignore_attribute, citation, row_id_attribute=None, original_data_url=None, paper_url=None, update_comment=None, version_label=None)

Create a dataset.

This function creates an OpenMLDataset object. The OpenMLDataset object contains information related to the dataset and the actual data file.

Parameters:

Name Type Description Default
name str

Name of the dataset.

required
description str

Description of the dataset.

required
creator str

The person who created the dataset.

required
contributor str

People who contributed to the current version of the dataset.

required
collection_date str

The date the data was originally collected, given by the uploader.

required
language str

Language in which the data is represented. Starts with 1 upper case letter, rest lower case, e.g. 'English'.

required
licence str

License of the data.

required
attributes list, dict, or 'auto'

A list of tuples. Each tuple consists of the attribute name and type. If passing a pandas DataFrame, the attributes can be automatically inferred by passing 'auto'. Specific attributes can be manually specified by passing a dictionary where the key is the name of the attribute and the value is the data type of the attribute.

required
data (ndarray, list, dataframe, coo_matrix, shape(n_samples, n_features))

An array that contains both the attributes and the targets. When providing a dataframe, the attribute names and type can be inferred by passing attributes='auto'. The target feature is indicated as meta-data of the dataset.

required
default_target_attribute str

The default target attribute, if it exists. Can have multiple values, comma separated.

required
ignore_attribute str | list

Attributes that should be excluded in modelling, such as identifiers and indexes. Can have multiple values, comma separated.

required
citation str

Reference(s) that should be cited when building on this data.

required
version_label str

Version label provided by user. Can be a date, hash, or some other type of id.

None
row_id_attribute str

The attribute that represents the row-id column, if present in the dataset. If data is a dataframe and row_id_attribute is not specified, the index of the dataframe will be used as the row_id_attribute. If the name of the index is None, it will be discarded.

.. versionadded: 0.8 Inference of row_id_attribute from a dataframe.

None
original_data_url str

For derived data, the url to the original dataset.

None
paper_url str

Link to a paper describing the dataset.

None
update_comment str

An explanation for when the dataset is uploaded.

None

Returns:

Name Type Description
class `openml.OpenMLDataset`
Dataset description.
Source code in openml/datasets/functions.py
def create_dataset(  # noqa: C901, PLR0912, PLR0915
    name: str,
    description: str | None,
    creator: str | None,
    contributor: str | None,
    collection_date: str | None,
    language: str | None,
    licence: str | None,
    # TODO(eddiebergman): Docstring says `type` but I don't know what this is other than strings
    # Edit: Found it could also be like ["True", "False"]
    attributes: list[tuple[str, str | list[str]]] | dict[str, str | list[str]] | Literal["auto"],
    data: pd.DataFrame | np.ndarray | scipy.sparse.coo_matrix,
    # TODO(eddiebergman): Function requires `default_target_attribute` exist but API allows None
    default_target_attribute: str,
    ignore_attribute: str | list[str] | None,
    citation: str,
    row_id_attribute: str | None = None,
    original_data_url: str | None = None,
    paper_url: str | None = None,
    update_comment: str | None = None,
    version_label: str | None = None,
) -> OpenMLDataset:
    """Create a dataset.

    This function creates an OpenMLDataset object.
    The OpenMLDataset object contains information related to the dataset
    and the actual data file.

    Parameters
    ----------
    name : str
        Name of the dataset.
    description : str
        Description of the dataset.
    creator : str
        The person who created the dataset.
    contributor : str
        People who contributed to the current version of the dataset.
    collection_date : str
        The date the data was originally collected, given by the uploader.
    language : str
        Language in which the data is represented.
        Starts with 1 upper case letter, rest lower case, e.g. 'English'.
    licence : str
        License of the data.
    attributes : list, dict, or 'auto'
        A list of tuples. Each tuple consists of the attribute name and type.
        If passing a pandas DataFrame, the attributes can be automatically
        inferred by passing ``'auto'``. Specific attributes can be manually
        specified by passing a dictionary where the key is the name of the
        attribute and the value is the data type of the attribute.
    data : ndarray, list, dataframe, coo_matrix, shape (n_samples, n_features)
        An array that contains both the attributes and the targets. When
        providing a dataframe, the attribute names and type can be inferred by
        passing ``attributes='auto'``.
        The target feature is indicated as meta-data of the dataset.
    default_target_attribute : str
        The default target attribute, if it exists.
        Can have multiple values, comma separated.
    ignore_attribute : str | list
        Attributes that should be excluded in modelling,
        such as identifiers and indexes.
        Can have multiple values, comma separated.
    citation : str
        Reference(s) that should be cited when building on this data.
    version_label : str, optional
        Version label provided by user.
         Can be a date, hash, or some other type of id.
    row_id_attribute : str, optional
        The attribute that represents the row-id column, if present in the
        dataset. If ``data`` is a dataframe and ``row_id_attribute`` is not
        specified, the index of the dataframe will be used as the
        ``row_id_attribute``. If the name of the index is ``None``, it will
        be discarded.

        .. versionadded: 0.8
            Inference of ``row_id_attribute`` from a dataframe.
    original_data_url : str, optional
        For derived data, the url to the original dataset.
    paper_url : str, optional
        Link to a paper describing the dataset.
    update_comment : str, optional
        An explanation for when the dataset is uploaded.

    Returns
    -------
    :class:`openml.OpenMLDataset`
        Dataset description.
    """
    if isinstance(data, pd.DataFrame):
        # infer the row id from the index of the dataset
        if row_id_attribute is None:
            row_id_attribute = data.index.name
        # When calling data.values, the index will be skipped.
        # We need to reset the index such that it is part of the data.
        if data.index.name is not None:
            data = data.reset_index()

    if attributes == "auto" or isinstance(attributes, dict):
        if not isinstance(data, pd.DataFrame):
            raise ValueError(
                "Automatically inferring attributes requires "
                f"a pandas DataFrame. A {data!r} was given instead.",
            )
        # infer the type of data for each column of the DataFrame
        attributes_ = attributes_arff_from_df(data)
        if isinstance(attributes, dict):
            # override the attributes which was specified by the user
            for attr_idx in range(len(attributes_)):
                attr_name = attributes_[attr_idx][0]
                if attr_name in attributes:
                    attributes_[attr_idx] = (attr_name, attributes[attr_name])
    else:
        attributes_ = attributes
    ignore_attributes = _expand_parameter(ignore_attribute)
    _validated_data_attributes(ignore_attributes, attributes_, "ignore_attribute")

    default_target_attributes = _expand_parameter(default_target_attribute)
    _validated_data_attributes(default_target_attributes, attributes_, "default_target_attribute")

    if row_id_attribute is not None:
        is_row_id_an_attribute = any(attr[0] == row_id_attribute for attr in attributes_)
        if not is_row_id_an_attribute:
            raise ValueError(
                "'row_id_attribute' should be one of the data attribute. "
                f" Got '{row_id_attribute}' while candidates are"
                f" {[attr[0] for attr in attributes_]}.",
            )

    if isinstance(data, pd.DataFrame):
        if all(isinstance(dtype, pd.SparseDtype) for dtype in data.dtypes):
            data = data.sparse.to_coo()
            # liac-arff only support COO matrices with sorted rows
            row_idx_sorted = np.argsort(data.row)  # type: ignore
            data.row = data.row[row_idx_sorted]  # type: ignore
            data.col = data.col[row_idx_sorted]  # type: ignore
            data.data = data.data[row_idx_sorted]  # type: ignore
        else:
            data = data.to_numpy()

    data_format: Literal["arff", "sparse_arff"]
    if isinstance(data, (list, np.ndarray)):
        if isinstance(data[0], (list, np.ndarray)):
            data_format = "arff"
        elif isinstance(data[0], dict):
            data_format = "sparse_arff"
        else:
            raise ValueError(
                "When giving a list or a numpy.ndarray, "
                "they should contain a list/ numpy.ndarray "
                "for dense data or a dictionary for sparse "
                f"data. Got {data[0]!r} instead.",
            )
    elif isinstance(data, coo_matrix):
        data_format = "sparse_arff"
    else:
        raise ValueError(
            "Data must be a pandas DataFrame, a list or numpy.ndarray of rows, "
            "or a scipy.sparse.coo_matrix. "
            f"Got {type(data)!r} instead.",
        )

    arff_object = {
        "relation": name,
        "description": description,
        "attributes": attributes_,
        "data": data,
    }

    # serializes the ARFF dataset object and returns a string
    arff_dataset = arff.dumps(arff_object)
    try:
        # check if ARFF is valid
        decoder = arff.ArffDecoder()
        return_type = arff.COO if data_format == "sparse_arff" else arff.DENSE
        decoder.decode(arff_dataset, encode_nominal=True, return_type=return_type)
    except arff.ArffException as e:
        raise ValueError(
            "The arguments you have provided do not construct a valid ARFF file"
        ) from e

    return OpenMLDataset(
        name=name,
        description=description,
        data_format=data_format,
        creator=creator,
        contributor=contributor,
        collection_date=collection_date,
        language=language,
        licence=licence,
        default_target_attribute=default_target_attribute,
        row_id_attribute=row_id_attribute,
        ignore_attribute=ignore_attribute,
        citation=citation,
        version_label=version_label,
        original_data_url=original_data_url,
        paper_url=paper_url,
        update_comment=update_comment,
        dataset=arff_dataset,
    )
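
A compact sketch of building (but not yet uploading) a dataset from a pandas DataFrame; all metadata values are placeholders, and the commented-out publish() call assumes the usual upload method inherited from OpenMLBase:

import pandas as pd
import openml

df = pd.DataFrame(
    {
        "sepal_length": [5.1, 4.9, 6.3],
        "species": pd.Series(["setosa", "setosa", "virginica"], dtype="category"),
    }
)

dataset = openml.datasets.create_dataset(
    name="toy-iris-subset",
    description="A tiny illustrative subset of iris.",
    creator="Jane Doe",
    contributor=None,
    collection_date="2024-01-01",
    language="English",
    licence="CC0",
    attributes="auto",                      # infer ARFF attributes from the dataframe
    data=df,
    default_target_attribute="species",
    ignore_attribute=None,
    citation="No citation.",
)

# dataset.publish()  # uploads to the server and requires a configured API key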

delete_dataset(dataset_id)

Delete dataset with id dataset_id from the OpenML server.

This can only be done if you are the owner of the dataset and no tasks are attached to the dataset.

Parameters:

Name Type Description Default
dataset_id int

OpenML id of the dataset

required

Returns:

Type Description
bool

True if the deletion was successful. False otherwise.

Source code in openml/datasets/functions.py
def delete_dataset(dataset_id: int) -> bool:
    """Delete dataset with id `dataset_id` from the OpenML server.

    This can only be done if you are the owner of the dataset and
    no tasks are attached to the dataset.

    Parameters
    ----------
    dataset_id : int
        OpenML id of the dataset

    Returns
    -------
    bool
        True if the deletion was successful. False otherwise.
    """
    return openml.utils._delete_entity("data", dataset_id)
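
A one-line sketch (the id is a placeholder; deletion only works for datasets you own and that have no attached tasks):

import openml

success = openml.datasets.delete_dataset(12345)  # True if the deletion succeeded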

edit_dataset(data_id, description=None, creator=None, contributor=None, collection_date=None, language=None, default_target_attribute=None, ignore_attribute=None, citation=None, row_id_attribute=None, original_data_url=None, paper_url=None)

Edits an OpenMLDataset.

In addition to providing the dataset id of the dataset to edit (through data_id), you must specify a value for at least one of the optional function arguments, i.e. one value for a field to edit.

This function allows editing of both non-critical and critical fields. Critical fields are default_target_attribute, ignore_attribute, row_id_attribute.

  • Editing non-critical data fields is allowed for all authenticated users.
  • Editing critical fields is allowed only for the owner, provided there are no tasks associated with this dataset.

If the dataset has tasks or the user is not the owner, the only way to edit critical fields is to use fork_dataset followed by edit_dataset.

Parameters:

Name Type Description Default
data_id int

ID of the dataset.

required
description str

Description of the dataset.

None
creator str

The person who created the dataset.

None
contributor str

People who contributed to the current version of the dataset.

None
collection_date str

The date the data was originally collected, given by the uploader.

None
language str

Language in which the data is represented. Starts with 1 upper case letter, rest lower case, e.g. 'English'.

None
default_target_attribute str

The default target attribute, if it exists. Can have multiple values, comma separated.

None
ignore_attribute str | list

Attributes that should be excluded in modelling, such as identifiers and indexes.

None
citation str

Reference(s) that should be cited when building on this data.

None
row_id_attribute str

The attribute that represents the row-id column, if present in the dataset. If data is a dataframe and row_id_attribute is not specified, the index of the dataframe will be used as the row_id_attribute. If the name of the index is None, it will be discarded.

.. versionadded: 0.8 Inference of row_id_attribute from a dataframe.

None
original_data_url str

For derived data, the url to the original dataset.

None
paper_url str

Link to a paper describing the dataset.

None

Returns:

Type Description
Dataset id
Source code in openml/datasets/functions.py
def edit_dataset(
    data_id: int,
    description: str | None = None,
    creator: str | None = None,
    contributor: str | None = None,
    collection_date: str | None = None,
    language: str | None = None,
    default_target_attribute: str | None = None,
    ignore_attribute: str | list[str] | None = None,
    citation: str | None = None,
    row_id_attribute: str | None = None,
    original_data_url: str | None = None,
    paper_url: str | None = None,
) -> int:
    """Edits an OpenMLDataset.

    In addition to providing the dataset id of the dataset to edit (through data_id),
    you must specify a value for at least one of the optional function arguments,
    i.e. one value for a field to edit.

    This function allows editing of both non-critical and critical fields.
    Critical fields are default_target_attribute, ignore_attribute, row_id_attribute.

     - Editing non-critical data fields is allowed for all authenticated users.
     - Editing critical fields is allowed only for the owner, provided there are no tasks
       associated with this dataset.

    If the dataset has tasks or the user is not the owner, the only way
    to edit critical fields is to use fork_dataset followed by edit_dataset.

    Parameters
    ----------
    data_id : int
        ID of the dataset.
    description : str
        Description of the dataset.
    creator : str
        The person who created the dataset.
    contributor : str
        People who contributed to the current version of the dataset.
    collection_date : str
        The date the data was originally collected, given by the uploader.
    language : str
        Language in which the data is represented.
        Starts with 1 upper case letter, rest lower case, e.g. 'English'.
    default_target_attribute : str
        The default target attribute, if it exists.
        Can have multiple values, comma separated.
    ignore_attribute : str | list
        Attributes that should be excluded in modelling,
        such as identifiers and indexes.
    citation : str
        Reference(s) that should be cited when building on this data.
    row_id_attribute : str, optional
        The attribute that represents the row-id column, if present in the
        dataset. If ``data`` is a dataframe and ``row_id_attribute`` is not
        specified, the index of the dataframe will be used as the
        ``row_id_attribute``. If the name of the index is ``None``, it will
        be discarded.

        .. versionadded: 0.8
            Inference of ``row_id_attribute`` from a dataframe.
    original_data_url : str, optional
        For derived data, the url to the original dataset.
    paper_url : str, optional
        Link to a paper describing the dataset.

    Returns
    -------
    Dataset id
    """
    if not isinstance(data_id, int):
        raise TypeError(f"`data_id` must be of type `int`, not {type(data_id)}.")

    # compose data edit parameters as xml
    form_data = {"data_id": data_id}  # type: openml._api_calls.DATA_TYPE
    xml = OrderedDict()  # type: 'OrderedDict[str, OrderedDict]'
    xml["oml:data_edit_parameters"] = OrderedDict()
    xml["oml:data_edit_parameters"]["@xmlns:oml"] = "http://openml.org/openml"
    xml["oml:data_edit_parameters"]["oml:description"] = description
    xml["oml:data_edit_parameters"]["oml:creator"] = creator
    xml["oml:data_edit_parameters"]["oml:contributor"] = contributor
    xml["oml:data_edit_parameters"]["oml:collection_date"] = collection_date
    xml["oml:data_edit_parameters"]["oml:language"] = language
    xml["oml:data_edit_parameters"]["oml:default_target_attribute"] = default_target_attribute
    xml["oml:data_edit_parameters"]["oml:row_id_attribute"] = row_id_attribute
    xml["oml:data_edit_parameters"]["oml:ignore_attribute"] = ignore_attribute
    xml["oml:data_edit_parameters"]["oml:citation"] = citation
    xml["oml:data_edit_parameters"]["oml:original_data_url"] = original_data_url
    xml["oml:data_edit_parameters"]["oml:paper_url"] = paper_url

    # delete None inputs
    for k in list(xml["oml:data_edit_parameters"]):
        if not xml["oml:data_edit_parameters"][k]:
            del xml["oml:data_edit_parameters"][k]

    file_elements = {
        "edit_parameters": ("description.xml", xmltodict.unparse(xml)),
    }  # type: openml._api_calls.FILE_ELEMENTS_TYPE
    result_xml = openml._api_calls._perform_api_call(
        "data/edit",
        "post",
        data=form_data,
        file_elements=file_elements,
    )
    result = xmltodict.parse(result_xml)
    data_id = result["oml:data_edit"]["oml:id"]
    return int(data_id)
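
A sketch of editing non-critical metadata (the id and field values are placeholders):

import openml

# Returns the id of the edited dataset
data_id = openml.datasets.edit_dataset(
    data_id=12345,
    description="Updated description with corrected provenance information.",
    citation="Doe et al., 2024.",
)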

fork_dataset(data_id)

Creates a new dataset version, with the authenticated user as the new owner. The forked dataset can have distinct dataset meta-data, but the actual data itself is shared with the original version.

This API is intended for use when a user is unable to edit the critical fields of a dataset through the edit_dataset API. (Critical fields are default_target_attribute, ignore_attribute, row_id_attribute.)

Specifically, this happens when the user: 1. is not the owner of the dataset, or 2. is the owner of the dataset, but the dataset has tasks.

In these two cases, the only way to edit critical fields is to: 1. fork the dataset using the fork_dataset API, and 2. call the edit_dataset API on the forked version.

Parameters:

Name Type Description Default
data_id int

id of the dataset to be forked

required

Returns:

Type Description
Dataset id of the forked dataset
Source code in openml/datasets/functions.py
def fork_dataset(data_id: int) -> int:
    """
     Creates a new dataset version, with the authenticated user as the new owner.
     The forked dataset can have distinct dataset meta-data,
     but the actual data itself is shared with the original version.

     This API is intended for use when a user is unable to edit the critical fields of a dataset
     through the edit_dataset API.
     (Critical fields are default_target_attribute, ignore_attribute, row_id_attribute.)

     Specifically, this happens when the user:
            1. is not the owner of the dataset, or
            2. is the owner of the dataset, but the dataset has tasks.

     In these two cases the only way to edit critical fields is to:
            1. fork the dataset using the fork_dataset API, and
            2. call the edit_dataset API on the forked version.


    Parameters
    ----------
    data_id : int
        id of the dataset to be forked

    Returns
    -------
    Dataset id of the forked dataset

    """
    if not isinstance(data_id, int):
        raise TypeError(f"`data_id` must be of type `int`, not {type(data_id)}.")
    # compose data fork parameters
    form_data = {"data_id": data_id}  # type: openml._api_calls.DATA_TYPE
    result_xml = openml._api_calls._perform_api_call("data/fork", "post", data=form_data)
    result = xmltodict.parse(result_xml)
    data_id = result["oml:data_fork"]["oml:id"]
    return int(data_id)
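
A sketch of the fork-then-edit workflow described above (the id and target name are placeholders):

import openml

# Step 1: fork the dataset so the authenticated user becomes the owner
forked_id = openml.datasets.fork_dataset(12345)

# Step 2: edit a critical field on the forked copy
openml.datasets.edit_dataset(forked_id, default_target_attribute="class")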

get_dataset(dataset_id, download_data=False, version=None, error_if_multiple=False, cache_format='pickle', download_qualities=False, download_features_meta_data=False, download_all_files=False, force_refresh_cache=False)

Download the OpenML dataset representation, optionally also download actual data file.

This function is by default NOT thread/multiprocessing safe, as this function uses caching. A check will be performed to determine if the information has previously been downloaded to a cache, and if so be loaded from disk instead of retrieved from the server.

To make this function thread safe, you can install the python package oslo.concurrency. If oslo.concurrency is installed get_dataset becomes thread safe.

Alternatively, to make this function thread/multiprocessing safe, initialize the cache first by calling get_dataset(args) once before calling get_dataset(args) many times in parallel. This will initialize the cache and later calls will use the cache in a thread/multiprocessing safe way.

If a dataset is retrieved by name, a version may be specified. If no version is specified and multiple versions of the dataset exist, the earliest version of the dataset that is still active will be returned. If no version is specified, multiple versions of the dataset exist, and error_if_multiple is set to True, this function will raise an exception.

Parameters:

Name Type Description Default
dataset_id int or str

Dataset ID (integer) or dataset name (string) of the dataset to download.

required
download_data bool(default=False)

If True, also download the data file. Beware that some datasets are large and it might make the operation noticeably slower. Metadata is also still retrieved. If False, create the OpenMLDataset and only populate it with the metadata. The data may later be retrieved through the OpenMLDataset.get_data method.

False
version (int, optional(default=None))

Specifies the version if dataset_id is specified by name. If no version is specified, retrieve the least recent still active version.

None
error_if_multiple bool(default=False)

If True raise an error if multiple datasets are found with matching criteria.

False
cache_format str(default='pickle') in {'pickle', 'feather'}

Format for caching the dataset: may be 'feather' or 'pickle'. Note that the default 'pickle' option may load more slowly than 'feather' when the number of rows is very high.

'pickle'
download_qualities bool(default=False)

Option to download 'qualities' meta-data in addition to the minimal dataset description. If True, download and cache the qualities file. If False, create the OpenMLDataset without qualities metadata. The data may later be added to the OpenMLDataset through the OpenMLDataset.load_metadata(qualities=True) method.

False
download_features_meta_data bool(default=False)

Option to download 'features' meta-data in addition to the minimal dataset description. If True, download and cache the features file. If False, create the OpenMLDataset without features metadata. The data may later be added to the OpenMLDataset through the OpenMLDataset.load_metadata(features=True) method.

False
download_all_files bool

EXPERIMENTAL. Download all files related to the dataset that reside on the server. Useful for datasets which refer to auxiliary files (e.g., meta-album).

False
force_refresh_cache bool(default=False)

Force the cache to be refreshed by deleting the cache directory and re-downloading the data. Note that if force_refresh_cache is True, get_dataset is NOT thread/multiprocessing safe, because it creates a race condition between creating and deleting the cache (as with cache use in general).

False

Returns:

Name Type Description
dataset :class:`openml.OpenMLDataset`

The downloaded dataset.

Source code in openml/datasets/functions.py
@openml.utils.thread_safe_if_oslo_installed
def get_dataset(  # noqa: C901, PLR0912
    dataset_id: int | str,
    download_data: bool = False,  # noqa: FBT002, FBT001
    version: int | None = None,
    error_if_multiple: bool = False,  # noqa: FBT002, FBT001
    cache_format: Literal["pickle", "feather"] = "pickle",
    download_qualities: bool = False,  # noqa: FBT002, FBT001
    download_features_meta_data: bool = False,  # noqa: FBT002, FBT001
    download_all_files: bool = False,  # noqa: FBT002, FBT001
    force_refresh_cache: bool = False,  # noqa: FBT001, FBT002
) -> OpenMLDataset:
    """Download the OpenML dataset representation, optionally also download actual data file.

    This function is by default NOT thread/multiprocessing safe, as this function uses caching.
    A check will be performed to determine if the information has previously been downloaded to a
    cache, and if so be loaded from disk instead of retrieved from the server.

    To make this function thread safe, you can install the python package ``oslo.concurrency``.
    If ``oslo.concurrency`` is installed `get_dataset` becomes thread safe.

    Alternatively, to make this function thread/multiprocessing safe, initialize the cache first by
    calling `get_dataset(args)` once before calling `get_dataset(args)` many times in parallel.
    This will initialize the cache and later calls will use the cache in a thread/multiprocessing
    safe way.

    If dataset is retrieved by name, a version may be specified.
    If no version is specified and multiple versions of the dataset exist,
    the earliest version of the dataset that is still active will be returned.
    If no version is specified, multiple versions of the dataset exist and
    ``error_if_multiple`` is set to ``True``, this function will raise an exception.

    Parameters
    ----------
    dataset_id : int or str
        Dataset ID (integer) or dataset name (string) of the dataset to download.
    download_data : bool (default=False)
        If True, also download the data file. Beware that some datasets are large and it might
        make the operation noticeably slower. Metadata is also still retrieved.
        If False, create the OpenMLDataset and only populate it with the metadata.
        The data may later be retrieved through the `OpenMLDataset.get_data` method.
    version : int, optional (default=None)
        Specifies the version if `dataset_id` is specified by name.
        If no version is specified, retrieve the least recent still active version.
    error_if_multiple : bool (default=False)
        If ``True`` raise an error if multiple datasets are found with matching criteria.
    cache_format : str (default='pickle') in {'pickle', 'feather'}
        Format for caching the dataset: may be 'feather' or 'pickle'.
        Note that the default 'pickle' option may load more slowly than 'feather'
        when the number of rows is very high.
    download_qualities : bool (default=False)
        Option to download 'qualities' meta-data in addition to the minimal dataset description.
        If True, download and cache the qualities file.
        If False, create the OpenMLDataset without qualities metadata. The data may later be added
        to the OpenMLDataset through the `OpenMLDataset.load_metadata(qualities=True)` method.
    download_features_meta_data : bool (default=False)
        Option to download 'features' meta-data in addition to the minimal dataset description.
        If True, download and cache the features file.
        If False, create the OpenMLDataset without features metadata. The data may later be added
        to the OpenMLDataset through the `OpenMLDataset.load_metadata(features=True)` method.
    download_all_files: bool (default=False)
        EXPERIMENTAL. Download all files related to the dataset that reside on the server.
        Useful for datasets which refer to auxiliary files (e.g., meta-album).
    force_refresh_cache : bool (default=False)
        Force the cache to be refreshed by deleting the cache directory and re-downloading
        the data. Note that if `force_refresh_cache` is True, `get_dataset` is NOT
        thread/multiprocessing safe, because it creates a race condition between creating
        and deleting the cache (as with cache use in general).

    Returns
    -------
    dataset : :class:`openml.OpenMLDataset`
        The downloaded dataset.
    """
    if download_all_files:
        warnings.warn(
            "``download_all_files`` is experimental and is likely to break with new releases.",
            FutureWarning,
            stacklevel=2,
        )

    if cache_format not in ["feather", "pickle"]:
        raise ValueError(
            "cache_format must be one of 'feather' or 'pickle. "
            f"Invalid format specified: {cache_format}",
        )

    if isinstance(dataset_id, str):
        try:
            dataset_id = int(dataset_id)
        except ValueError:
            dataset_id = _name_to_id(dataset_id, version, error_if_multiple)  # type: ignore
    elif not isinstance(dataset_id, int):
        raise TypeError(
            f"`dataset_id` must be one of `str` or `int`, not {type(dataset_id)}.",
        )

    if force_refresh_cache:
        did_cache_dir = _get_cache_dir_for_id(DATASETS_CACHE_DIR_NAME, dataset_id)
        if did_cache_dir.exists():
            _remove_cache_dir_for_id(DATASETS_CACHE_DIR_NAME, did_cache_dir)

    did_cache_dir = _create_cache_directory_for_id(
        DATASETS_CACHE_DIR_NAME,
        dataset_id,
    )

    remove_dataset_cache = True
    try:
        description = _get_dataset_description(did_cache_dir, dataset_id)
        features_file = None
        qualities_file = None

        if download_features_meta_data:
            features_file = _get_dataset_features_file(did_cache_dir, dataset_id)
        if download_qualities:
            qualities_file = _get_dataset_qualities_file(did_cache_dir, dataset_id)

        parquet_file = None
        skip_parquet = os.environ.get(OPENML_SKIP_PARQUET_ENV_VAR, "false").casefold() == "true"
        download_parquet = "oml:parquet_url" in description and not skip_parquet
        if download_parquet and (download_data or download_all_files):
            try:
                parquet_file = _get_dataset_parquet(
                    description,
                    download_all_files=download_all_files,
                )
            except urllib3.exceptions.MaxRetryError:
                parquet_file = None

        arff_file = None
        if parquet_file is None and download_data:
            if download_parquet:
                logger.warning("Failed to download parquet, fallback on ARFF.")
            arff_file = _get_dataset_arff(description)

        remove_dataset_cache = False
    except OpenMLServerException as e:
        # if there was an exception
        # check if the user had access to the dataset
        if e.code == NO_ACCESS_GRANTED_ERRCODE:
            raise OpenMLPrivateDatasetError(e.message) from None

        raise e
    finally:
        if remove_dataset_cache:
            _remove_cache_dir_for_id(DATASETS_CACHE_DIR_NAME, did_cache_dir)

    return _create_dataset_from_description(
        description,
        features_file,
        qualities_file,
        arff_file,
        parquet_file,
        cache_format,
    )
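
A short sketch contrasting a metadata-only download with a full download (dataset id 61 is assumed to be iris):

import openml

# Metadata only (fast); the data file is fetched lazily by get_data()
dataset = openml.datasets.get_dataset(61)

# Download the data file up front, together with qualities and feature metadata
dataset = openml.datasets.get_dataset(
    61,
    download_data=True,
    download_qualities=True,
    download_features_meta_data=True,
)

X, y, categorical_mask, attribute_names = dataset.get_data(
    target=dataset.default_target_attribute,
)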

get_datasets(dataset_ids, download_data=False, download_qualities=False)

Download datasets.

This function iterates :meth:openml.datasets.get_dataset.

Parameters:

Name Type Description Default
dataset_ids iterable

Integers or strings representing dataset ids or dataset names. If dataset names are specified, the least recent still active dataset version is returned.

required
download_data bool

If True, also download the data file. Beware that some datasets are large and it might make the operation noticeably slower. Metadata is also still retrieved. If False, create the OpenMLDataset and only populate it with the metadata. The data may later be retrieved through the OpenMLDataset.get_data method.

False
download_qualities (bool, optional(default=False))

If True, also download the qualities.xml file. If False, skip the qualities.xml file.

False

Returns:

Name Type Description
datasets list of datasets

A list of dataset objects.

Source code in openml/datasets/functions.py
def get_datasets(
    dataset_ids: list[str | int],
    download_data: bool = False,  # noqa: FBT001, FBT002
    download_qualities: bool = False,  # noqa: FBT001, FBT002
) -> list[OpenMLDataset]:
    """Download datasets.

    This function iterates :meth:`openml.datasets.get_dataset`.

    Parameters
    ----------
    dataset_ids : iterable
        Integers or strings representing dataset ids or dataset names.
        If dataset names are specified, the least recent still active dataset version is returned.
    download_data : bool, optional
        If True, also download the data file. Beware that some datasets are large and it might
        make the operation noticeably slower. Metadata is also still retrieved.
        If False, create the OpenMLDataset and only populate it with the metadata.
        The data may later be retrieved through the `OpenMLDataset.get_data` method.
    download_qualities : bool, optional (default=False)
        If True, also download the qualities.xml file. If False, skip the qualities.xml file.

    Returns
    -------
    datasets : list of datasets
        A list of dataset objects.
    """
    datasets = []
    for dataset_id in dataset_ids:
        datasets.append(
            get_dataset(dataset_id, download_data, download_qualities=download_qualities),
        )
    return datasets
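
A brief sketch (the ids are arbitrary; per the description above, dataset names may be mixed in as strings):

import openml

datasets = openml.datasets.get_datasets([2, 61], download_data=False)
for ds in datasets:
    print(ds.dataset_id, ds.name)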

list_datasets(data_id=None, offset=None, size=None, status=None, tag=None, data_name=None, data_version=None, number_instances=None, number_features=None, number_classes=None, number_missing_values=None)

Return a dataframe of all datasets which are on OpenML.

Supports large amounts of results.

Parameters:

Name Type Description Default
data_id list

A list of data ids, to specify which datasets should be listed

None
offset int

The number of datasets to skip, starting from the first.

None
size int

The maximum number of datasets to show.

None
status str

Should be {active, in_preparation, deactivated}. By default active datasets are returned, but also datasets from another status can be requested.

None
tag str
None
data_name str
None
data_version int
None
number_instances int | str
None
number_features int | str
None
number_classes int | str
None
number_missing_values int | str
None

Returns:

Name Type Description
datasets dataframe

Each row maps to a dataset. Each column contains the following information: dataset id, name, format, and status. If qualities are calculated for the dataset, some of these are also included as columns.

Source code in openml/datasets/functions.py
def list_datasets(
    data_id: list[int] | None = None,
    offset: int | None = None,
    size: int | None = None,
    status: str | None = None,
    tag: str | None = None,
    data_name: str | None = None,
    data_version: int | None = None,
    number_instances: int | str | None = None,
    number_features: int | str | None = None,
    number_classes: int | str | None = None,
    number_missing_values: int | str | None = None,
) -> pd.DataFrame:
    """Return a dataframe of all dataset which are on OpenML.

    Supports large amount of results.

    Parameters
    ----------
    data_id : list, optional
        A list of data ids, to specify which datasets should be
        listed
    offset : int, optional
        The number of datasets to skip, starting from the first.
    size : int, optional
        The maximum number of datasets to show.
    status : str, optional
        Should be {active, in_preparation, deactivated}. By
        default active datasets are returned, but also datasets
        from another status can be requested.
    tag : str, optional
    data_name : str, optional
    data_version : int, optional
    number_instances : int | str, optional
    number_features : int | str, optional
    number_classes : int | str, optional
    number_missing_values : int | str, optional

    Returns
    -------
    datasets: dataframe
        Each row maps to a dataset
        Each column contains the following information:
        - dataset id
        - name
        - format
        - status
        If qualities are calculated for the dataset, some of
        these are also included as columns.
    """
    listing_call = partial(
        _list_datasets,
        data_id=data_id,
        status=status,
        tag=tag,
        data_name=data_name,
        data_version=data_version,
        number_instances=number_instances,
        number_features=number_features,
        number_classes=number_classes,
        number_missing_values=number_missing_values,
    )
    batches = openml.utils._list_all(listing_call, offset=offset, limit=size)
    if len(batches) == 0:
        return pd.DataFrame()

    return pd.concat(batches)
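
A sketch of the listing call (the exact columns in the result depend on which qualities the server has computed, so treat them as an assumption):

import openml

# The first 100 active datasets, returned as a pandas DataFrame indexed by dataset id
# (this index is what check_datasets_active relies on above)
df = openml.datasets.list_datasets(status="active", size=100)
print(df.columns.tolist())
print(df.head())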

list_qualities()

Return list of data qualities available.

The function performs an API call to retrieve the entire list of data qualities that are computed on the datasets uploaded.

Returns:

Type Description
list
Source code in openml/datasets/functions.py
def list_qualities() -> list[str]:
    """Return list of data qualities available.

    The function performs an API call to retrieve the entire list of
    data qualities that are computed on the datasets uploaded.

    Returns
    -------
    list
    """
    api_call = "data/qualities/list"
    xml_string = openml._api_calls._perform_api_call(api_call, "get")
    qualities = xmltodict.parse(xml_string, force_list=("oml:quality"))
    # Minimalistic check if the XML is useful
    if "oml:data_qualities_list" not in qualities:
        raise ValueError('Error in return XML, does not contain "oml:data_qualities_list"')

    if not isinstance(qualities["oml:data_qualities_list"]["oml:quality"], list):
        raise TypeError('Error in return XML, does not contain "oml:quality" as a list')

    return qualities["oml:data_qualities_list"]["oml:quality"]
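
A sketch:

import openml

qualities = openml.datasets.list_qualities()
print(len(qualities), qualities[:3])  # e.g. quality names such as 'NumberOfInstances'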

status_update(data_id, status)

Updates the status of a dataset to either 'active' or 'deactivated'. Please see the OpenML API documentation for a description of the status and all legal status transitions: https://docs.openml.org/concepts/data/#dataset-status

Parameters:

Name Type Description Default
data_id int

The data id of the dataset

required
status (str,)

'active' or 'deactivated'

required
Source code in openml/datasets/functions.py
def status_update(data_id: int, status: Literal["active", "deactivated"]) -> None:
    """
    Updates the status of a dataset to either 'active' or 'deactivated'.
    Please see the OpenML API documentation for a description of the status
    and all legal status transitions:
    https://docs.openml.org/concepts/data/#dataset-status

    Parameters
    ----------
    data_id : int
        The data id of the dataset
    status : str,
        'active' or 'deactivated'
    """
    legal_status = {"active", "deactivated"}
    if status not in legal_status:
        raise ValueError(f"Illegal status value. Legal values: {legal_status}")

    data: openml._api_calls.DATA_TYPE = {"data_id": data_id, "status": status}
    result_xml = openml._api_calls._perform_api_call("data/status/update", "post", data=data)
    result = xmltodict.parse(result_xml)
    server_data_id = result["oml:data_status_update"]["oml:id"]
    server_status = result["oml:data_status_update"]["oml:status"]
    if status != server_status or int(data_id) != int(server_data_id):
        # This should never happen
        raise ValueError("Data id/status does not collide")