
# US Attention Data

License: MIT

Weekly cross-platform attention metrics for tracking how much the world pays attention to the United States. Combines Wikipedia pageviews, GDELT global event mentions, and Google Trends search interest from 2020-2025.

I built this dataset for the one-year visualization project, which maps US global sentiment over time. Part of the Data Trove collection.


## What's Inside

| File | Size | Description |
|---|---|---|
| wikipedia_pageviews.json | 2.5 MB | Daily pageview counts for US-related Wikipedia articles |
| wikipedia_event_articles.json | 214 KB | Event-linked article metadata |
| wikipedia_trending.json | 256 KB | Trending article detection |
| trends_data.json | 810 KB | Google Trends search interest over time |
| weekly_trends.json | 26 KB | Weekly trending topic aggregations |
| gdelt_timeline.json | 131 KB | GDELT event mention timelines |
| gdelt_weekly_events.json | 158 KB | GDELT weekly aggregated event counts and tone |
| events_unified.json | 89 KB | Unified event data across all sources |
| weekly_attention_timeline.json | 57 KB | Combined weekly attention metrics |
| unified_data.json | 27 KB | Merged dataset across all attention sources |
| attention_metadata.json | 2 KB | Collection metadata and schema |

Total: ~4.2 MB


## Quick Start

### Python

```python
import json

with open("wikipedia_pageviews.json") as f:
    pageviews = json.load(f)

# Weekly attention across all sources
with open("weekly_attention_timeline.json") as f:
    timeline = json.load(f)
```
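Since every file in the table above is plain JSON, the whole dataset can be pulled into memory at once. The `load_all` helper below is not part of the dataset; it is a small sketch that assumes the eleven `.json` files sit together in one directory.

```python
import json
from pathlib import Path

def load_all(data_dir):
    """Load every JSON file in data_dir into a dict keyed by file stem.

    Hypothetical convenience helper (not shipped with the dataset);
    assumes the dataset's .json files all live in `data_dir`.
    """
    return {
        path.stem: json.loads(path.read_text())
        for path in sorted(Path(data_dir).glob("*.json"))
    }
```

For example, `load_all(".")["weekly_attention_timeline"]` would return the parsed contents of `weekly_attention_timeline.json`.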

### D3.js

```javascript
const pageviews = await d3.json("wikipedia_pageviews.json");
const gdelt = await d3.json("gdelt_weekly_events.json");
```

## Data Sources

| Source | What It Tracks | Coverage |
|---|---|---|
| Wikipedia Pageviews API | Article view counts | 2020-2025, daily |
| GDELT Project | Global event mentions and media tone | 2020-2025, weekly |
| Google Trends | Search interest indices | 2020-2025, weekly |

## Use Cases

- Tracking how global attention to the US shifts over time
- Correlating media events with Wikipedia traffic and search interest
- Identifying seasonal attention patterns (elections, holidays, crises)
- Building composite attention indices from multiple independent signals
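The last use case can be sketched with min-max normalization: scale each signal to [0, 1], then average the scaled values per week. The field names below (`pageviews`, `mentions`, `search_interest`) are illustrative stand-ins, not the dataset's actual schema; check `attention_metadata.json` for the real keys.

```python
def minmax(values):
    """Scale a list of numbers to the [0, 1] range."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(weeks):
    """Average the normalized signals into one attention score per week.

    `weeks` is a list of per-week dicts with hypothetical keys
    ('pageviews', 'mentions', 'search_interest'); adapt to the schema
    in attention_metadata.json.
    """
    signals = ["pageviews", "mentions", "search_interest"]
    normalized = {s: minmax([w[s] for w in weeks]) for s in signals}
    return [
        sum(normalized[s][i] for s in signals) / len(signals)
        for i in range(len(weeks))
    ]

sample = [
    {"pageviews": 100, "mentions": 10, "search_interest": 40},
    {"pageviews": 300, "mentions": 30, "search_interest": 80},
    {"pageviews": 200, "mentions": 20, "search_interest": 60},
]
# composite_index(sample) -> [0.0, 1.0, 0.5]
```

Min-max scaling is a deliberate simplification here: it keeps the three signals on a common scale without assuming anything about their distributions, at the cost of being sensitive to outlier weeks.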

## Related


## Author

Luke Steuber -- @lukesteuber.com on Bluesky

## License

MIT. See LICENSE.

Data sourced from Wikipedia (CC BY-SA), GDELT (open), and Google Trends (fair use for research).
