LLM Social Simulations Are a Promising Research Method
DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
Can We Trust AI Benchmarks? An Interdisciplinary Review of Current Issues in AI Evaluation
Exploring the limits of strong membership inference attacks on large language models
Trust and Friction: Negotiating How Information Flows Through Decentralized Social Media
Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development
Extending "GPTs Are GPTs" to Firms
OLMoTrace: Tracing Language Model Outputs Back to Trillions of Training Tokens
Cybernetics
Cybernetics is the transdisciplinary study of circular causal processes such as feedback and recursion, where the effects of a system's actions (its outputs) return as inputs to that system, influencing subsequent action. It is concerned with general principles that are relevant across multiple contexts, including in engineering, ecological, economic, biological, cognitive and social systems and also in practical activities such as designing, learning, and managing. Cybernetics' transdisciplinary character has meant that it intersects with a number of other fields, leading to it having both wide influence and diverse interpretations. The field is named after an example of circular causal feedback—that of steering a ship (the ancient Greek κυβερνήτης (kybernḗtēs) refers to the person who steers a ship). In steering a ship, the position of the rudder is adjusted in continual response to the effect it is observed as having, forming a feedback loop through which a steady course can be maintained in a changing environment, responding to disturbances from cross winds and tide. Cybernetics has its origins in exchanges between numerous disciplines during the 1940s. Initial developments were consolidated through meetings such as the Macy Conferences and the Ratio Club. Early focuses included purposeful behaviour, neural networks, heterarchy, information theory, and self-organising systems. As cybernetics developed, it became broader in scope to include work in design, family therapy, management and organisation, pedagogy, sociology, the creative arts and the counterculture.
The Leaderboard Illusion
If open source is to win, it must go public
Canada as a Champion for Public AI: Data, Compute and Open Source Infrastructure for Economic Growth and Inclusive Innovation
Quantitative Analysis of AI-Generated Texts in Academic Research: A Study of AI Presence in Arxiv Submissions using AI Detection Tool
The Illusion of Artificial Inclusion
The Multilingual Alignment Prism: Aligning Global and Local Preferences to Reduce Harm
Machines of Loving Grace: How AI Could Transform the World for the Better
To Code, or Not To Code? Exploring Impact of Code in Pre-training
The Rise of AI-Generated Content in Wikipedia
Poisoning Web-Scale Training Datasets is Practical
What is Your Data Worth to GPT? LLM-Scale Data Valuation with Influence Functions
Large language models reduce public knowledge sharing on online Q&A platforms
Direct Preference Optimization: Your Language Model is Secretly a Reward Model
GPTs are GPTs: Labor Market Impact Potential of LLMs
Artificial Intelligence Act
Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
Public AI: Infrastructure for the Common Good
ANSI/NISO Z39.96-2024, JATS: Journal Article Tag Suite
Wikimedia data for AI: a review of Wikimedia datasets for NLP tasks and AI-assisted editing
Data Flywheel Go Brrr: Using Your Users to Build Better Products - Jason Liu
Explore how data flywheels leverage user feedback to enhance product development and achieve business success with AI.
Consent in Crisis: The Rapid Decline of the AI Data Commons
StarCoder 2 and The Stack v2: The Next Generation
LLM Dataset Inference: Did you train on my dataset?
Public AI: Making AI Work for Everyone, by Everyone
Scalable Data Ablation Approximations for Language Models through Modular Training and Merging
Generative AI Profile (Draft/2024)
A Canary in the AI Coal Mine: American Jews May Be Disproportionately Harmed by Intellectual Property Dispossession in Large Language Model Training
What is a Data Flywheel? A Guide to Sustainable Business Growth
The data addition dilemma
Data Flywheels for LLM Applications
Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research
Copyright and Artificial Intelligence: Policy Studies and Guidance
Aya Model: An Instruction Finetuned Open-Access Multilingual Language Model
Push and Pull: A Framework for Measuring Attentional Agency
A Systematic Review of NeurIPS Dataset Management Practices
Data-Sharing Markets: Model, Protocol, and Algorithms to Incentivize the Formation of Data-Sharing Consortia
Alpaca: A Strong, Replicable Instruction-Following Model
LEACE: Perfect linear concept erasure in closed form
Quantifying Memorization Across Neural Language Models
Understanding CC Licenses and Generative AI
Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT-4
Wikipedia's value in the age of generative AI
If there were a generative artificial intelligence system that could, on its own, write all the information contained in Wikipedia, would it be the same as Wikipedia today?
Algorithmic Collective Action in Machine Learning
Provides a theoretical framework for algorithmic collective action, showing that small collectives can exert significant control over platform learning algorithms through coordinated data strategies.
ISO/IEC 23894:2023 Information Technology—Artificial Intelligence—Risk Management
A Watermark for Large Language Models
The Dimensions of Data Labor: A Road Map for Researchers, Activists, and Policymakers to Empower Data Producers
Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment
Textbooks Are All You Need II: phi-1.5 technical report
SILO Language Models: Isolating Legal Risk In a Nonparametric Datastore
Artificial Intelligence Risk Management Framework (AI RMF 1.0)
OWASP Top 10 for Large Language Model Applications
TRAK: Attributing Model Behavior at Scale
Introduces TRAK (Tracing with the Randomly-projected After Kernel), a data attribution method that is both effective and computationally tractable for large-scale models by leveraging random projections.
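At a high level (a simplified sketch; the full estimator also ensembles over independently trained models and includes a residual reweighting term), TRAK forms per-example features by randomly projecting model gradients and then scores attribution with a linear-model estimator:

\[
\phi(z) = P^{\top} \nabla_{\theta} f(z; \theta^{*}), \qquad P \in \mathbb{R}^{p \times k},\; k \ll p,
\qquad
\tau(z_{\text{test}}) \approx \Phi \,(\Phi^{\top}\Phi)^{-1}\, \phi(z_{\text{test}}),
\]

where \(\Phi\) stacks the projected gradients of the training examples and \(P\) has i.i.d. random entries, so inner products are approximately preserved (Johnson-Lindenstrauss).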
Terms-we-serve-with: Five dimensions for anticipating and repairing algorithmic harm
Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction
Understanding the landscape of potential harms from algorithmic systems enables practitioners to better anticipate consequences of the systems they build. It also supports the prospect of incorporating controls to help minimize harms that emerge from the interplay of technologies and social and cultural dynamics. A growing body of scholarship has identified a wide range of harms across different algorithmic technologies. However, computing research and practitioners lack a high level and synthesized overview of harms from algorithmic systems. Based on a scoping review of computing research (n=172), we present an applied taxonomy of sociotechnical harms to support a more systematic surfacing of potential harms in algorithmic systems. The final taxonomy builds on and refers to existing taxonomies, classifications, and terminologies. Five major themes related to sociotechnical harms — representational, allocative, quality-of-service, interpersonal harms, and social system/societal harms — and sub-themes are presented along with a description of these categories. We conclude with a discussion of challenges and opportunities for future research.
An Alternative to Regulation: The Case for Public AI
Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity
The Fallacy of AI Functionality
Common Crawl — Web-scale Data for Research
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Comprehensive survey systematically categorizing dataset vulnerabilities including poisoning and backdoor attacks, their threat models, and defense mechanisms.
Training Data Influence Analysis and Estimation: A Survey
DeepCore: A Comprehensive Library for Coreset Selection in Deep Learning
Comprehensive library and empirical study of coreset selection methods for deep learning, finding that random selection remains a strong baseline across many settings.
Datamodels: Predicting Predictions from Training Data
Proposes datamodels that predict model outputs as a function of training data subsets, providing a framework for understanding data attribution through retraining experiments.
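Concretely (notation follows the paper loosely), for a fixed test input \(x\), many models are retrained on random subsets \(S\) of the training set, an output statistic \(f(x; S)\) (e.g., the correct-class margin) is recorded, and a linear datamodel is fit by sparse regression on the subset indicator:

\[
f(x; S) \approx \theta_x^{\top} \mathbf{1}_S + \beta_x,
\]

where \(\mathbf{1}_S \in \{0,1\}^n\) marks the included examples; the learned weights \(\theta_x\) act as per-example attribution scores for \(x\).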
Beta Shapley: a Unified and Noise-reduced Data Valuation Framework for Machine Learning
Generalizes Data Shapley using Beta weighting functions, providing noise-reduced data valuation that better handles outliers and mislabeled data detection.
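In semivalue form (weights shown up to normalization), the value of datum \(i\) is a cardinality-weighted average of its marginal contributions:

\[
\phi_i = \sum_{j=1}^{n} w_j \,\mathbb{E}_{S \subseteq D \setminus \{i\},\, |S| = j-1} \bigl[ V(S \cup \{i\}) - V(S) \bigr],
\]

where Beta Shapley takes the \(w_j\) from a Beta\((\alpha, \beta)\) density over cardinalities; the uniform choice \(\alpha = \beta = 1\) recovers the ordinary Shapley value, while other choices down-weight the noisier large-cardinality marginals.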
LAION-5B: A New Era of Open Large-Scale Multi-Modal Datasets
Training language models to follow instructions with human feedback
Probabilistic Machine Learning: An introduction
Releasing Re-LAION-5B
Why Black Box Machine Learning Should Be Avoided for High-Stakes Decisions, in Brief
LAION-5B: An Open Large-Scale Dataset for Training Next CLIP Models
Beyond neural scaling laws: beating power law scaling via data pruning
The Stack: A Permissively Licensed Source Code Dataset
Introducing Whisper
Robust Speech Recognition via Large-Scale Weak Supervision
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
Extracting Training Data from Large Language Models
Unsolved Problems in ML Safety
What's in the Box? An Analysis of Undesirable Content in the Common Crawl Corpus
Measuring Mathematical Problem Solving With the MATH Dataset
The Pile: An 800GB Dataset of Diverse Text for Language Modeling
What are you optimizing for? Aligning Recommender Systems with Human Values
Quantifying the Invisible Labor in Crowd Work
Can "Conscious Data Contribution" Help Users to Exert "Data Leverage" Against Technology Companies?
Data Leverage: A Framework for Empowering the Public in its Relationship with Technology Companies
A Deeper Investigation of the Importance of Wikipedia Links to Search Engine Results
Ethical and Social Risks of Harm from Language Models
Language (Technology) is Power: A Critical Survey of “Bias” in NLP
Language Models are Few-Shot Learners
Artificial Intelligence, Values, and Alignment
Lessons from Archives: Strategies for Collecting Sociocultural Data in Machine Learning
Scaling Laws for Neural Language Models
Establishes power-law scaling relationships between language model performance and model size, dataset size, and compute, spanning seven orders of magnitude.
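The fitted relationships take the power-law form reported in the paper (exponents approximate):

\[
L(N) = \left(\tfrac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) = \left(\tfrac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C_{\min}) = \left(\tfrac{C_c}{C_{\min}}\right)^{\alpha_C},
\]

with \(\alpha_N \approx 0.076\), \(\alpha_D \approx 0.095\), and \(\alpha_C \approx 0.050\) for non-embedding parameter count \(N\), dataset size in tokens \(D\), and optimally allocated compute \(C_{\min}\); each law holds when the other two factors are not bottlenecks.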
Exploring Research Interest in Stack Overflow – A Systematic Mapping Study and Quality Evaluation
Coresets for Data-efficient Training of Machine Learning Models
Introduces CRAIG (Coresets for Accelerating Incremental Gradient descent), selecting subsets that approximate full gradient for 2-3x training speedups while maintaining performance.
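The selection is driven by an upper bound on the gradient approximation error: assigning every point to its nearest selected element in gradient space gives, for any parameters \(w\),

\[
\Bigl\| \sum_{i \in V} \nabla f_i(w) - \sum_{j \in S} \gamma_j \nabla f_j(w) \Bigr\|
\le \sum_{i \in V} \min_{j \in S} \bigl\| \nabla f_i(w) - \nabla f_j(w) \bigr\|,
\]

where \(\gamma_j\) counts the points assigned to \(j\); minimizing the right-hand side is a facility-location (submodular) problem that CRAIG solves greedily.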
The Economics of Maps
Estimating Training Data Influence by Tracing Gradient Descent
Introduces TracIn, which computes influence of training examples by tracing how test loss changes during training. Uses first-order gradient approximation and saved checkpoints for scalability.
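The checkpoint estimator from the paper (TracInCP) sums gradient dot products over saved checkpoints:

\[
\operatorname{TracInCP}(z, z') = \sum_{i} \eta_i \,\nabla_w \ell(w_{t_i}, z) \cdot \nabla_w \ell(w_{t_i}, z'),
\]

where \(z\) is a training example, \(z'\) a test example, \(w_{t_i}\) the saved checkpoints, and \(\eta_i\) the matching learning rates; strongly positive scores mark proponents of the test prediction, strongly negative scores mark opponents.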
The Pushshift Reddit Dataset
Are anonymity-seekers just like everybody else? An analysis of contributions to Wikipedia from Tor
In Pursuit of Interpretable, Fair and Accurate Machine Learning for Criminal Recidivism Prediction
Reconciling modern machine-learning practice and the classical bias–variance trade-off
Common Voice: A Massively-Multilingual Speech Corpus
The Secret Sharer: Measuring Unintended Memorization in Neural Networks
Excavating AI: The Politics of Images in Machine Learning Training Sets
Ecosystem Tipping Points in an Evolving World
Data Shapley: Equitable Valuation of Data for Machine Learning
Proposes data Shapley as a metric to quantify the value of each training datum to predictor performance, satisfying equitable data valuation properties from cooperative game theory.
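The value of datum \(i\) is its Shapley value under a performance score \(V\) (e.g., validation accuracy):

\[
\phi_i = \frac{1}{n} \sum_{S \subseteq D \setminus \{i\}} \binom{n-1}{|S|}^{-1} \bigl( V(S \cup \{i\}) - V(S) \bigr),
\]

i.e., \(i\)'s marginal contribution averaged over all subsets; because the sum is exponential in \(n\), the paper estimates it by Monte Carlo sampling over permutations.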
Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects
Incomplete Contracting and AI Alignment
HowTo100M: Learning a Text-Video Embedding by Watching Hundred Million Narrated Video Clips
Towards Efficient Data Valuation Based on the Shapley Value
On the Accuracy of Influence Functions for Measuring Group Effects
Privacy, anonymity, and perceived risk in open collaboration: A study of service providers
Model Cards for Model Reporting
Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations
Rosenbach v. Six Flags Entertainment Corp.
Fairness and Abstraction in Sociotechnical Systems
Mapping the Potential and Pitfalls of "Data Dividends" as a Means of Sharing the Profits of Artificial Intelligence
"Data Strikes": Evaluating the Effectiveness of a New Form of Collective Action Against Technology Companies
Simulates data strikes against recommender systems, showing that collective withholding of training data can create leverage for users against technology platforms.
Measuring the Importance of User-Generated Content to Search Engines
A Reductions Approach to Fair Classification
Should We Treat Data as Labor? Moving Beyond 'Free'
Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science
Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification
Datasheets for Datasets
The Dataset Nutrition Label: A Framework To Drive Higher Data Quality Standards
Troubling Trends in Machine Learning Scholarship
A Blueprint for a Better Digital Society
Active Learning for Convolutional Neural Networks: A Core-Set Approach
Defines active learning as core-set selection, choosing points such that a model trained on the subset is competitive for remaining data. Provides theoretical bounds via k-Center problem.
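The criterion is the k-Center (minimax facility location) objective: pick a budget-\(b\) subset so that every point is close to some selected point,

\[
\min_{S :\, |S| \le b} \;\max_{i} \;\min_{j \in S} \Delta(x_i, x_j),
\]

which the paper approximates with the standard greedy 2-approximation (repeatedly adding the point farthest from the current selection).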
Artificial Intelligence and Its Implications for Income Distribution and Unemployment
The Substantial Interdependence of Wikipedia and Google: A Case Study on the Relationship Between Peer Production Communities and Information Technologies
Deep learning scaling is predictable, empirically
The WARC Format 1.1
Understanding Black-box Predictions via Influence Functions
Uses influence functions from robust statistics to trace model predictions back to training data, identifying training points most responsible for a given prediction.
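The central quantity is the influence of upweighting a training point \(z\) on the loss at a test point \(z_{\text{test}}\):

\[
\mathcal{I}_{\text{up,loss}}(z, z_{\text{test}}) = -\nabla_{\theta} L(z_{\text{test}}, \hat{\theta})^{\top} H_{\hat{\theta}}^{-1} \nabla_{\theta} L(z, \hat{\theta}),
\qquad
H_{\hat{\theta}} = \frac{1}{n} \sum_{i=1}^{n} \nabla_{\theta}^{2} L(z_i, \hat{\theta}),
\]

which approximates the effect of removing \(z\) without retraining; the Hessian inverse is never formed explicitly but applied via Hessian-vector products.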
Big Data's Disparate Impact
General Data Protection Regulation (EU) 2016/679
What's wrong with social simulations?
The Algorithmic Foundations of Differential Privacy
Children's Online Privacy Protection Rule (COPPA) — 16 CFR Part 312
The Future of Crowd Work
Poisoning Attacks against Support Vector Machines
Investigates poisoning attacks against SVMs where adversaries inject crafted training data to increase test error. Uses gradient ascent to construct malicious data points.
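Schematically, the attack ascends the learner's validation loss with respect to the poison point, differentiating through the SVM solution via its KKT conditions:

\[
x_c^{(k+1)} = x_c^{(k)} + \eta \,\nabla_{x_c} L_{\text{val}}\bigl(\hat{w}(x_c^{(k)})\bigr),
\]

where \(\hat{w}(x_c)\) is the SVM trained with the poison point included, so each step involves an implicit-differentiation (or retraining) inner solve.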
Guide to Protecting the Confidentiality of Personally Identifiable Information (PII)
Biometric Information Privacy Act (BIPA), 740 ILCS 14
Robust De-anonymization of Large Sparse Datasets
Privacy as Contextual Integrity
Modeling Complexity: The Limits to Prediction
HIPAA Privacy Rule — 45 CFR Parts 160 and 164
Simple Demographics Often Identify People Uniquely
Social Dilemmas: The Anatomy of Cooperation
The study of social dilemmas is the study of the tension between individual and collective rationality. In a social dilemma, individually reasonable behavior leads to a situation in which everyone is worse off. The first part of this review is a discussion of categories of social dilemmas and how they are modeled. The key two-person social dilemmas (Prisoner’s Dilemma, Assurance, Chicken) and multiple-person social dilemmas (public goods dilemmas and commons dilemmas) are examined. The second part is an extended treatment of possible solutions for social dilemmas. These solutions are organized into three broad categories based on whether the solutions assume egoistic actors and whether the structure of the situation can be changed: Motivational solutions assume actors are not completely egoistic and so give some weight to the outcomes of their partners. Strategic solutions assume egoistic actors, and neither of these categories of solutions involve changing the fundamental structure of the situation. Solutions that do involve changing the rules of the game are considered in the section on structural solutions. I conclude the review with a discussion of current research and directions for future work.
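The two-person structure is easiest to see in the Prisoner's Dilemma payoff ordering \(T > R > P > S\):

\[
\begin{array}{c|cc}
 & \text{Cooperate} & \text{Defect} \\
\hline
\text{Cooperate} & (R,\, R) & (S,\, T) \\
\text{Defect} & (T,\, S) & (P,\, P)
\end{array}
\]

Defection strictly dominates for each player (\(T > R\) and \(P > S\)), yet mutual defection \((P, P)\) leaves both worse off than mutual cooperation \((R, R)\): exactly the tension between individual and collective rationality described above.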