MiniLongBench / data /multifieldqa_en.jsonl
linggm's picture
Initial commit
0ba7bf4 verified
{"input": "Is the ISR necessary for transgene reactivation?", "context": "Current address: Division of Brain Sciences, Department of Medicine, Imperial College London, London, United Kingdom.\nIn a variety of species, reduced food intake, and in particular protein or amino acid (AA) restriction, extends lifespan and healthspan. However, the underlying epigenetic and/or transcriptional mechanisms are largely unknown, and dissection of specific pathways in cultured cells may contribute to filling this gap. We have previously shown that, in mammalian cells, deprivation of essential AAs (methionine/cysteine or tyrosine) leads to the transcriptional reactivation of integrated silenced transgenes, including plasmid and retroviral vectors and latent HIV-1 provirus, by a process involving epigenetic chromatic remodeling and histone acetylation. Here we show that the deprivation of methionine/cysteine also leads to the transcriptional upregulation of endogenous retroviruses, suggesting that essential AA starvation affects the expression not only of exogenous non-native DNA sequences, but also of endogenous anciently-integrated and silenced parasitic elements of the genome. Moreover, we show that the transgene reactivation response is highly conserved in different mammalian cell types, and it is reproducible with deprivation of most essential AAs. The General Control Non-derepressible 2 (GCN2) kinase and the downstream integrated stress response represent the best candidates mediating this process; however, by pharmacological approaches, RNA interference and genomic editing, we demonstrate that they are not implicated. Instead, the response requires MEK/ERK and/or JNK activity and is reproduced by ribosomal inhibitors, suggesting that it is triggered by a novel nutrient-sensing and signaling pathway, initiated by translational block at the ribosome, and independent of mTOR and GCN2. Overall, these findings point to a general transcriptional response to essential AA deprivation, which affects the expression of non-native genomic sequences, with relevant implications for the epigenetic/transcriptional effects of AA restriction in health and disease.\nCopyright: © 2018 De Vito et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.\nData Availability: All relevant data are within the paper and its Supporting Information files. RNAseq data are available in the ArrayExpress database under the accession number E-MTAB-6452.\nFunding: This study was funded by the Ajinomoto Innovation Alliance Program, (AIAP; https://www.ajinomoto.com/en/rd/AIAP/index.html#aiap) (to M.V.S and D.G), which is a joint research initiative of Ajinomoto Co., Inc., Japan. One of the authors [M.B.] is an employee of Ajinomoto Co., and his specific roles are articulated in the ‘author contributions’ section. The commercial funder provided support in the form of salary for author [M.B.] and some of the necessary research materials (medium for cell culture), but did not have any additional role in the study design, data collection and analysis, or preparation of the manuscript, and the authors had unrestricted access to the data. Due to a confidentiality agreement, the commercial funder participated only in the decision to publish the data obtained during the study, without any restriction.\nCompeting interests: This study was funded by Ajinomoto Co., Inc., Japan and one of the authors [M.B.] is an employee of this commercial funder. No other employment or consultancy relationships exist with the commercial funder, and no patents, products in development, or marketed products result from this study. The authors declare that no competing interests exist and that the commercial affiliation of one of the authors does not alter the adherence of authors to all PLOS ONE policies on sharing data and materials.\nIn animals, excessive, insufficient, or imbalanced nutrient availability is known to strongly impact on phenotype and health, both short and long-term, and across generations [1, 2]. In particular, studies in yeast, animal models and humans have shown that reduced food intake, reducing either overall calories, or only sugars, proteins, or even single amino acids (AA), such as Methionine (Met), may extend lifespan and healthspan, and reduce the risk of cancer and other age-related diseases [3–9]. In addition, fasting or specific AA deprivation have shown potential therapeutic applications, owing to their ability to directly reduce the growth of some tumor types [10, 11], sensitize cancer cells to chemo- or immunotherapy [12, 13], and allow efficient hematopoietic stem cell engraftment . However, little is known about the specific processes and molecular mechanisms mediating the roles of nutrient restriction in human health and longevity.\nA properly balanced diet in metazoans contains optimal amounts of a subset of AA, which cannot be synthetized de novo and are therefore named essential amino acids (EAAs). In humans these include Met, Histidine (His), Isoleucine (Ile), Leucine (Leu), Lysine (Lys), Phenylalanine (Phe), Threonine (Thr), Tryptophan (Trp), and Valine (Val), while a few others are considered as semi-essential, such as Glutamine (Gln) and Tyrosine (Tyr) [15, 16]. Consistently, EAA deprivation triggers a cell-autonomous adaptive response, characterized by extensive metabolic and gene expression modifications, implementing biosynthetic, catabolic, and plasma membrane transport processes, aimed at reconstituting the full AA complement [17, 18]. The best known and conserved pathways responding to AA deprivation are triggered by mechanistic Target of Rapamycin Complex 1 (mTORC1) and General amino acid Control Non-derepressible 2 (GCN2) protein kinases [15, 19, 20]. Activation of mTORC1 requires in particular the presence of Gln, Arg and Leu, but also Met , which activate the kinase through sensors mainly acting upstream of Rag GTPases at lysosomal membranes . In turn, mTORC1 promotes cell growth, proliferation and anabolism upon activation, and translational attenuation and autophagy upon inhibition [19, 20].\nBy contrast, GCN2 is activated by deprivation of any individual EAA, by means of its histidyl-tRNA synthetase-related domain, which binds uncharged tRNAs accumulating during AA limitation [23, 24]. Upon activation, GCN2 phosphorylates and inhibits its only known downstream target, namely the eukaryotic Initiation Factor 2 α (eIF2α), thereby initiating the Integrated Stress Response (ISR). This leads to attenuation of general translation, and induction of a transcriptional/translational program, aimed at increasing stress resistance and restoring cell homeostasis, by upregulating a specific subset of genes, including Activating Transcription Factor 4 (ATF4) and C/EBP-Homologous Protein (CHOP) [25–27]. Thus, inhibition of mTORC1 and activation of GCN2 by AA restriction cooperate to attenuate general translation at the initiation step, increase catabolism and turnover, and enhance stress resistance to promote adaptation . However, how these processes eventually induce protective mechanisms against the alterations associated with aging, which include pervasive epigenetic and transcriptional changes [28, 29], remains largely unknown.\nWe previously reported the unexpected observation that prolonged deprivation of either Tyr, or of both Methionine and Cysteine (Met/Cys), triggers the selective and reversible reactivation of exogenous transcriptional units, including plasmids, retroviral vectors and proviruses, integrated into the genome and transcriptionally repressed by defensive mechanisms against non-native DNA sequences [30, 31]. This phenomenon was observed both in HeLa epithelial and ACH-2 lymphocytic human cells, and was independent of the transgene or provirus (Ocular Albinism type 1, OA1; Green Fluorescent Protein, GFP; Lysosomal-Associated Membrane Protein 1, LAMP1; Human Immunodeficiency Virus-1, HIV-1), or of the exogenous promoter driving their transcription, either viral (cytomegalovirus, CMV; Long Terminal Repeat, LTR) or human (Phospho-Glycerate Kinase 1, PGK1; Elongation Factor-1α, EF-1α) . Furthermore, this transgene reactivation response was not reproduced by serum starvation, activation of p38, or pharmacological inhibitors of mTOR (PP242 or rapamycin), sirtuins and DNA methylation. By contrast, it was induced by pan histone deacetylase (HDAC) inhibitors, and by selective inhibitors of class II HDACs . Consistently, we found that the mechanism responsible involves epigenetic modifications at the transgene promoter, including reduced nucleosome occupancy and increased histone acetylation, and is mediated in part by reduced expression of a class II HDAC, namely HDAC4 .\nThese findings indicate that AA deprivation induces a specific epigenetic and transcriptional response, affecting the expression of newly-integrated exogenous transgenes and proviruses, and suggesting that endogenous sequences sharing similar structural and functional features may represent a transcriptional target as well [30, 31]. In particular, transposable elements, such as LTR-retrotransposons (or endogenous retroviruses, ERVs), are genomic “parasites” anciently-integrated into the genome, and silenced by epigenetic mechanisms of mammalian cells against the spreading of mobile elements, eventually becoming \"endogenized\" during evolution [32, 33]. This raises the question of whether their expression is also sensitive to AA restriction. In addition, it remains unclear whether or not the transgene reactivation response is related to specific AA deprivations, and most importantly which is the AA sensing/signaling pathway involved, in particular whether the GCN2 kinase is implicated. Thus, here we used the reactivation of silenced transgenes in cultured cells, as a model to investigate a novel molecular pathway induced by imbalanced EAA starvation, implicated in the epigenetic/transcriptional regulation of exogenous non-native DNA sequences and possibly of other endogenous anciently-integrated genomic elements.\nHeLa human epithelial carcinoma, HepG2 human hepatocellular carcinoma and C2C12 mouse skeletal muscle cells were maintained in DMEM containing glutaMAX (Invitrogen) and supplemented with 10% FBS (Sigma), 100 U/ml penicillin G (Invitrogen), 100 mg/ml streptomycin (Invitrogen), at 37°C in a 5% CO2 humidified atmosphere. Cell lines carrying integrated and partially silenced transgenes were also maintained in 600–1000 μg/ml G418.\nThe C2C12 cell line was provided by ATCC. HeLa and HepG2 cells were obtained by Drs. F. Blasi and G. Tonon at San Raffaele Scientific Institute, Milan, Italy, respectively, and were authenticated by Short Tandem Repeat (STR) profiling, using the Cell ID System kit (Promega), according to the manufacturer’s instructions. Briefly, STR-based multiplex PCR was carried out in a final volume of 25 μL/reaction, including 5 μL Cell ID Enzyme Mix 5X, 2.5 μL Cell ID Primer Mix 10X and 3 ng of template DNA. The thermal cycling conditions were: 1 cycle at 96°C for 2 min, followed by 32 cycles at 94°C for 30 sec, 62°C for 90 sec, and 72°C for 90 sec, and 1 cycle at 60°C for 45 sec. The following STR loci were amplified: AMEL, CSF1PO, D13S317, D16S539, D21S11, D5S818, D7S820, TH01, TPOX, vWA. Fragment length analysis of STR-PCR products was performed by Eurofins Genomics, using standard procedures of capillary electrophoresis on the Applied Biosystems 3130 XL sequencing machine, and assessment of the STR profile was performed at the online STR matching analysis service provided at http://www.dsmz.de/fp/cgi-bin/str.html.\nStable cell clones, expressing myc-tagged human OA1 (GPR143) or GFP transcripts, were generated using pcDNA3.1/OA1myc-His or pcDNA3.1/EGFP vectors . Briefly, HeLa, HepG2 and C2C12 cells were transfected using FuGENE 6 (Roche) and selected with 800, 1000, and 650 μg/ml of G418 (Sigma), respectively, which was maintained thereafter to avoid loss of plasmid integration. G418-resistant clones were isolated and analyzed for protein expression by epifluorescence and/or immunoblotting.\nFull DMEM-based medium, carrying the entire AA complement, and media deprived of Met/Cys (both AAs), Met (only), Cys (only), Alanine (Ala), Thr, Gln, Val, Leu, Tyr, Trp, Lys and His were prepared using the Nutrition free DMEM (cat.#09077–05, from Nacalai Tesque, Inc., Kyoto, Japan), by adding Glucose, NaHCO3, and either all 20 AAs (for full medium) or 18–19 AAs only (for deprivations of two-one AAs). Single AAs, Glucose, and NaHCO3 were from Sigma. Further details and amounts utilized are indicated in S1 Table. All media were supplemented with 10% dialyzed FBS (Invitrogen), 100 U/ml penicillin G (Invitrogen), 100 mg/ml streptomycin (Invitrogen), and G418 as required. HBSS was from Invitrogen. Cells were seeded at 10–30% of confluency; cells to be starved for 48 h were plated 2–3 times more confluent compared to the control. The following day, cells were washed and cultured in the appropriate medium, with or without EAA, for 24–48 h.\nL-Histidinol (HisOH), PP242, Integrated Stress Response Inhibitor (ISRIB), SP600125, Cycloheximide (CHX) were from Sigma; Salubrinal was from Tocris Bioscience; U0126 was from Promega. Drugs were used at the following final concentrations: HisOH at 4–16 mM; PP242 at 1–3 μM; ISRIB at 100 nM; SP600125 at 20 μM in HepG2 cells and 50 μM in HeLa cells; Cycloheximide (CHX) at 50 ug/ml in HepG2 cells and 100 ug/ml in HeLa cells; Salubrinal at 75 μM; U0126 at 50 μM. Vehicle was used as mock control. Treatments with drugs to be tested for their ability to inhibit transgene reactivation (ISRIB, SP600125 and U0126) were initiated 1h before the subsequent addition of L-Histidinol (ISRIB) or the subsequent depletion of Met/Cys (SP600125 and U0126).\nTotal RNA was purified using the RNeasy Mini kit (Qiagen), according to manufacturer’s instructions. RNA concentration was determined by Nanodrop 8000 Spectrophotometer (Thermo Scientific). Equal amount (1 μg) of RNA from HeLa, HepG2 and C2C12 cells was reverse transcribed using the SuperScript First-Strand Synthesis System for RT-PCR (Invitrogen) using oligo-dT as primers, and diluted to 5 ng/μl. The cDNA (2 μl) was amplified by real-time PCR using SYBR green Master Mix on a Light Cycler 480 (Roche), according to manufacturer’s instructions. The thermal cycling conditions were: 1 cycle at 95°C for 5 min, followed by 40–45 cycles at 95° for 20 sec, 56° for 20 sec and 72° for 20 sec. The sequences, efficiencies and annealing temperatures of the primers are provided in S2 Table. Data were analyzed with Microsoft Excel using the formula EtargetΔct target (control-sample) /EreferenceΔct reference (control-sample) . Reference genes for normalizations were ARPC2 (actin-related protein 2/3 complex, subunit 2) for HeLa and HepG2 cells; and Actb (actin beta) for C2C12 cells, unless otherwise indicated.\nsiRNA (Mission esiRNA, 200 ng/μL; Sigma) against ATF4 and GCN2 were designed against the targeted sequences NM_001675 and NM_001013703, respectively. Cells seeded in 6-well plates were transfected with 1 μg of siRNAs and 5 μL of Lipofectamine 2000 (Invitrogen), following manufacturer’s instructions, at day 1 post-plating for ATF4 and at day 1 and 2 post-plating for GCN2. At day 2 (ATF4) or 3 (GCN2) post-plating, cells were washed and cultured in medium in the absence or presence of HisOH 4 mM for 6 h. siRNAs against RLuc (Sigma), targeting Renilla Luciferase, were used as negative control. For CRISPR/Cas9 experiments, we used the “all-in-one Cas9-reporter” vector, expressing GFP (Sigma), which is characterized by a single vector format including the Cas9 protein expression cassette and gRNA (guide RNA). GFP is co-expressed from the same mRNA as the Cas9 protein, enabling tracking of transfection efficiency and enrichment of transfected cells by fluorescence activated cell sorting (FACS). The human U6 promoter drives gRNA expression, and the CMV promoter drives Cas9 and GFP expression. The oligonucleotide sequences for the three gRNAs targeting GCN2 exon 1 or 6 are listed in S2 Table. We transfected HeLa and HepG2 cells with these plasmids individually (one plasmid one guide) and sorted the GFP-positive, transfected cells by FACS. Screening GCN2-KO clones was performed by western blotting. In the case of HepG2-OA1 cells, two rounds of selection were necessary to obtain three GCN2-KO clones by using a guide RNA against exon 1. Compared to the original HepG2-OA1 cell line and to the clone resulting from the first round of selection (185#27), the selected clones E23, F22 and F27 showed a very low amount—if any—of residual GCN2 protein (see results).\nGenomic DNA of HeLa and HepG2 cells was purified using DNeasy Blood and Tissue kit (Qiagen), according to the manufacturer’s instructions. DNA concentration was determined by Nanodrop 8000 Spectrophotometer (Thermo Scientific). PCR conditions for amplification of GCN2 exon 1 and 6 were as follows: 1 cycle at 94°C for 5 min, followed by 35 cycles at 94°C for 40 sec, 56°C for 40 sec, and 72°C for 40 sec; and a final extension step of 5 min at 72°C. The primer sequences are provided in S2 Table.\nFor OA1, western immunoblotting was carried out as described . For GCN2, cells were lysed in RIPA buffer, boiled at 95°C for 5 min and resolved on a 7.5% polyacrylamide gel; immunoblotting was then performed following standard procedures. Primary Abs were as follows: anti-human OA1, previously developed by our group in rabbits ; anti-GCN2 (Cell Signaling, Cat. #3302).\nStatistical analyses were performed using Microsoft Excel for Mac (version 15.32, Microsoft) for Student’s t-test; or GraphPad Prism (version 5.0d for Mac, GraphPad Software, Inc.) for one-way analysis of variance (ANOVA), followed by Dunnett’s or Tukey’s multiple comparisons post-tests. T-test was used when only two means, typically sample versus control, were compared, as specified in the figure legends. One way ANOVA was used for multiple comparisons, followed by either a Dunnett’s (to compare every mean to a control mean), or a Tukey’s (to compare every mean with every other mean) post-test, by setting the significance level at 0.05 (95% confidence intervals). Both tests compare the difference between means to the amount of scatter, quantified using information from all the groups. Specifically, Prism computes the Tukey-Kramer test, allowing unequal sample sizes. P values in Figures are generally referred to comparison between a sample and the control (full medium/mock), and are indicated as follows: *P<0.05, **P<0.01, ***P<0.001. Comparisons not involving the control are similarly indicated, by a horizontal line at the top of the graphs, encompassing the two samples under analysis. Additional details regarding the specific experiments are reported in the Figure Legends.\nTo examine the expression behavior of genomic repeats upon AA starvation, we performed a transcriptomic analysis taking advantage of an intramural sequencing facility. HeLa-OA1 cells were cultured in normal medium (for 6-30-120 hours) or in absence of Met/Cys (for 6-15-30-72-120 hours). Total RNA was prepared using Trizol (Sigma) to preserve transcripts of both small and long sizes (from Alu, of about 0.3 kb, to Long Interspersed Nuclear Elements, LINEs, and ERVs, up to 6–8 kb long), DNase treated to avoid contamination of genomic DNA, and processed for NGS sequencing by Ovation RNA-Seq System V2 protocol and HiSeq 2000 apparatus. Raw sequence data (10–20 M reads/sample) were aligned to the human genome (build hg19) with SOAPSplice . Read counts over repeated regions, defined by RepeatMasker track from UCSC genome browser , were obtained using bedtools suite . Normalization factors and read dispersion (d) were estimated with edgeR , variation of abundance during time was analyzed using maSigPro package , fitting with a negative binomial distribution (Θ = 1/d, Q = 0.01), with a cutoff on stepwise regression fit r2 = 0.7. Read counts were transformed to RPKM for visualization purposes. The expression of the OA1 transgene and HDAC4, which are progressively up- and down-regulated during starvation, respectively , were used as internal controls.\nFor genomic repeat analysis, reads belonging to repetitive elements were classified according to RepeatMasker and assigned to repeat classes (total number in the genome = 21), families (total number in the genome = 56) and finally subfamilies (total number in the genome = 1396), each including a variable number of genomic loci (from a few hundred for endogenous retroviruses, up to several thousand for Alu). Repeat subfamilies were then clustered according to their expression pattern in starved vs control cells, by maSigPro using default parameters, and repeats classes or families that are significantly enriched in each cluster, compared to all genomic repeats, were identified by applying a Fisher Exact test (using scipy.stats, a statistical module of Python). Alternatively, differentially expressed repeat subfamilies were identified by averaging three time points of starvation (15-30-72 h) and controls. Repeats significantly up- or downregulated (104 and 77, respectively) were selected based on a P value <0.05 (unpaired two-tailed Student’s t-test, assuming equal variance), and analyzed for their class enrichment by a Fisher Exact test as described above.\nFor gene set enrichment analysis of Met/Cys deprived vs control HeLa cells, differentially expressed genes were selected considering three time points of starvation (15-30-72 h) and controls, based on a P value <0.05 (unpaired two-tailed Student’s t-test, assuming equal variance) and a fold change >2. This led to a total of 2033 differentially expressed genes, 996 upregulated and 1037 downregulated. The enrichment analysis was performed separately for up and down regulated genes, or with all differentially expressed genes together (both), using the KEGG database. The analysis was performed with correction for the background of all expressed genes (about 13600 genes showing an average expression over 3 starvation and 3 control samples of at least 5 counts) and by using default parameters (adjusted P value and q-value cut-off of <0.05 and 0.2, respectively). Differentially expressed genes were also selected considering all starvation time points, as with genomic repeats, by maSigPro using default parameters, and a fold change of at least 1.5, leading to similar enrichment results (not shown). RNAseq gene expression data are available in the ArrayExpress database under the accession number E-MTAB-6452.\nTo provide proof-of-principle that AA starvation may affect the expression of transposable elements, we performed an RNAseq analysis of the previously described HeLa-OA1 cells, carrying an integrated and partially silenced OA1 transgene . Since the reactivation of the transgene by starvation is a progressive phenomenon , we performed a time-course experiment, where each time point represents one biological sample, rather than a biological triplicate of a single time point. To this aim, cells were cultured either in normal medium, or in absence of Met/Cys for different time points (6-15-30-72-120 hours), resulting in the progressive upregulation of the OA1 transgene during starvation (Fig 1A and 1B), consistent with previously published results . The expression of genomic repeats was determined according to RepeatMasker annotation and classification into classes, families, and subfamilies. Repeat species were then subjected to differential expression and enrichment analyses in starved vs control conditions. Out of 1396 annotated repeat subfamilies, 172 species displayed a differential expression profile during starvation.\nFig 1. Exogenous transgene and endogenous retroviruses are upregulated in Met/Cys-deprived HeLa cells.\n(A,B) Exogenous integrated transgene (OA1) mRNA abundance in HeLa-OA1 cells, cultured in Met/Cys-deprived medium for the indicated time points, and analyzed by RNAseq (A), or RT-qPCR (B), compared to full medium. Data represent RPKM (A), or mean ± SD of 2 technical replicates, expressed as fold change vs. control (full medium at 6 h = 1) (B). (C) Clustering of 172 genomic repeat subfamilies, differentially expressed upon starvation, according to their expression profile. (D) Class distribution of repeat subfamilies belonging to differential expression clusters, compared to all genomic repeat subfamilies (first column). Class DNA includes DNA transposons; SINE includes Alu; LINE includes L1 an L2; LTR includes endogenous retroviruses and solitary LTRs; Satellite includes centromeric acrosomal and telomeric satellites; Others includes SVA, simple repeats, snRNA, and tRNAs. LTR-retroelements are significantly enriched among repeats that are upregulated upon starvation, while LINEs are significantly enriched among repeats that are downregulated. *P<0.05, ***P<0.001 (Fisher exact test).\nAs shown in Fig 1C, the clustering of differentially expressed repeats, according to their expression pattern, reveals profiles comparable to the behavior of the transgene in the same conditions, i.e. upregulation upon starvation and no change in regular medium (Cluster 1 and 2). In particular, Cluster 1 contains sequences that, similarly to the OA1 transgene, are progressively upregulated upon starvation (Fig 1A and 1C) , while Cluster 2 contains sequences that are upregulated at early time points. Interestingly, repeat families that are significantly enriched in these two clusters belong mostly to the group of LTR-retrotransposons, including ERV1, ERVK, ERVL, ERVL-MaLR and other LTR sequences (Fig 1D; S1A and S2A Figs). By contrast, DNA transposons (such as TcMar-Tigger) and L1 non-LTR retrotransposons are enriched among repeats that are downregulated during starvation, particularly at late time points (Clusters 3 and 4) (Fig 1D; S1A and S2A Figs). Consistent results were obtained by selecting significantly up- or downregulated genomic repeats (overall 181 species), based on their average expression out of three time points of starvation (15-30-72 h, when the transgene upregulation is more homogeneous) and controls, and on a P value <0.05 (S1B and S2B Figs). These findings suggest that EAA starvation induces genome-wide effects involving repetitive elements, and that—among major repeat classes—it upregulates in particular the expression of ERVs.\nIn addition, to obtain a general overview of main gene pathways changing their expression together with the transgene during AA starvation, we performed gene expression and enrichment analyses of regular genes, by considering three time points of starvation (15-30-72 h) and controls. Differentially expressed genes were selected based on a P value <0.05 and a fold change between means of at least 2, and analyzed with the EnrichR tool . As shown in Fig 2 and S1 File, enrichment analyses against the KEGG and Reactome databases reveals a predominance of downregulated pathways, namely ribosome and translation, proteasome, AA metabolism, oxidative phosphorylation and other pathways related to mitochondrial functions, which are affected in Huntington, Alzheimer and Parkinson diseases (http://www.genome.jp/kegg/pathway.html). In particular, a large fraction of ribosomal protein mRNAs is downregulated upon Met/Cys starvation (Fig 2A and 2C; S1 File), consistent with the notion that their genes–despite being scattered throughout the genome—are coordinately expressed in a variety of conditions . This reduced expression may depend on multiple pathways that control ribosome biogenesis in response to external stimuli, including the downregulation of Myc activity , the downregulation of mTORC1 [42, 44], or possibly the activation of the ISR, as described in yeast . By contrast, upregulated genes show a significant enrichment for transcription and gene expression (Fig 2B). Similar results were obtained by the Gene Ontology Biological Process (GO-BP) database (S1 File), overall indicating a general downregulation of translation and metabolism, and upregulation of transcription, during the time interval of Met/Cys starvation corresponding to the transgene upregulation.\nFig 2. Gene set enrichment analysis of Met/Cys-deprived HeLa cells.\nDifferentially expressed genes between three time points of starvation (15-30-72 h) and controls were selected based on a P value <0.05 and a fold change of at least 2, leading to a total of 996 upregulated, and 1037 downregulated genes. The enrichment analysis was performed separately for up and down regulated genes, using the EnrichR tool and the KEGG (A) and REACTOME (B, C) databases. Ranking is based on the combined score provided by EnrichR, and categories are displayed up to 20 items with an Adjusted P value <0.05. No significant categories were found with upregulated genes against the KEGG database. All data are shown in S1 File. The enrichment analysis using all differentially expressed genes together did not reveal any additional enriched process.\nTo characterize the pathway leading to the reactivation of silenced transgenes, we used HeLa-OA1 and HeLa-GFP cells, as described . In addition, to test cell types relevant for AA metabolism, such as liver and muscle, we generated clones of HepG2 human hepatoma and C2C12 mouse skeletal muscle cells, stably transfected with plasmids for OA1 and GFP transgenes, respectively (HepG2-OA1 and C2C12-GFP cells; endogenous OA1 is not expressed in any of these cell types). In all cases, the integrated transgenes are under the control of the CMV promoter in the context of a pcDNA3.1 plasmid, are partially silenced, and can be efficiently upregulated by HDAC inhibitors (trichostatin A, TSA; ref. and S3A, S3B and S4A Figs), indicating that their expression is controlled at least in part by epigenetic mechanisms, as previously described .\nTo establish whether the reactivation response results from the shortage of specific AAs only, such as Met/Cys, or it is triggered by any AA deprivations, we cultured HeLa-OA1, HeLa-GFP, HepG2-OA1 and C2C12-GFP cells for 24–48 hours with a battery of media deprived of EAAs or semi-EAAs, including Met/Cys, Thr, Gln, Val, Leu, Tyr, Trp, Lys, and His. As negative controls, cells were cultured in full medium, carrying the entire AA complement, and in a medium deprived of Ala, a non-essential AA. The expression of the transgene transcript was then evaluated by RT-qPCR. As shown in Fig 3, and in S3C and S4B Figs, most EAA-deficiencies induced reactivation of the OA1 or GFP transgenes in all four cell lines, with the notable exception of Trp deprivation, which consistently resulted in no or minimal reactivation of the transgenes. Indeed, despite some variability, Met/Cys deficiency, but also Thr, Val, Tyr, and His deprivation always gave an efficient response, while Leu, Gln and Lys elicited evident responses in some cases, but not in others. Depletion of Phe gave results comparable to Tyr deprivation, however it significantly altered multiple reference genes used for normalization and therefore was eventually omitted from the analysis (not shown). Finally, in the above experiments we used a combined Met/Cys deficiency, to avoid the potential sparing of Met by Cys and for consistency with our previous studies . Nevertheless, the analysis of single Met or Cys starvation, both at the protein and transcript levels, revealed an exclusive role of Met deprivation in transgene reactivation, consistent with the notion that Cys is not an EAA (S3D and S3E Fig).\nFig 3. EAA deprivation induces reactivation of silent transgenes in HeLa and HepG2 cells.\nRelative transgene (OA1) and CHOP mRNA abundance in HeLa-OA1 (A) and HepG2-OA1 (B) cells, cultured in various AA-deprived media for 48 h and 24 h, respectively, compared to full medium. Mean ± SEM of 3 independent experiments. Data are expressed as fold change vs. control (full medium = 1). *P<0.05, **P<0.01, ***P<0.001 (one way ANOVA, followed by Dunnett’s post-test vs. full medium).\nCollectively, these results indicate that transgene reactivation by EAA starvation is reproducible with most EAAs, shared by different cell types (epithelium, liver, and skeletal muscle), and conserved in different mammalian species (human, mouse).\nmTORC1 inhibition and GCN2 activation trigger the best-known signaling pathways responding to AA starvation . We previously showed that inhibition of mTORC1 is not sufficient to reproduce transgene reactivation in HeLa cells . By contrast, the involvement of GCN2 and the ISR, including the downstream effectors ATF4 and CHOP, has never been tested. In addition, this pathway has been typically assessed in transient assays, lasting for a few hours, which may not be comparable with the prolonged starvation conditions necessary to reactivate the transgene expression (at least 15–24 h). Thus, we tested whether CHOP expression was upregulated upon incubation of HeLa-OA1, HepG2-OA1 and C2C12-GFP cells in media deprived of different EAAs for 24–48 h.\nAs shown in Fig 3 and S4B Fig, we found that CHOP expression is increased in all EAA-starvation conditions, but not in the absence of Ala, in all tested cell lines. Similar, yet less pronounced, results were obtained with ATF4, consistent with the notion that activation of this transcription factor is mainly mediated by translational upregulation (not shown) [15, 26]. However, the upregulation of CHOP does not parallel quantitatively that of the transgene, neither appears sufficient to induce it. In fact, CHOP is highly upregulated even upon Trp starvation, which consistently results in no or minimal reactivation of the transgenes (compare CHOP with OA1 or GFP expression; Fig 3 and S4B Fig). Thus, while the ISR appears widely activated upon EAA starvation, the upregulation of its downstream effector CHOP only partly correlates with transgene reactivation and may not be sufficient to induce it.\nThe activation of the ISR upon AA starvation suggests that GCN2 may be involved in the transgene reactivation response. Therefore, we tested whether direct pharmacological activation of this kinase is sufficient to trigger the transgene reactivation similarly to starvation. In addition, we used pharmacological inhibitors of mTOR to corroborate previous negative results in HeLa cells in the other cell lines under study. To this aim, HeLa-OA1 or GFP, HepG2-OA1 and C2C12-GFP cells were cultured in the presence of different concentrations of PP242 (mTOR inhibitor) or L-Histidinol (GCN2 activator, inhibiting tRNAHis charging by histidyl-tRNA synthetase), either alone or in combination for 24 h, compared to Met/Cys-deprived and full medium. As shown in Fig 4 and S5 Fig, while inhibition of mTORC1 consistently leads to minor or no effects, in agreement with previous findings , treatment with L-Histidinol results in efficient reactivation of the transgene in HepG2-OA1 and C2C12-GFP cells, but not in HeLa cells.\nFig 4. mTOR inhibition and GCN2 activation differently affect transgene expression in HeLa and HepG2 cells.\nRelative transgene (OA1) and CHOP mRNA abundance in HeLa-OA1 (A) and HepG2-OA1 (B) cells, cultured in Met/Cys-deprived medium, or in the presence of PP242 (mTOR inhibitor; 1–3 μM) or L-Histidinol (HisOH, GCN2 activator; 4–16 mM), either alone or in combination for 24–48 h, compared to full medium. Mean ± SEM of 4 (A) or 3 (B) independent experiments. Data are expressed as fold change vs. control (full medium = 1). *P<0.05, **P<0.01, ***P<0.001 (one way ANOVA, followed by Dunnett’s post-test vs. full medium). PP-1 and PP-3, PP242 at 1 and 3 μM, respectively; HisOH-4 and HisOH-16, L-Histidinol at 4 and 16 mM, respectively.\nSpecifically, L-Histidinol is not effective in HeLa-OA1 and HeLa-GFP cells, either alone or in combination with PP242 (Fig 4A and S5A Fig), or by using different concentrations of the drug, with or without serum (not shown). In these cells, L-Histidinol appears also unable to trigger the ISR, as indicated by lack of CHOP upregulation, possibly due to their different sensitivity to the drug. These findings are consistent with previous reports, describing the use of L-Histidinol in HeLa cells in conditions of low His concentration in the culture medium , which would resemble AA starvation in our system and therefore may not be applicable. Thus, even though the amount of the amino alcohol was adapted to exceed 20 to 80 times that of the amino acid, as described , HeLa cells may be resistant or able to compensate.\nIn contrast, in other cell types, L-Histidinol has been utilized in regular DMEM, to mimic the AA response triggered by DMEM lacking His [48, 49]. Consistently, in HepG2-OA1 cells, L-Histidinol is sufficient to elicit extremely high levels of transgene reactivation, and its combination with PP242 results in additive or even synergistic effects, possibly due to an indirect effect of mTOR inhibition on GCN2 activity (Fig 4B) [50, 51]. Similarly, C2C12-GFP cells efficiently reactivate the transgene upon treatment with L-Histidinol, but not PP242 (S5B Fig). However, differently from HepG2-OA1 cells, simultaneous treatment of C2C12-GFP cells with L-Histidinol and PP242 does not lead to synergistic effects. Consistent with stimulation of the ISR, CHOP and to a minor extent ATF4 are upregulated by L-Histidinol in both cell lines, yet their expression levels show only an incomplete correlation with those of the transgene (Fig 4B, S5B Fig, and not shown).\nThe finding that GCN2 activation by L-Histidinol is sufficient to reactivate the transgenes in both HepG2-OA1 and C2C12-GFP cells pointed to this kinase, and to the downstream ISR, as the pathway possibly involved in the EAA starvation response. Thus, we investigated whether the ISR is sufficient to trigger upregulation of the OA1 transgene in HepG2-OA1 cells by pharmacological means. As CHOP expression does not correspond quantitatively and is not sufficient to induce transgene reactivation, we tested the role of the core upstream event of the ISR, namely the phosphorylation of eIF2α , which can be induced by pharmacological treatments, independent of GCN2 (Fig 5A). To this aim, we used Salubrinal, a specific phosphatase inhibitor that blocks both constitutive and ER stress-induced phosphatase complexes against eIF2α, thereby increasing its phosphorylation . We found that, while the ISR is activated upon Salubrinal treatment, as shown by increased CHOP expression, it does not induce OA1 transgene reactivation (Fig 5B).\nFig 5. The ISR is neither sufficient nor necessary to induce transgene reactivation in HepG2 cells.\n(A) Schematic representation of GCN2 activation by AA starvation, resulting in phosphorylation of eIF2a and initiation of the downstream ISR. In addition to GCN2, the ISR may be activated by other eIF2a kinases (PKR, HRI and PERK; not shown in the picture). (B) Relative transgene (OA1) and CHOP mRNA abundance in HepG2-OA1 cells treated for 24 h with Salubrinal (a drug that induces the ISR by inhibiting the dephosphorylation of eIF2α; 75 μM), compared to full medium. Mean ± range of two experiments. Data are expressed as fold change vs. control (DMEM = 1). *P<0.05 (paired two-tailed Student’s t-test vs. control). (C) Relative transgene (OA1) and CHOP mRNA abundance in HepG2-OA1 cells treated for 6 h with L-Histidinol (HisOH, GCN2 activator; 4 mM), in the absence or presence of ISRIB (a drug that bypasses the phosphorylation of eIF2α, inhibiting triggering of the ISR; 100 nM). Mean ± range of two experiments. Data are expressed as fold change vs. control (DMEM = 1). **P<0.01, ***P<0.001 (one way ANOVA, followed by Tukey’s post-test; P values refer to comparisons vs. control, unless otherwise indicated). (D) Relative transgene (OA1) and ATF4 mRNA abundance in HepG2-OA1 cells transfected with control (CTRL) or anti-ATF4 siRNAs, and incubated in the presence or absence of L-Histidinol (HisOH, GCN2 activator; 4 mM) for 6 h. Mean ± range of two experiments. Data are expressed as fold change vs. control (w/o HisOH = 1, top; control siRNA = 1, bottom). *P<0.05 (one way ANOVA, followed by Tukey’s post-test; P values refer to comparisons vs. control, unless otherwise indicated).\nTo test whether the ISR is necessary to trigger the transgene response to L-Histidinol, we used the chemical compound ISRIB, which inhibits the activation of the ISR, even in the presence of phosphorylated eIF2α, likely by boosting the activity of the guanine-nucleotide exchange factor (GEF) for eIF2α, namely eIF2B [53, 54]. HepG2-OA1 cells were stimulated with L-Histidinol, either in the presence or absence of ISRIB. As shown in Fig 5C, while the expression of CHOP is inhibited by ISRIB, as expected, the reactivation of the OA1 transgene is not affected. In addition, knockdown of the closest eIF2α downstream effector ATF4 by siRNAs does not interfere with the reactivation of the OA1 transgene by L-Histidinol (Fig 5D). Together, these data suggest that eIF2α phosphorylation and the downstream ISR pathway are neither sufficient nor necessary to induce transgene reactivation.\nTo definitively establish if GCN2 is necessary to trigger the transgene reactivation response to EAA starvation, we directly suppressed its expression by CRISPR/Cas9-mediated knock-out (KO). We generated three independent GCN2-KO clones from the parental HeLa-OA1 cell line, by using three different guide RNAs, two against exon 1 (clones 183#11 and 185#5), and one against exon 6 (clone 239#1) of the GCN2 gene. Genomic characterization confirmed the presence of mutations on both alleles of exon 1 of the GCN2 gene in clone 183#11, and on both alleles of exon 6 in clone 239#1; by contrast, clone 185#5 showed multiple alleles in exon 1, consistent with the presence of two cell populations, and was not characterized further at the genomic level (S6 Fig). None of these clones express GCN2 at the protein level, as shown by immunoblotting (Fig 6A). To test the GCN2-KO cells for their ability to respond to EAA starvation, parental HeLa-OA1 cells and the three GCN2-KO clones were cultured in media deprived of Met/Cys or Thr (corresponding to the most effective treatments in this cell line; see Fig 3A) for 24–48 h and transgene expression was assessed by RT-qPCR. We found that the reactivation of the OA1 transgene is neither abolished, nor reduced by KO of GCN2, thus excluding that this kinase is necessary for the response to EAA starvation in HeLa-OA1 cells (Fig 6B and 6C).\nFig 6. GCN2 knockout does not interfere with transgene reactivation in HeLa cells.\n(A) Immunoblotting of protein extracts from the HeLa-OA1 parental cell line and GCN2-KO clones 183#11, 185#5 and 239#1, immunodecorated with anti-GCN2 antibody. Arrow, GCN2 specific band. Ponceau staining was used as loading control. (B, C) Relative transgene (OA1) mRNA abundance in HeLa-OA1 cells and GCN2-KO clones, cultured in Met/Cys (B) or Thr (C) deprived medium for 24 h or 48 h, respectively, compared to full medium. Mean ± SD of 3 technical replicates from 1 experiment. Data are expressed as fold change vs. control (full medium = 1). Since independent clones may display variable reactivation responses (e.g. due to different levels of transgene expression in basal conditions), the results are not shown as means of the three clones, but as separate replicates.\nSimilarly, we generated GCN2-KO clones from the parental HepG2-OA1 cell line by the same strategy. By using a guide RNA against exon 1 of the GCN2 gene, we obtained three independent GCN2-KO clones, namely E23, F22 and F27. Genomic characterization confirmed the presence of mutations on both alleles of exon 1 of the GCN2 gene in clone F27 (S7 Fig) and all three clones showed a very low amount—if any—of residual GCN2 protein, compared to the original HepG2-OA1 cell line (Fig 7A). To assess the ability of GCN2-KO cells to reactivate the transgene upon starvation, we cultured parental HepG2-OA1 cells and the three GCN2-KO clones in media deprived of Met/Cys or His (corresponding to the most effective treatments in this cell line; see Fig 3B) for 24 h, and evaluated the transgene expression by RT-qPCR. As shown in Fig 7B and 7C, we found that the reactivation of the OA1 transgene is neither abolished, nor reduced by KO of GCN2, as in HeLa cells. To further confirm this result, we knocked-down GCN2 by RNA interference (RNAi), and incubated the cells with or without L-Histidinol for 6 h. As shown in Fig 8, treatment of HepG2-OA1 cells with L-Histidinol results in efficient transgene reactivation, even upon significant GCN2 downregulation, both at the mRNA and protein levels. Taken together, these data strongly support the conclusion that GCN2 is not necessary for transgene reactivation in response to EAA starvation, either in HeLa or in HepG2 cells.\nFig 7. GCN2 knockout does not interfere with transgene reactivation in HepG2 cells.\n(A) Immunoblotting of protein extracts from the HepG2-OA1 parental cell line and GCN2-KO clones 185#27, E23, F22, F27, immunodecorated with anti-GCN2 antibody. Clone 185#27 results from the first round of selection, and was used to generate clones E23, F22, F27. Arrow, GCN2 specific band. For GCN2 protein quantification, Ponceau staining was used as loading control and data are expressed as fold change vs. parental cell line (= 1). (B, C) Relative transgene (OA1) mRNA abundance in HepG2-OA1 cells and GCN2-KO clones, cultured in Met/Cys (B) or His (C) deprived medium for 24 h, compared to full medium. Mean ± SD of 3 technical replicates from 1 experiment.", "answers": ["No, it is not necessary."], "length": 6900, "dataset": "multifieldqa_en", "language": "en", "all_classes": null, "_id": "1d46294ee8fcc0a64828778b04198fcbe4f75841775e1205"}
{"input": "How is electricity used in everyday life?", "context": "For other uses, see Electricity (disambiguation).\n\"Electric\" redirects here. For other uses, see Electric (disambiguation).\nLightning is one of the most dramatic effects of electricity.\nElectricity is the set of physical phenomena associated with the presence and motion of matter that has a property of electric charge. In early days, electricity was considered as being not related to magnetism. Later on, many experimental results and the development of Maxwell's equations indicated that both electricity and magnetism are from a single phenomenon: electromagnetism. Various common phenomena are related to electricity, including lightning, static electricity, electric heating, electric discharges and many others.\nThe presence of an electric charge, which can be either positive or negative, produces an electric field. The movement of electric charges is an electric current and produces a magnetic field.\nWhen a charge is placed in a location with a non-zero electric field, a force will act on it. The magnitude of this force is given by Coulomb's law. Thus, if that charge were to move, the electric field would be doing work on the electric charge. Thus we can speak of electric potential at a certain point in space, which is equal to the work done by an external agent in carrying a unit of positive charge from an arbitrarily chosen reference point to that point without any acceleration and is typically measured in volts.\nelectronics which deals with electrical circuits that involve active electrical components such as vacuum tubes, transistors, diodes and integrated circuits, and associated passive interconnection technologies.\nElectrical phenomena have been studied since antiquity, though progress in theoretical understanding remained slow until the seventeenth and eighteenth centuries. Even then, practical applications for electricity were few, and it would not be until the late nineteenth century that electrical engineers were able to put it to industrial and residential use. The rapid expansion in electrical technology at this time transformed industry and society, becoming a driving force for the Second Industrial Revolution. Electricity's extraordinary versatility means it can be put to an almost limitless set of applications which include transport, heating, lighting, communications, and computation. Electrical power is now the backbone of modern industrial society.\nLong before any knowledge of electricity existed, people were aware of shocks from electric fish. Ancient Egyptian texts dating from 2750 BCE referred to these fish as the \"Thunderer of the Nile\", and described them as the \"protectors\" of all other fish. Electric fish were again reported millennia later by ancient Greek, Roman and Arabic naturalists and physicians. Several ancient writers, such as Pliny the Elder and Scribonius Largus, attested to the numbing effect of electric shocks delivered by catfish and electric rays, and knew that such shocks could travel along conducting objects. Patients suffering from ailments such as gout or headache were directed to touch electric fish in the hope that the powerful jolt might cure them. Possibly the earliest and nearest approach to the discovery of the identity of lightning, and electricity from any other source, is to be attributed to the Arabs, who before the 15th century had the Arabic word for lightning ra‘ad (رعد) applied to the electric ray.\nAncient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers. Thales of Miletus made a series of observations on static electricity around 600 BCE, from which he believed that friction rendered amber magnetic, in contrast to minerals such as magnetite, which needed no rubbing. Thales was incorrect in believing the attraction was due to a magnetic effect, but later science would prove a link between magnetism and electricity. According to a controversial theory, the Parthians may have had knowledge of electroplating, based on the 1936 discovery of the Baghdad Battery, which resembles a galvanic cell, though it is uncertain whether the artifact was electrical in nature.\nBenjamin Franklin conducted extensive research on electricity in the 18th century, as documented by Joseph Priestley (1767) History and Present Status of Electricity, with whom Franklin carried on extended correspondence.\nElectricity would remain little more than an intellectual curiosity for millennia until 1600, when the English scientist William Gilbert wrote De Magnete, in which he made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the New Latin word electricus (\"of amber\" or \"like amber\", from ἤλεκτρον, elektron, the Greek word for \"amber\") to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words \"electric\" and \"electricity\", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646.\nFurther work was conducted in the 17th and early 18th centuries by Otto von Guericke, Robert Boyle, Stephen Gray and C. F. du Fay. Later in the 18th century, Benjamin Franklin conducted extensive research in electricity, selling his possessions to fund his work. In June 1752 he is reputed to have attached a metal key to the bottom of a dampened kite string and flown the kite in a storm-threatened sky. A succession of sparks jumping from the key to the back of his hand showed that lightning was indeed electrical in nature. He also explained the apparently paradoxical behavior of the Leyden jar as a device for storing large amounts of electrical charge in terms of electricity consisting of both positive and negative charges.\nIn 1791, Luigi Galvani published his discovery of bioelectromagnetics, demonstrating that electricity was the medium by which neurons passed signals to the muscles. Alessandro Volta's battery, or voltaic pile, of 1800, made from alternating layers of zinc and copper, provided scientists with a more reliable source of electrical energy than the electrostatic machines previously used. The recognition of electromagnetism, the unity of electric and magnetic phenomena, is due to Hans Christian Ørsted and André-Marie Ampère in 1819–1820. Michael Faraday invented the electric motor in 1821, and Georg Ohm mathematically analysed the electrical circuit in 1827. Electricity and magnetism (and light) were definitively linked by James Clerk Maxwell, in particular in his \"On Physical Lines of Force\" in 1861 and 1862.\nWhile the early 19th century had seen rapid progress in electrical science, the late 19th century would see the greatest progress in electrical engineering. Through such people as Alexander Graham Bell, Ottó Bláthy, Thomas Edison, Galileo Ferraris, Oliver Heaviside, Ányos Jedlik, William Thomson, 1st Baron Kelvin, Charles Algernon Parsons, Werner von Siemens, Joseph Swan, Reginald Fessenden, Nikola Tesla and George Westinghouse, electricity turned from a scientific curiosity into an essential tool for modern life.\nIn 1887, Heinrich Hertz:843–44 discovered that electrodes illuminated with ultraviolet light create electric sparks more easily. In 1905, Albert Einstein published a paper that explained experimental data from the photoelectric effect as being the result of light energy being carried in discrete quantized packets, energising electrons. This discovery led to the quantum revolution. Einstein was awarded the Nobel Prize in Physics in 1921 for \"his discovery of the law of the photoelectric effect\". The photoelectric effect is also employed in photocells such as can be found in solar panels and this is frequently used to make electricity commercially.\nThe first solid-state device was the \"cat's-whisker detector\" first used in the 1900s in radio receivers. A whisker-like wire is placed lightly in contact with a solid crystal (such as a germanium crystal) to detect a radio signal by the contact junction effect. In a solid-state component, the current is confined to solid elements and compounds engineered specifically to switch and amplify it. Current flow can be understood in two forms: as negatively charged electrons, and as positively charged electron deficiencies called holes. These charges and holes are understood in terms of quantum physics. The building material is most often a crystalline semiconductor.\nThe solid-state device came into its own with the invention of the transistor in 1947. Common solid-state devices include transistors, microprocessor chips, and RAM. A specialized type of RAM called flash RAM is used in USB flash drives and more recently, solid-state drives to replace mechanically rotating magnetic disc hard disk drives. Solid state devices became prevalent in the 1950s and the 1960s, during the transition from vacuum tubes to semiconductor diodes, transistors, integrated circuit (IC) and the light-emitting diode (LED).\nThe presence of charge gives rise to an electrostatic force: charges exert a force on each other, an effect that was known, though not understood, in antiquity.:457 A lightweight ball suspended from a string can be charged by touching it with a glass rod that has itself been charged by rubbing with a cloth. If a similar ball is charged by the same glass rod, it is found to repel the first: the charge acts to force the two balls apart. Two balls that are charged with a rubbed amber rod also repel each other. However, if one ball is charged by the glass rod, and the other by an amber rod, the two balls are found to attract each other. These phenomena were investigated in the late eighteenth century by Charles-Augustin de Coulomb, who deduced that charge manifests itself in two opposing forms. This discovery led to the well-known axiom: like-charged objects repel and opposite-charged objects attract.\nThe force acts on the charged particles themselves, hence charge has a tendency to spread itself as evenly as possible over a conducting surface. The magnitude of the electromagnetic force, whether attractive or repulsive, is given by Coulomb's law, which relates the force to the product of the charges and has an inverse-square relation to the distance between them.:35 The electromagnetic force is very strong, second only in strength to the strong interaction, but unlike that force it operates over all distances. In comparison with the much weaker gravitational force, the electromagnetic force pushing two electrons apart is 1042 times that of the gravitational attraction pulling them together.\nStudy has shown that the origin of charge is from certain types of subatomic particles which have the property of electric charge. Electric charge gives rise to and interacts with the electromagnetic force, one of the four fundamental forces of nature. The most familiar carriers of electrical charge are the electron and proton. Experiment has shown charge to be a conserved quantity, that is, the net charge within an electrically isolated system will always remain constant regardless of any changes taking place within that system. Within the system, charge may be transferred between bodies, either by direct contact, or by passing along a conducting material, such as a wire.:25 The informal term static electricity refers to the net presence (or 'imbalance') of charge on a body, usually caused when dissimilar materials are rubbed together, transferring charge from one to the other.\nThe charge on electrons and protons is opposite in sign, hence an amount of charge may be expressed as being either negative or positive. By convention, the charge carried by electrons is deemed negative, and that by protons positive, a custom that originated with the work of Benjamin Franklin. The amount of charge is usually given the symbol Q and expressed in coulombs; each electron carries the same charge of approximately −1.6022×1019 coulomb. The proton has a charge that is equal and opposite, and thus +1.6022×1019 coulomb. Charge is possessed not just by matter, but also by antimatter, each antiparticle bearing an equal and opposite charge to its corresponding particle.\nThe movement of electric charge is known as an electric current, the intensity of which is usually measured in amperes. Current can consist of any moving charged particles; most commonly these are electrons, but any charge in motion constitutes a current. Electric current can flow through some things, electrical conductors, but will not flow through an electrical insulator.\nBy historical convention, a positive current is defined as having the same direction of flow as any positive charge it contains, or to flow from the most positive part of a circuit to the most negative part. Current defined in this manner is called conventional current. The motion of negatively charged electrons around an electric circuit, one of the most familiar forms of current, is thus deemed positive in the opposite direction to that of the electrons. However, depending on the conditions, an electric current can consist of a flow of charged particles in either direction, or even in both directions at once. The positive-to-negative convention is widely used to simplify this situation.\nThe process by which electric current passes through a material is termed electrical conduction, and its nature varies with that of the charged particles and the material through which they are travelling. Examples of electric currents include metallic conduction, where electrons flow through a conductor such as metal, and electrolysis, where ions (charged atoms) flow through liquids, or through plasmas such as electrical sparks. While the particles themselves can move quite slowly, sometimes with an average drift velocity only fractions of a millimetre per second,:17 the electric field that drives them itself propagates at close to the speed of light, enabling electrical signals to pass rapidly along wires.\nCurrent causes several observable effects, which historically were the means of recognising its presence. That water could be decomposed by the current from a voltaic pile was discovered by Nicholson and Carlisle in 1800, a process now known as electrolysis. Their work was greatly expanded upon by Michael Faraday in 1833. Current through a resistance causes localised heating, an effect James Prescott Joule studied mathematically in 1840.:2324 One of the most important discoveries relating to current was made accidentally by Hans Christian Ørsted in 1820, when, while preparing a lecture, he witnessed the current in a wire disturbing the needle of a magnetic compass. He had discovered electromagnetism, a fundamental interaction between electricity and magnetics. The level of electromagnetic emissions generated by electric arcing is high enough to produce electromagnetic interference, which can be detrimental to the workings of adjacent equipment.\nIn engineering or household applications, current is often described as being either direct current (DC) or alternating current (AC). These terms refer to how the current varies in time. Direct current, as produced by example from a battery and required by most electronic devices, is a unidirectional flow from the positive part of a circuit to the negative.:11 If, as is most common, this flow is carried by electrons, they will be travelling in the opposite direction. Alternating current is any current that reverses direction repeatedly; almost always this takes the form of a sine wave.:20607 Alternating current thus pulses back and forth within a conductor without the charge moving any net distance over time. The time-averaged value of an alternating current is zero, but it delivers energy in first one direction, and then the reverse. Alternating current is affected by electrical properties that are not observed under steady state direct current, such as inductance and capacitance.:22325 These properties however can become important when circuitry is subjected to transients, such as when first energised.\nThe concept of the electric field was introduced by Michael Faraday. An electric field is created by a charged body in the space that surrounds it, and results in a force exerted on any other charges placed within the field. The electric field acts between two charges in a similar manner to the way that the gravitational field acts between two masses, and like it, extends towards infinity and shows an inverse square relationship with distance. However, there is an important difference. Gravity always acts in attraction, drawing two masses together, while the electric field can result in either attraction or repulsion. Since large bodies such as planets generally carry no net charge, the electric field at a distance is usually zero. Thus gravity is the dominant force at distance in the universe, despite being much weaker.\nA hollow conducting body carries all its charge on its outer surface. The field is therefore zero at all places inside the body.:88 This is the operating principal of the Faraday cage, a conducting metal shell which isolates its interior from outside electrical effects.\nThe principles of electrostatics are important when designing items of high-voltage equipment. There is a finite limit to the electric field strength that may be withstood by any medium. Beyond this point, electrical breakdown occurs and an electric arc causes flashover between the charged parts. Air, for example, tends to arc across small gaps at electric field strengths which exceed 30 kV per centimetre. Over larger gaps, its breakdown strength is weaker, perhaps 1 kV per centimetre. The most visible natural occurrence of this is lightning, caused when charge becomes separated in the clouds by rising columns of air, and raises the electric field in the air to greater than it can withstand. The voltage of a large lightning cloud may be as high as 100 MV and have discharge energies as great as 250 kWh.\nA pair of AA cells. The + sign indicates the polarity of the potential difference between the battery terminals.\nThe concept of electric potential is closely linked to that of the electric field. A small charge placed within an electric field experiences a force, and to have brought that charge to that point against the force requires work. The electric potential at any point is defined as the energy required to bring a unit test charge from an infinite distance slowly to that point. It is usually measured in volts, and one volt is the potential for which one joule of work must be expended to bring a charge of one coulomb from infinity.:49498 This definition of potential, while formal, has little practical application, and a more useful concept is that of electric potential difference, and is the energy required to move a unit charge between two specified points. An electric field has the special property that it is conservative, which means that the path taken by the test charge is irrelevant: all paths between two specified points expend the same energy, and thus a unique value for potential difference may be stated.:49498 The volt is so strongly identified as the unit of choice for measurement and description of electric potential difference that the term voltage sees greater everyday usage.\nFor practical purposes, it is useful to define a common reference point to which potentials may be expressed and compared. While this could be at infinity, a much more useful reference is the Earth itself, which is assumed to be at the same potential everywhere. This reference point naturally takes the name earth or ground. Earth is assumed to be an infinite source of equal amounts of positive and negative charge, and is therefore electrically uncharged—and unchargeable.\nElectric potential is a scalar quantity, that is, it has only magnitude and not direction. It may be viewed as analogous to height: just as a released object will fall through a difference in heights caused by a gravitational field, so a charge will 'fall' across the voltage caused by an electric field. As relief maps show contour lines marking points of equal height, a set of lines marking points of equal potential (known as equipotentials) may be drawn around an electrostatically charged object. The equipotentials cross all lines of force at right angles. They must also lie parallel to a conductor's surface, otherwise this would produce a force that will move the charge carriers to even the potential of the surface.\nØrsted's discovery in 1821 that a magnetic field existed around all sides of a wire carrying an electric current indicated that there was a direct relationship between electricity and magnetism. Moreover, the interaction seemed different from gravitational and electrostatic forces, the two forces of nature then known. The force on the compass needle did not direct it to or away from the current-carrying wire, but acted at right angles to it. Ørsted's slightly obscure words were that \"the electric conflict acts in a revolving manner.\" The force also depended on the direction of the current, for if the flow was reversed, then the force did too.\nØrsted did not fully understand his discovery, but he observed the effect was reciprocal: a current exerts a force on a magnet, and a magnetic field exerts a force on a current. The phenomenon was further investigated by Ampère, who discovered that two parallel current-carrying wires exerted a force upon each other: two wires conducting currents in the same direction are attracted to each other, while wires containing currents in opposite directions are forced apart. The interaction is mediated by the magnetic field each current produces and forms the basis for the international definition of the ampere.\nThis relationship between magnetic fields and currents is extremely important, for it led to Michael Faraday's invention of the electric motor in 1821. Faraday's homopolar motor consisted of a permanent magnet sitting in a pool of mercury. A current was allowed through a wire suspended from a pivot above the magnet and dipped into the mercury. The magnet exerted a tangential force on the wire, making it circle around the magnet for as long as the current was maintained.\nExperimentation by Faraday in 1831 revealed that a wire moving perpendicular to a magnetic field developed a potential difference between its ends. Further analysis of this process, known as electromagnetic induction, enabled him to state the principle, now known as Faraday's law of induction, that the potential difference induced in a closed circuit is proportional to the rate of change of magnetic flux through the loop. Exploitation of this discovery enabled him to invent the first electrical generator in 1831, in which he converted the mechanical energy of a rotating copper disc to electrical energy. Faraday's disc was inefficient and of no use as a practical generator, but it showed the possibility of generating electric power using magnetism, a possibility that would be taken up by those that followed on from his work.\nItalian physicist Alessandro Volta showing his \"battery\" to French emperor Napoleon Bonaparte in the early 19th century.\nThe ability of chemical reactions to produce electricity, and conversely the ability of electricity to drive chemical reactions has a wide array of uses.\nElectrochemistry has always been an important part of electricity. From the initial invention of the Voltaic pile, electrochemical cells have evolved into the many different types of batteries, electroplating and electrolysis cells. Aluminium is produced in vast quantities this way, and many portable devices are electrically powered using rechargeable cells.\nA basic electric circuit. The voltage source V on the left drives a current I around the circuit, delivering electrical energy into the resistor R. From the resistor, the current returns to the source, completing the circuit.\nAn electric circuit is an interconnection of electric components such that electric charge is made to flow along a closed path (a circuit), usually to perform some useful task.\nElectric power is the rate at which electric energy is transferred by an electric circuit. The SI unit of power is the watt, one joule per second.\nElectricity generation is often done with electric generators, but can also be supplied by chemical sources such as electric batteries or by other means from a wide variety of sources of energy. Electric power is generally supplied to businesses and homes by the electric power industry. Electricity is usually sold by the kilowatt hour (3.6 MJ) which is the product of power in kilowatts multiplied by running time in hours. Electric utilities measure power using electricity meters, which keep a running total of the electric energy delivered to a customer. Unlike fossil fuels, electricity is a low entropy form of energy and can be converted into motion or many other forms of energy with high efficiency.\nElectronics deals with electrical circuits that involve active electrical components such as vacuum tubes, transistors, diodes, optoelectronics, sensors and integrated circuits, and associated passive interconnection technologies. The nonlinear behaviour of active components and their ability to control electron flows makes amplification of weak signals possible and electronics is widely used in information processing, telecommunications, and signal processing. The ability of electronic devices to act as switches makes digital information processing possible. Interconnection technologies such as circuit boards, electronics packaging technology, and other varied forms of communication infrastructure complete circuit functionality and transform the mixed components into a regular working system.\nToday, most electronic devices use semiconductor components to perform electron control. The study of semiconductor devices and related technology is considered a branch of solid state physics, whereas the design and construction of electronic circuits to solve practical problems come under electronics engineering.\nThus, the work of many researchers enabled the use of electronics to convert signals into high frequency oscillating currents, and via suitably shaped conductors, electricity permits the transmission and reception of these signals via radio waves over very long distances.\nEarly 20th-century alternator made in Budapest, Hungary, in the power generating hall of a hydroelectric station (photograph by Prokudin-Gorsky, 1905–1915).\nIn the 6th century BC, the Greek philosopher Thales of Miletus experimented with amber rods and these experiments were the first studies into the production of electrical energy. While this method, now known as the triboelectric effect, can lift light objects and generate sparks, it is extremely inefficient. It was not until the invention of the voltaic pile in the eighteenth century that a viable source of electricity became available. The voltaic pile, and its modern descendant, the electrical battery, store energy chemically and make it available on demand in the form of electrical energy. The battery is a versatile and very common power source which is ideally suited to many applications, but its energy storage is finite, and once discharged it must be disposed of or recharged. For large electrical demands electrical energy must be generated and transmitted continuously over conductive transmission lines.\nElectrical power is usually generated by electro-mechanical generators driven by steam produced from fossil fuel combustion, or the heat released from nuclear reactions; or from other sources such as kinetic energy extracted from wind or flowing water. The modern steam turbine invented by Sir Charles Parsons in 1884 today generates about 80 percent of the electric power in the world using a variety of heat sources. Such generators bear no resemblance to Faraday's homopolar disc generator of 1831, but they still rely on his electromagnetic principle that a conductor linking a changing magnetic field induces a potential difference across its ends. The invention in the late nineteenth century of the transformer meant that electrical power could be transmitted more efficiently at a higher voltage but lower current. Efficient electrical transmission meant in turn that electricity could be generated at centralised power stations, where it benefited from economies of scale, and then be despatched relatively long distances to where it was needed.\nSince electrical energy cannot easily be stored in quantities large enough to meet demands on a national scale, at all times exactly as much must be produced as is required. This requires electricity utilities to make careful predictions of their electrical loads, and maintain constant co-ordination with their power stations. A certain amount of generation must always be held in reserve to cushion an electrical grid against inevitable disturbances and losses.\nElectricity is a very convenient way to transfer energy, and it has been adapted to a huge, and growing, number of uses. The invention of a practical incandescent light bulb in the 1870s led to lighting becoming one of the first publicly available applications of electrical power. Although electrification brought with it its own dangers, replacing the naked flames of gas lighting greatly reduced fire hazards within homes and factories. Public utilities were set up in many cities targeting the burgeoning market for electrical lighting. In the late 20th century and in modern times, the trend has started to flow in the direction of deregulation in the electrical power sector.\nThe resistive Joule heating effect employed in filament light bulbs also sees more direct use in electric heating. While this is versatile and controllable, it can be seen as wasteful, since most electrical generation has already required the production of heat at a power station. A number of countries, such as Denmark, have issued legislation restricting or banning the use of resistive electric heating in new buildings. Electricity is however still a highly practical energy source for heating and refrigeration, with air conditioning/heat pumps representing a growing sector for electricity demand for heating and cooling, the effects of which electricity utilities are increasingly obliged to accommodate.\nElectricity is used within telecommunications, and indeed the electrical telegraph, demonstrated commercially in 1837 by Cooke and Wheatstone, was one of its earliest applications. With the construction of first intercontinental, and then transatlantic, telegraph systems in the 1860s, electricity had enabled communications in minutes across the globe. Optical fibre and satellite communication have taken a share of the market for communications systems, but electricity can be expected to remain an essential part of the process.\nThe effects of electromagnetism are most visibly employed in the electric motor, which provides a clean and efficient means of motive power. A stationary motor such as a winch is easily provided with a supply of power, but a motor that moves with its application, such as an electric vehicle, is obliged to either carry along a power source such as a battery, or to collect current from a sliding contact such as a pantograph. Electrically powered vehicles are used in public transportation, such as electric buses and trains, and an increasing number of battery-powered electric cars in private ownership.\nElectronic devices make use of the transistor, perhaps one of the most important inventions of the twentieth century, and a fundamental building block of all modern circuitry. A modern integrated circuit may contain several billion miniaturised transistors in a region only a few centimetres square.\nA voltage applied to a human body causes an electric current through the tissues, and although the relationship is non-linear, the greater the voltage, the greater the current. The threshold for perception varies with the supply frequency and with the path of the current, but is about 0.1 mA to 1 mA for mains-frequency electricity, though a current as low as a microamp can be detected as an electrovibration effect under certain conditions. If the current is sufficiently high, it will cause muscle contraction, fibrillation of the heart, and tissue burns. The lack of any visible sign that a conductor is electrified makes electricity a particular hazard. The pain caused by an electric shock can be intense, leading electricity at times to be employed as a method of torture. Death caused by an electric shock is referred to as electrocution. Electrocution is still the means of judicial execution in some jurisdictions, though its use has become rarer in recent times.\nElectricity is not a human invention, and may be observed in several forms in nature, a prominent manifestation of which is lightning. Many interactions familiar at the macroscopic level, such as touch, friction or chemical bonding, are due to interactions between electric fields on the atomic scale. The Earth's magnetic field is thought to arise from a natural dynamo of circulating currents in the planet's core. Certain crystals, such as quartz, or even sugar, generate a potential difference across their faces when subjected to external pressure. This phenomenon is known as piezoelectricity, from the Greek piezein (πιέζειν), meaning to press, and was discovered in 1880 by Pierre and Jacques Curie. The effect is reciprocal, and when a piezoelectric material is subjected to an electric field, a small change in physical dimensions takes place.\n§Bioelectrogenesis in microbial life is a prominent phenomenon in soils and sediment ecology resulting from anaerobic respiration. The microbial fuel cell mimics this ubiquitous natural phenomenon.\nSome organisms, such as sharks, are able to detect and respond to changes in electric fields, an ability known as electroreception, while others, termed electrogenic, are able to generate voltages themselves to serve as a predatory or defensive weapon. The order Gymnotiformes, of which the best known example is the electric eel, detect or stun their prey via high voltages generated from modified muscle cells called electrocytes. All animals transmit information along their cell membranes with voltage pulses called action potentials, whose functions include communication by the nervous system between neurons and muscles. An electric shock stimulates this system, and causes muscles to contract. Action potentials are also responsible for coordinating activities in certain plants.\nIn the 19th and early 20th century, electricity was not part of the everyday life of many people, even in the industrialised Western world. The popular culture of the time accordingly often depicted it as a mysterious, quasi-magical force that can slay the living, revive the dead or otherwise bend the laws of nature. This attitude began with the 1771 experiments of Luigi Galvani in which the legs of dead frogs were shown to twitch on application of animal electricity. \"Revitalization\" or resuscitation of apparently dead or drowned persons was reported in the medical literature shortly after Galvani's work. These results were known to Mary Shelley when she authored Frankenstein (1819), although she does not name the method of revitalization of the monster. The revitalization of monsters with electricity later became a stock theme in horror films.\nAs the public familiarity with electricity as the lifeblood of the Second Industrial Revolution grew, its wielders were more often cast in a positive light, such as the workers who \"finger death at their gloves' end as they piece and repiece the living wires\" in Rudyard Kipling's 1907 poem Sons of Martha. Electrically powered vehicles of every sort featured large in adventure stories such as those of Jules Verne and the Tom Swift books. The masters of electricity, whether fictional or real—including scientists such as Thomas Edison, Charles Steinmetz or Nikola Tesla—were popularly conceived of as having wizard-like powers.\nWith electricity ceasing to be a novelty and becoming a necessity of everyday life in the later half of the 20th century, it required particular attention by popular culture only when it stops flowing, an event that usually signals disaster. The people who keep it flowing, such as the nameless hero of Jimmy Webb’s song \"Wichita Lineman\" (1968), are still often cast as heroic, wizard-like figures.\nAmpère's circuital law, connects the direction of an electric current and its associated magnetic currents.\n^ Diogenes Laertius. R.D. Hicks (ed.). \"Lives of Eminent Philosophers, Book 1 Chapter 1 \". Perseus Digital Library. Tufts University. Retrieved 5 February 2017. Aristotle and Hippias affirm that, arguing from the magnet and from amber, he attributed a soul or life even to inanimate objects.\n^ Aristotle. Daniel C. Stevenson (ed.). \"De Animus (On the Soul) Book 1 Part 2 (B4 verso)\". The Internet Classics Archive. Translated by J.A. Smith. Retrieved 5 February 2017. Thales, too, to judge from what is recorded about him, seems to have held soul to be a motive force, since he said that the magnet has a soul in it because it moves the iron.\n^ a b c Guarnieri, M. (2014). \"Electricity in the age of Enlightenment\". IEEE Industrial Electronics Magazine. 8 (3): 60–63. doi:10.1109/MIE.2014.2335431.\n^ Srodes, James (2002), Franklin: The Essential Founding Father, Regnery Publishing, pp. 92–94, ISBN 0-89526-163-4 It is uncertain if Franklin personally carried out this experiment, but it is popularly attributed to him.\n^ a b Guarnieri, M. (2014). \"The Big Jump from the Legs of a Frog\". IEEE Industrial Electronics Magazine. 8 (4): 59–61, 69. doi:10.1109/MIE.2014.2361237.\n^ Hertz, Heinrich (1887). \"Ueber den Einfluss des ultravioletten Lichtes auf die electrische Entladung\". Annalen der Physik. 267 (8): S. 983–1000. Bibcode:1887AnP...267..983H. doi:10.1002/andp.18872670827.\n^ \"The Nobel Prize in Physics 1921\". Nobel Foundation. Retrieved 2013-03-16.\n^ John Sydney Blakemore, Solid state physics, pp. 1–3, Cambridge University Press, 1985 ISBN 0-521-31391-0.\n^ Richard C. Jaeger, Travis N. Blalock, Microelectronic circuit design, pp. 46–47, McGraw-Hill Professional, 2003 ISBN 0-07-250503-6.\n^ \"The repulsive force between two small spheres charged with the same type of electricity is inversely proportional to the square of the distance between the centres of the two spheres.\" Charles-Augustin de Coulomb, Histoire de l'Academie Royal des Sciences, Paris 1785.\n^ Sewell, Tyson (1902), The Elements of Electrical Engineering, Lockwood, p. 18 . The Q originally stood for 'quantity of electricity', the term 'electricity' now more commonly expressed as 'charge'.\n^ a b Berkson, William (1974), Fields of Force: The Development of a World View from Faraday to Einstein, Routledge, p. 370, ISBN 0-7100-7626-6 Accounts differ as to whether this was before, during, or after a lecture.\n^ \"Lab Note #105 EMI ReductionUnsuppressed vs. Suppressed\". Arc Suppression Technologies. April 2011. Retrieved March 7, 2012.\n^ Almost all electric fields vary in space. An exception is the electric field surrounding a planar conductor of infinite extent, the field of which is uniform.\n^ Paul J. Nahin (9 October 2002). Oliver Heaviside: The Life, Work, and Times of an Electrical Genius of the Victorian Age. JHU Press. ISBN 978-0-8018-6909-9.\n^ \"The Bumpy Road to Energy Deregulation\". EnPowered. 2016-03-28.\n^ a b c d e f g h Van Riper, op.cit., p. 71.\nLook up electricity in Wiktionary, the free dictionary.\nBasic Concepts of Electricity chapter from Lessons In Electric Circuits Vol 1 DC book and series.", "answers": ["Electricity is used for transport, heating, lighting, communications, and computation."], "length": 6202, "dataset": "multifieldqa_en", "language": "en", "all_classes": null, "_id": "b5b0eb150f44a4d7641b9adf9267ca0c2492ea46626449b3"}
{"input": "What is the relationship between the maximum velocity and the amplitude of the blob or depletion?", "context": "\\section{Model equations} \\label{sec:equations}\n\nIn drift-fluid models the continuity equation\n\\begin{align}\n \\frac{\\partial n}{\\partial t} + \\nabla\\cdot\\left( n \\vec u_E \\right) &= 0 \\label{eq:generala} \n\\end{align}\ndescribes the dynamics of the electron density $n$. Here\n$\\vec u_E := (\\hat{\\vec b} \\times \\nabla \\phi)/B$ gives the electric drift\nvelocity in a magnetic field $\\vec B := B \\hat{\\vec b}$ and an electric\npotential $\\phi$. We neglect contributions of the diamagnetic drift~\\cite{Kube2016}.\n\n\n\n\nEquation~\\eqref{eq:generala} is closed by invoking quasineutrality, i.e. the divergence of the ion polarization, \nthe electron diamagnetic and the gravitational drift currents must vanish\n\\begin{align}\n \\nabla\\cdot\\left( \\frac{n}{\\Omega} \\left( \\frac{\\partial}{\\partial t} \n + \\vec u_E \\cdot\\nabla \\right)\\frac{\\nabla_\\perp \\phi}{B} + n\\vec u_d - n\\vec u_g\\right) &=0\n . \n \n \n \\label{eq:generalb}\n\\end{align}\nHere we denote \n$\\nabla_\\perp\\phi/B := - \\hat{\\vec b} \\times \\vec u_E$, \nthe electron diamagnetic drift\n$\\vec u_d := - T_e(\\hat{\\vec b} \\times\\nabla n ) /enB$\nwith the electron temperature $T_e$,\nthe ion gravitational drift velocity \n$\\vec u_g := m_i \\hat{\\vec b} \\times \\vec g /B$\nwith ion mass $m_i$, and the ion gyro-frequency\n$\\Omega := eB/m_i$.\n\nCombining Eq.~\\eqref{eq:generalb} with Eq.~\\eqref{eq:generala} yields\n\\begin{align}\n \\frac{\\partial \\rho}{\\partial t} + \\nabla\\cdot\\left( \\rho\\vec u_E \\right) + \\nabla \\cdot\\left( n(\\vec u_\\psi + \\vec u_d + \\vec u_g) \\right) &= 0\\label{eq:vorticity}\n\\end{align}\nwith the polarization charge density \n$\\rho = \\nabla\\cdot( n\\nabla_\\perp \\phi / \\Omega B)$ \nand\n$\\vec u_\\psi := \\hat{\\vec b}\\times \\nabla\\psi /B$ \nwith \n$\\psi:= m_i\\vec u_E^2 /2e$.\nWe exploit this form of Eq.~\\eqref{eq:generalb} in our numerical simulations.\n\nEquations~\\eqref{eq:generala} and \\eqref{eq:generalb} respectively \\eqref{eq:vorticity} have several invariants.\nFirst, in Eq.~\\eqref{eq:generala} the relative particle number \n$M(t) := \\int \\mathrm{dA}\\, (n-n_0)$ is conserved over time\n$\\d M(t)/\\d t = 0$. \nFurthermore, we integrate \n$( T_e(1+\\ln n) -T_e \\ln B)\\partial_t n$\nas well as\n$-e\\phi \\partial_t\\rho - (m_i\\vec u_E^2/2+gm_ix - T_e\\ln B)\\partial_t n$ \nover the domain to get, disregarding boundary contributions,\n\\begin{align}\n \\frac{\\d}{\\d t}\\left[T_eS(t) + H(t) \\right] = 0, \\label{eq:energya}\\\\ \n \\frac{\\d}{\\d t} \\left[ E(t) - G(t) - H(t)\\right] = 0,\n \\label{eq:energyb}\n\\end{align}\nwhere we define \nthe entropy\n$S(t):=\\int \\mathrm{dA}\\, [n\\ln(n/n_0) - (n-n_0)]$, \nthe kinetic energy \n$E(t):=m_i \\int \\mathrm{dA}\\, n\\vec u_E^2/2$ \nand the potential energies\n$G(t) := m_i g\\int \\mathrm{dA}\\, x(n-n_0)$\nand\n$H(t) := T_e\\int \\mathrm{dA}\\, (n-n_0) \\ln (B^{-1})$.\nNote that $n\\ln( n/n_0) - n + n_0 \\approx (n-n_0)^2/2$ for $|(n-n_0)/n_0| \\ll 1$ and $S(t)$ thus reduces to the \nlocal entropy form in Reference~\\cite{Kube2016}. \n\nWe now set up a gravitational field $\\vec g = g\\hat x$ and a constant homogeneous background\nmagnetic field $\\vec B = B_0 \\hat z$ in a Cartesian coordinate system.\nThen the divergences of the electric and gravitational drift velocities $\\nabla\\cdot\\vec u_E$ and $\\nabla\\cdot\\vec u_g$\nand the diamagnetic current $\\nabla\\cdot(n\\vec u_d)$ vanish, which makes the \nflow incompressible. Furthermore, the magnetic potential energy vanishes $H(t) = 0$.\n\nIn a second system we model the inhomogeneous magnetic field present in tokamaks as\n$\\vec B := B_0 (1+ x/R_0)^{-1}\\hat z$ and neglect the gravitational drift $\\vec u_g = 0$.\nThen, the potential energy $G(t) = 0$. \nNote that \n$H(t) = m_i \\ensuremath{C_\\mathrm{s}}^2/R_0\\int\\mathrm{dA}\\, x(n-n_0) +\\mathcal O(R_0^{-2}) $\nreduces to $G(t)$ with the effective gravity $g_\\text{eff}:= \\ensuremath{C_\\mathrm{s}}^2/R_0$ with $\\ensuremath{C_\\mathrm{s}}^2 := T_e/m_i$. \nFor the rest of this letter we treat $g$ and $g_\\text{eff}$ as well as $G(t)$ and $H(t)$ on the same footing.\nThe magnetic field inhomogeneity thus entails compressible flows, which is \nthe only difference to the model describing dynamics in a homogeneous magnetic field introduced above. \nSince both $S(t)\\geq 0$ and $E(t)\\geq 0$ we further derive from Eq.~\\eqref{eq:energya} and Eq.~\\eqref{eq:energyb} that the kinetic energy\nis bounded by $E(t) \\leq T_eS(t) + E(t) = T_e S(0)$; a feature absent from the gravitational system with \nincompressible flows, where $S(t) = S(0)$. \n\nWe now show that the invariants Eqs.~\\eqref{eq:energya} and \\eqref{eq:energyb} present restrictions on the velocity and\nacceleration of plasma blobs. \nFirst, we define the blobs' center of mass (COM) via $X(t):= \\int\\mathrm{dA}\\, x(n-n_0)/M$ and \nits COM velocity as $V(t):=\\d X(t)/\\d t$. \nThe latter is proportional to the total radial particle flux~\\cite{Garcia_Bian_Fundamensky_POP_2006, Held2016a}.\nWe assume\nthat $n>n_0$ and $(n-n_0)^2/2 \\leq [ n\\ln (n/n_0) - (n-n_0)]n $ to show for both systems \n\\begin{align}\n (MV)^2 &= \\left( \\int \\mathrm{dA}\\, n{\\phi_y}/{B} \\right)^2\n = \\left( \\int \\mathrm{dA}\\, (n-n_0){\\phi_y}/{B} \\right)^2\\nonumber\\\\\n \n&\\leq 2 \\left( \\int \\mathrm{dA}\\, \\left[n\\ln (n/n_0) -(n-n_0)\\right]^{1/2}\\sqrt{n}{\\phi_y}/{B}\\right)^2\\nonumber\\\\\n \n &\\leq 4 S(0) E(t)/m_i \n \n \\label{eq:inequality}\n\\end{align}\nHere we use the Cauchy-Schwartz inequality and \n$\\phi_y:=\\partial\\phi/\\partial y$. \nNote that although we derive the inequality Eq.~\\eqref{eq:inequality} only for amplitudes $\\triangle n >0$ we assume that the results also hold for depletions. This is justified by our numerical results later in this letter. \nIf we initialize our density field with a seeded blob of radius $\\ell$ and amplitude $\\triangle n$ as \n\\begin{align}\n n(\\vec x, 0) &= n_0 + \\triangle n \\exp\\left( -\\frac{\\vec x^2}{2\\ell^2} \\right), \\label{eq:inita}\n \n \n\\end{align}\nand \n$\\phi(\\vec x, 0 ) = 0$,\nwe immediately have $M := M(0) = 2\\pi \\ell^2 \\triangle n$, $E(0) = G(0) = 0$ and \n$S(0) = 2\\pi \\ell^2 f(\\triangle n)$, where $f(\\triangle n)$ captures the amplitude dependence of \nthe integral for $S(0)$. \n\nThe acceleration for both incompressible and compressible flows can be estimated\nby assuming a linear acceleration $V=A_0t$ and $X=A_0t^2/2$~\\cite{Held2016a} and using \n$E(t) = G(t) = m_igMX(t)$ in Eq.~\\eqref{eq:inequality}\n\\begin{align}\n \\frac{A_0}{g} = \\mathcal Q\\frac{2S(0)}{M} \\approx \\frac{\\mathcal Q}{2} \\frac{\\triangle n }{n_0+2\\triangle n/9}.\n \\label{eq:acceleration}\n\\end{align}\nHere, we use the Pad\\'e approximation of order $(1/1)$ of $2S(0)/M $\nand define a model parameter $\\mathcal Q$ with $0<\\mathcal Q\\leq1$ to be determined by numerical simulations.\nNote that the Pad\\'e approximation is a better approximation than a simple \ntruncated Taylor expansion especially for large relative amplitudes of order unity.\nEq.~\\eqref{eq:acceleration} predicts that $A_0/g\\sim \\triangle n/n_0$ for small \namplitudes $|\\triangle n/n_0| < 1$ and $A_0 \\sim g $ for very large amplitudes $\\triangle n /n_0 \\gg 1$, \nwhich confirms the predictions in~\\cite{Pecseli2016} and reproduces the limits discussed in~\\cite{Angus2014}.\n\nAs pointed out earlier for compressible flows Eq.~\\eqref{eq:inequality} can be further estimated\n\\begin{align}\n (MV)^2 \\leq 4 T_eS(0)^2/m_i. \n \\label{}\n\\end{align}\nWe therefore have a restriction on the maximum COM velocity for compressible flows, which is absent for incompressible flows\n\\begin{align}\n \\frac{\\max |V|}{\\ensuremath{C_\\mathrm{s}}} = {\\mathcal Q}\\frac{2S(0)}{M} \\approx \\frac{\\mathcal Q}{2} \\frac{|\\triangle n| }{n_0+2/9 \\triangle n } \\approx \\frac{\\mathcal Q}{2} \\frac{|\\triangle n|}{n_0}.\n \\label{eq:linear}\n\\end{align}\nFor $|\\triangle n /n_0|< 1$ Eq.~\\eqref{eq:linear} reduces to the linear scaling derived in~\\cite{Kube2016}. \nFinally, a scale analysis of Eq.~\\eqref{eq:vorticity} shows that~\\cite{Ott1978, Garcia2005, Held2016a}\n\\begin{align}\n \\frac{\\max |V|}{\\ensuremath{C_\\mathrm{s}}} = \\mathcal R \\left( \\frac{\\ell}{R_0}\\frac{|\\triangle n|}{n_0} \\right)^{1/2}.\n \\label{eq:sqrt}\n\\end{align}\nThis equation predicts a square root dependence of the center of mass velocity \non amplitude and size. \n\n\n\n\n\nWe now propose a simple phenomenological model that captures the essential dynamics\nof blobs and depletions in the previously stated systems. More specifically \nthe model reproduces the acceleration Eq.~\\eqref{eq:acceleration} with and without\nBoussinesq approximation, the square root scaling for the COM velocity \nEq.~\\eqref{eq:sqrt} for incompressible flows as well as the relation between the \nsquare root scaling Eq.~\\eqref{eq:sqrt} and the linear scaling \nEq.~\\eqref{eq:linear} for compressible flows. \nThe basic idea is that the COM of blobs behaves like \nthe one of an infinitely long plasma column immersed in an ambient plasma. \nThe dynamics of this column reduces to the one of a two-dimensional ball.\nThis idea is similar to the analytical ``top hat'' density solution for\nblob dynamics recently studied in~\\cite{Pecseli2016}.\nThe ball is subject to buoyancy as well as linear and nonlinear friction\n\\begin{align}\n M_{\\text{i}} \\frac{d V}{d t} = (M_{\\text{g}} - M_\\text{p}) g - c_1 V - \\mathrm{sgn}(V ) \\frac{1}{2}c_2 V^2.\n \\label{eq:ball}\n\\end{align}\nThe gravity $g$ has a positive sign in the coordinate system; sgn$(f)$ is the sign function. \nThe first term on the right hand side is the buoyancy, where \n$M_{\\text{g}} := \\pi \\ell^2 (n_0 + \\mathcal Q \\triangle n/2)$ \nis the gravitational mass of the ball with radius $\\ell$ and \n$M_\\mathrm{p} := n_0 \\pi \\ell^2 $ \nis the mass of the displaced ambient plasma.\nNote that if $\\triangle n<0$ the ball represents a depletion and the buoyancy term has a negative sign, i.e. the depletion will rise. \nWe introduce an inertial mass \n$M_{\\text{i}} := \\pi\\ell^2 (n_0 +2\\triangle n/9)$ \ndifferent from the gravitational mass $M_{\\text{g}}$ in order to \nrecover the initial acceleration in Eq.~\\eqref{eq:acceleration}. \nWe interpret the parameters $\\mathcal Q$ and $2/9$ as geometrical factors \nthat capture the difference of the actual blob form from the idealized\n``top hat'' solution. \nAlso note that the Boussinesq approximation appears in the model as a neglect of inertia, $M_{\\text{i}} = \\pi\\ell^2n_0$.\n\nThe second term is the linear friction term with coefficient $c_1(\\ell)$, which\ndepends on the size of the ball.\nIf we disregard the nonlinear friction, $c_2=0$, Eq.~\\eqref{eq:ball} directly yields a \nmaximum velocity $c_1V^*=\\pi \\ell^2 n g \\mathcal Q\\triangle n/2$.\nFrom our previous considerations $\\max V/\\ensuremath{C_\\mathrm{s}}=\\mathcal Q \\triangle n /2n_0$, we thus identify \n\\begin{align}\n c_1 = \\pi\\ell^2 n_0 g/\\ensuremath{C_\\mathrm{s}}. \n \\label{}\n\\end{align}\nThe linear friction coefficient thus depends on the gravity and the size of the\nball. \n\nThe last term in \\eqref{eq:ball} is the nonlinear friction. The sign of the force depends on whether\nthe ball rises or falls in the ambient plasma. \nIf we disregard linear friction $c_1=0$, we have the maximum velocity \n$V^*= \\sigma(\\triangle n)\\sqrt{\\pi \\ell^2|\\triangle n| g\\mathcal Q/c_2}$, \nwhich must equal \n$\\max V= \\sigma(\\triangle n) \\mathcal R \\sqrt{g \\ell |\\triangle n/n_0|}$ \nand thus\n\\begin{align}\n c_2 = {\\mathcal Q\\pi n_0\\ell }/{\\mathcal R^2}.\n \\label{}\n\\end{align}\nInserting $c_1$ and $c_2$ into Eq.~\\eqref{eq:ball}\nwe can derive the maximum absolute velocity in the form \n\\begin{align}\n \\frac{\\max |V|}{\\ensuremath{C_\\mathrm{s}}} = \n \\left(\\frac{\\mathcal R^2}{\\mathcal Q}\\right) \\frac{\\ell}{R_0} \\left( \n \\left({1+\\left( \\frac{\\mathcal Q}{\\mathcal R} \\right)^{2} \\frac{|\\triangle n|/n_0 }{\\ell/R_0}}\\right)^{1/2}-1 \\right)\n \\label{eq:vmax_theo}\n\\end{align}\nand thus have a concise expression for $\\max |V|$ that captures both the linear\nscaling \\eqref{eq:linear} as well as the square root scaling \\eqref{eq:sqrt}.\nWith Eq.~\\eqref{eq:acceleration} and Eq.~\\eqref{eq:sqrt} respectively Eq.~\\eqref{eq:vmax_theo} we \nfinally arrive at an analytical expression for the time at which the maximum velocity is reached via \n$t_{\\max V} \\sim \\max V/A_0$. Its inverse $\\gamma:=t_{\\max V}^{-1}$ gives the\nglobal interchange growth rate, for which an empirical expression was\npresented in Reference~\\cite{Held2016a}.\n\n\nWe use the open source library FELTOR \nto simulate \nEqs.~\\eqref{eq:generala} and \\eqref{eq:vorticity} with and without \ndrift compression.\nFor numerical stabilty we added small diffusive terms on the right hand \nsides of the equations.\nThe discontinuous Galerkin methods employ three polynomial coefficients and a minimum of $N_x=N_y=768$ grid cells. The box size is $50\\ell$ in order to mitigate \ninfluences of the finite box size on the blob dynamics. \nMoreover, we used the invariants in Eqs. \\eqref{eq:energya} and \\eqref{eq:energyb} as consistency tests to verify the code and repeated simulations \nalso in a gyrofluid model. \nNo differences to the results presented here were found. \nInitial perturbations on the particle density field are given by Eq.~\\eqref{eq:inita},\nwhere the perturbation amplitude $\\triangle n/n_0$ was chosen between $10^{-3}$ and $20$ for blobs and $-10^0$ and $ -10^{-3}$ for depletions. \nDue to computational reasons we show results only for $\\triangle n/n_0\\leq 20$. \n\n\nFor compressible flows we consider two different cases $\\ell/R_0 = 10^{-2}$ and\n$\\ell /R_0 = 10^{-3}$. \n For incompressible flows Eq.~\\eqref{eq:generala} and \\eqref{eq:vorticity}\n can be normalized such that the blob radius is absent from the equations~\\cite{Ott1978, Kube2012}. \n The simulations of incompressible flows can thus be used for both sizes. \nThe numerical code as well as input parameters and output data can be found \nin the supplemental dataset to this contribution~\\cite{Data2017}.\n\n\\begin{figure}[htb]\n \\includegraphics[width=\\columnwidth]{com_blobs}\n \\caption{\n The maximum radial COM velocities of blobs for compressible and incompressible flows are shown. \n The continuous lines show Eq.~\\eqref{eq:vmax_theo} while the \n dashed line shows the square root scaling Eq.~\\eqref{eq:sqrt} with \n $\\mathcal Q = 0.32$ and $\\mathcal R=0.85$.\n }\n \\label{fig:com_blobs}\n\\end{figure}\nIn Fig.~\\ref{fig:com_blobs} we plot the maximum COM velocity for blobs \nwith and without drift compression.\nFor incompressible flows blobs follow the square root scaling almost \nperfectly. Only at very large amplitudes velocities are slightly below\nthe predicted values. \nFor small amplitudes we observe that the compressible blobs follow\na linear scaling. When the amplitudes increase there is a transition to the\nsquare root scaling at around $\\triangle n/n_0 \\simeq 0.5$ for \n$\\ell/R_0=10^{-2}$ and $\\triangle n/n_0 \\simeq 0.05$ for $\\ell/R_0=10^{-3}$, which is consistent with Eq.~\\eqref{eq:vmax_theo} and Reference~\\cite{Kube2016}. \nIn the transition regions the simulated velocities are slightly larger than the predicted ones from Eq.~\\eqref{eq:vmax_theo}.\nBeyond these amplitudes\nthe velocities of compressible and incompressible blobs align. \n\n\\begin{figure}[htb]\n \\includegraphics[width=\\columnwidth]{com_holes}\n \\caption{\n The maximum radial COM velocities of depletions for compressible and incompressible flows are shown. \n The continuous lines show Eq.~\\eqref{eq:vmax_theo} while the \n dashed line shows the square root scaling Eq.~\\eqref{eq:sqrt} with \n $\\mathcal Q = 0.32$ and $\\mathcal R=0.85$.\n Note that small amplitudes are on the right and amplitudes close to unity are on the left side.\n }\n \\label{fig:com_depletions}\n\\end{figure}\nIn Fig.~\\ref{fig:com_depletions} we show the maximum radial COM velocity \nfor depletions instead of blobs.\nFor relative amplitudes below $|\\triangle n|/n_0 \\simeq 0.5$ (right of unity in the plot) the velocities\ncoincide with the corresponding blob velocities in Fig.~\\ref{fig:com_blobs}. \n For amplitudes larger than $|\\triangle n|/n_0\\simeq 0.5$ the \nvelocities follow the square root scaling.\nWe observe that for plasma depletions beyond $90$ percent the velocities \nin both systems reach a constant value that is very well predicted by the\nsquare root scaling. \n\n\\begin{figure}[htb]\n \\includegraphics[width=\\columnwidth]{acc_blobs}\n \\caption{\n Average acceleration of blobs for compressible and incompressible flows are shown.\n The continuous line shows the acceleration in Eq.~\\eqref{eq:acceleration} \n with $\\mathcal Q=0.32$\n while the dashed line is a linear reference line, which corresponds to the Boussinesq approximation. \n }\n \\label{fig:acc_blobs}\n\\end{figure}\nIn Fig.~\\ref{fig:acc_blobs} we show the average acceleration of blobs \nfor compressible and incompressible flows computed\nby dividing the maximum velocity $\\max V$ by the time \nto reach this velocity $t_{\\max V}$. \nWe compare the simulation results\nto the theoretical predictions Eq.~\\eqref{eq:acceleration} of our model with and without inertia. \nThe results of the compressible and incompressible systems coincide and fit very\nwell to our theoretical values. \nFor amplitudes larger than unity the acceleration deviates significantly from the prediction with Boussinesq approximation.\n\n\\begin{figure}[htb]\n \\includegraphics[width=\\columnwidth]{acc_holes}\n \\caption{\n Average acceleration of depletions for compressible and incompressible flows are shown.\n The continuous line shows the acceleration in Eq.~\\eqref{eq:acceleration} \n with $\\mathcal Q=0.32$\n while the dashed line is a linear reference line, which corresponds to the Boussinesq approximation. \n }\n \\label{fig:acc_depletions}\n\\end{figure}\nIn Fig.~\\ref{fig:acc_depletions} we show the simulated acceleration of depletions in the\ncompressible and the incompressible systems. We compare the simulation results\nto the theoretical predictions Eq.~\\eqref{eq:acceleration} of our model with and without inertia.\nDeviations from our theoretical prediction Eq.~\\eqref{eq:acceleration} are visible for amplitudes smaller than $\\triangle n/n_0 \\simeq -0.5$ (left of unity in the plot). The relative deviations are small at around $20$ percent. \nAs in Fig.~\\ref{fig:com_depletions} the acceleration reaches a constant values\nfor plasma depletions of more than $90$ percent.\nComparing Fig.~\\ref{fig:acc_depletions} to Fig.~\\ref{fig:acc_blobs} the asymmetry between blobs and depletions becomes \napparent. While the acceleration of blobs is reduced for large \namplitudes compared to a linear dependence the acceleration \nof depletions is increased. In the language of our simple buoyancy \nmodel the inertia of depletions is reduced but increased for blobs. \n\n\n\nIn conclusion \n we discuss the dynamics of seeded blobs and depletions in a \n compressible and an incompressible system.\n With only two fit parameters our theoretical results reproduce the \n numerical COM velocities and accelerations over five orders of magnitude.\n We derive the amplitude dependence of the acceleration of blobs and depletions from \n the conservation laws of our systems in Eq.~\\eqref{eq:acceleration}. \n From the same inequality a linear regime is derived in the compressible system for \n ratios of amplitudes to sizes smaller than a critical value.\n In this regime \n the blob and depletion velocity depends linearly on the initial amplitude and \n is independent of size. The regime is absent from the system with incompressible flows.\n Our theoretical results are verified by numerical simulations for all \n amplitudes that are relevant in magnetic fusion devices.\n Finally, we suggest a new empirical blob model that captures the detailed dynamics of more complicated models. \n The Boussinesq approximation is clarified as the absence of inertia and a thus altered acceleration of blobs and depletions.\n The maximum blob velocity is not altered by the Boussinesq approximation.\n\nThe authors were supported with financial subvention from the Research Council of Norway under grant\n240510/F20. M.W. and M.H. were supported by the Austrian Science Fund (FWF) Y398. The computational\nresults presented have been achieved in part using the Vienna Scientific Cluster (VSC). Part of this work was performed on the Abel Cluster, owned by the University of Oslo and the Norwegian metacenter\nfor High Performance Computing (NOTUR), and operated by the Department for Research Computing at USIT,\nthe University of Oslo IT-department.\nThis work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under grant agreement No 633053. The views and opinions expressed herein do not necessarily reflect those of the European Commission.", "answers": ["The maximum velocity scales with the square root of the amplitude."], "length": 2748, "dataset": "multifieldqa_en", "language": "en", "all_classes": null, "_id": "7a503a81877d3baca86f8d7179209e4899823433ab3326f3"}
{"input": "How does the performance of the PLM with decimation compare to other methods?", "context": "\\section{Introduction}\nGiven a data set and a model with some unknown parameters, the inverse problem aims to find the values of the model parameters that best fit the data. \nIn this work, in which we focus on systems of interacting elements,\n the inverse problem concerns the statistical inference\n of the underling interaction network and of its coupling coefficients from observed data on the dynamics of the system. \n Versions of this problem are encountered in physics, biology (e.g., \\cite{Balakrishnan11,Ekeberg13,Christoph14}), social sciences and finance (e.g.,\\cite{Mastromatteo12,yamanaka_15}), neuroscience (e.g., \\cite{Schneidman06,Roudi09a,tyrcha_13}), just to cite a few, and are becoming more and more important due to the increase in the amount of data available from these fields.\\\\\n \\indent \n A standard approach used in statistical inference is to predict the interaction couplings by maximizing the likelihood function.\n This technique, however, requires the evaluation of the \n \n partition function that, in the most general case, concerns a number of computations scaling exponentially with the system size.\n \n \n Boltzmann machine learning uses Monte Carlo sampling to compute the gradients of the Log-likelihood looking for stationary points \\cite{Murphy12} but this method is computationally manageable only for small systems. A series of faster approximations, such as naive mean-field, independent-pair approximation \\cite{Roudi09a, Roudi09b}, inversion of TAP equations \\cite{Kappen98,Tanaka98}, small correlations expansion \\cite{Sessak09}, adaptive TAP \\cite{Opper01}, adaptive cluster expansion \\cite{Cocco12} or Bethe approximations \\cite{Ricci-Tersenghi12, Nguyen12} have, then, been developed. These techniques take as input means and correlations of observed variables and most of them assume a fully connected graph as underlying connectivity network, or expand around it by perturbative dilution. In most cases, network reconstruction turns out to be not accurate for small data sizes and/or when couplings are strong or, else, if the original interaction network is sparse.\\\\\n\\indent\n A further method, substantially improving performances for small data, is the so-called Pseudo-Likelyhood Method (PLM) \\cite{Ravikumar10}. In Ref. \\cite{Aurell12} Aurell and Ekeberg performed a comparison between PLM and some of the just mentioned mean-field-based algorithms on the pairwise interacting Ising-spin ($\\sigma = \\pm 1$) model, showing how PLM performs sensitively better, especially on sparse graphs and in the high-coupling limit, i.e., for low temperature.\n \n In this work, we aim at performing statistical inference on a model whose interacting variables are continuous $XY$ spins, i.e., $\\sigma \\equiv \\left(\\cos \\phi,\\sin \\phi\\right)$ with $\\phi \\in [0, 2\\pi )$. The developed tools can, actually, be also straightforward applied to the $p$-clock model \\cite{Potts52} where the phase $\\phi$ takes discretely equispaced $p$ values in the $2 \\pi$ interval, $\\phi_a = a 2 \\pi/p$, with $a= 0,1,\\dots,p-1$. The $p$-clock model, else called vector Potts model, gives a hierarchy of discretization of the $XY$ model as $p$ increases. For $p=2$, one recovers the Ising model, for $p=4$ the Ashkin-Teller model \\cite{Ashkin43}, for $p=6$ the ice-type model \\cite{Pauling35,Baxter82} and the eight-vertex model \\cite{Sutherland70,Fan70,Baxter71} for $p=8$. \nIt turns out to be very useful also for numerical implementations of the continuous $XY$ model. \nRecent analysis on the multi-body $XY$ model has shown that for a limited number of discrete phase values ($p\\sim 16, 32$) the thermodynamic critical properties of the $p\\to\\infty$ $XY$ limit are promptly recovered \\cite{Marruzzo15, Marruzzo16}. \nOur main motivation to study statistical inference is that these kind of models have recently turned out to be rather useful in describing the behavior of optical systems, \nincluding standard mode-locking lasers \\cite{Gordon02,Gat04,Angelani07,Marruzzo15} and random lasers \\cite{Angelani06a,Leuzzi09a,Antenucci15a,Antenucci15b,Marruzzo16}. \nIn particular, the inverse problem on the pairwise XY model analyzed here might be of help in recovering images from light propagated through random media. \n\n\n This paper is organized as follows: in Sec. \\ref{sec:model} we introduce the general model and we discuss its derivation also as a model for light transmission through random scattering media. \n In Sec. \\ref{sec:plm} we introduce the PLM with $l_2$ regularization and with decimation, two variants of the PLM respectively introduced in Ref. \\cite{Wainwright06} and \\cite{Aurell12} for the inverse Ising problem. \n Here, we analyze these techniques for continuous $XY$ spins and we test them on thermalized data generated by Exchange Monte Carlo numerical simulations of the original model dynamics. In Sec. \\ref{sec:res_reg} we present the results related to the PLM-$l_2$. In Sec. \\ref{sec:res_dec} the results related to the PLM with decimation are reported and its performances are compared to the PLM-$l_2$ and to a variational mean-field method analyzed in Ref. \\cite{Tyagi15}. In Sec. \\ref{sec:conc}, we outline conclusive remarks and perspectives.\n\n \\section{The leading $XY$ model}\n \\label{sec:model}\n The leading model we are considering is defined, for a system of $N$ angular $XY$ variables, by the Hamiltonian \n \\begin{equation}\n \\mathcal{H} = - \\sum_{ik}^{1,N} J_{ik} \\cos{\\left(\\phi_i-\\phi_k\\right)} \n \\label{eq:HXY}\n \n \\end{equation} \n \n The $XY$ model is well known in statistical mechanics, displaying important physical\n insights, starting from the Berezinskii-Kosterlitz-Thouless\n transition in two dimensions\\cite{Berezinskii70,Berezinskii71,Kosterlitz72} and moving to, e.g., the\n transition of liquid helium to its superfluid state \\cite{Brezin82}, the roughening transition of the interface of a crystal in equilibrium with its vapor \\cite{Cardy96}. In presence of disorder and frustration \\cite{Villain77,Fradkin78} the model has been adopted to describe synchronization problems as the Kuramoto model \\cite{Kuramoto75} and in the theoretical modeling of Josephson junction arrays \\cite{Teitel83a,Teitel83b} and arrays of coupled lasers \\cite{Nixon13}.\n Besides several derivations and implementations of the model in quantum and classical physics, equilibrium or out of equilibrium, ordered or fully frustrated systems, Eq. (\\ref{eq:HXY}), in its generic form,\n has found applications also in other fields. A rather fascinating example being the behavior of starlings flocks \\cite{Reynolds87,Deneubourg89,Huth90,Vicsek95, Cavagna13}.\n Our interest on the $XY$ model resides, though, in optics. Phasor and phase models with pairwise and multi-body interaction terms can, indeed, describe the behavior of electromagnetic modes in both linear and nonlinear optical systems in the analysis of problems such as light propagation and lasing \\cite{Gordon02, Antenucci15c, Antenucci15d}. As couplings are strongly frustrated, these models turn out to be especially useful to the study of optical properties in random media \\cite{Antenucci15a,Antenucci15b}, as in the noticeable case of random lasers \\cite{Wiersma08,Andreasen11,Antenucci15e} and they might as well be applied to linear scattering problems, e.g., propagation of waves in opaque systems or disordered fibers. \n \n \n \\subsection{A propagating wave model}\n We briefly mention a derivation of the model as a proxy for the propagation of light through random linear media. \n Scattering of light is held responsible to obstruct our view and make objects opaque. Light rays, once that they enter the material, only exit after getting scattered multiple times within the material. In such a disordered medium, both the direction and the phase of the propagating waves are random. Transmitted light \n yields a disordered interference pattern typically having low intensity, random phase and almost no resolution, called a speckle. Nevertheless, in recent years it has been realized that disorder is rather a blessing in disguise \\cite{Vellekoop07,Vellekoop08a,Vellekoop08b}. Several experiments have made it possible to control the behavior of light and other optical processes in a given random disordered medium, \n by exploiting, e.g., the tools developed for wavefront shaping to control the propagation of light and to engineer the confinement of light \\cite{Yilmaz13,Riboli14}.\n \\\\\n \\indent\n In a linear dielectric medium, light propagation can be described through a part of the scattering matrix, the transmission matrix $\\mathbb{T}$, linking the outgoing to the incoming fields. \n Consider the case in which there are $N_I$ incoming channels and $N_O$ outgoing ones; we can indicate with $E^{\\rm in,out}_k$ the input/output electromagnetic field phasors of channel $k$. In the most general case, i.e., without making any particular assumptions on the field polarizations, each light mode and its polarization polarization state can be represented by means of the $4$-dimensional Stokes vector. Each $ t_{ki}$ element of $\\mathbb{T}$, thus, is a $4 \\times 4$ M{\\\"u}ller matrix. If, on the other hand, we know that the source is polarized and the observation is made on the same polarization, one can use a scalar model and adopt Jones calculus \\cite{Goodman85,Popoff10a,Akbulut11}:\n \\begin{eqnarray}\n E^{\\rm out}_k = \\sum_{i=1}^{N_I} t_{ki} E^{\\rm in}_i \\qquad \\forall~ k=1,\\ldots,N_O\n \\label{eq:transm}\n \\end{eqnarray}\n We recall that the elements of the transmission matrix are random complex coefficients\\cite{Popoff10a}. For the case of completely unpolarized modes, we can also use a scalar model similar to Eq. \\eqref{eq:transm}, but whose variables are the intensities of the outgoing/incoming fields, rather than the fields themselves.\\\\ \nIn the following, for simplicity, we will consider Eq. (\\ref{eq:transm}) as our starting point,\nwhere $E^{\\rm out}_k$, $E^{\\rm in}_i$ and $t_{ki}$ are all complex scalars. \nIf Eq. \\eqref{eq:transm} holds for any $k$, we can write:\n \\begin{eqnarray}\n \\int \\prod_{k=1}^{N_O} dE^{\\rm out}_k \\prod_{k=1}^{N_O}\\delta\\left(E^{\\rm out}_k - \\sum_{j=1}^{N_I} t_{kj} E^{\\rm in}_j \\right) = 1\n \\nonumber\n \\\\\n \\label{eq:deltas}\n \\end{eqnarray}\n\n Observed data are a noisy representation of the true values of the fields. Therefore, in inference problems it is statistically more meaningful to take that noise into account in a probabilistic way, \n rather than looking at the precise solutions of the exact equations (whose parameters are unknown). \n To this aim we can introduce Gaussian distributions whose limit for zero variance are the Dirac deltas in Eq. (\\ref{eq:deltas}).\n Moreover, we move to consider the ensemble of all possible solutions of Eq. (\\ref{eq:transm}) at given $\\mathbb{T}$, looking at all configurations of input fields. We, thus, define the function:\n \n \\begin{eqnarray}\n Z &\\equiv &\\int_{{\\cal S}_{\\rm in}} \\prod_{j=1}^{N_I} dE^{\\rm in}_j \\int_{{\\cal S}_{\\rm out}}\\prod_{k=1}^{N_O} dE^{\\rm out}_k \n \\label{def:Z}\n\\\\\n \\times\n &&\\prod_{k=1}^{N_O}\n \\frac{1}{\\sqrt{2\\pi \\Delta^2}} \\exp\\left\\{-\\frac{1}{2 \\Delta^2}\\left|\n E^{\\rm out}_k -\\sum_{j=1}^{N_I} t_{kj} E^{\\rm in}_j\\right|^2\n\\right\\} \n\\nonumber\n \\end{eqnarray}\n We stress that the integral of Eq. \\eqref{def:Z} is not exactly a Gaussian integral. Indeed, starting from Eq. \\eqref{eq:deltas}, two constraints on the electromagnetic field intensities must be taken into account. \n\n The space of solutions is delimited by the total power ${\\cal P}$ received by system, i.e., \n ${\\cal S}_{\\rm in}: \\{E^{\\rm in} |\\sum_k I^{\\rm in}_k = \\mathcal{P}\\}$, also implying a constraint on the total amount of energy that is transmitted through the medium, i. e., \n ${\\cal S}_{\\rm out}:\\{E^{\\rm out} |\\sum_k I^{\\rm out}_k=c\\mathcal{P}\\}$, where the attenuation factor $c<1$ accounts for total losses.\n As we will see more in details in the following, being interested in inferring the transmission matrix through the PLM, we can omit to explicitly include these terms in Eq. \\eqref{eq:H_J} since they do not depend on $\\mathbb{T}$ not adding any information on the gradients with respect to the elements of $\\mathbb{T}$.\n \n Taking the same number of incoming and outcoming channels, $N_I=N_O=N/2$, and ordering the input fields in the first $N/2$ mode indices and the output fields in the last $N/2$ indices, we can drop the ``in'' and ``out'' superscripts and formally write $Z$ as a partition function\n \\begin{eqnarray}\n \\label{eq:z}\n && Z =\\int_{\\mathcal S} \\prod_{j=1}^{N} dE_j \\left( \\frac{1}{\\sqrt{2\\pi \\Delta^2}} \\right)^{N/2} \n \\hspace*{-.4cm} \\exp\\left\\{\n -\\frac{ {\\cal H} [\\{E\\};\\mathbb{T}] }{2\\Delta^2}\n \\right\\}\n \\\\\n&&{\\cal H} [\\{E\\};\\mathbb{T}] =\n- \\sum_{k=1}^{N/2}\\sum_{j=N/2+1}^{N} \\left[E^*_j t_{jk} E_k + E_j t^*_{kj} E_k^* \n\\right]\n \\nonumber\n\\\\\n&&\\qquad\\qquad \\qquad + \\sum_{j=N/2+1}^{N} |E_j|^2+ \\sum_{k,l}^{1,N/2}E_k\nU_{kl} E_l^*\n \\nonumber\n \\\\\n \\label{eq:H_J}\n &&\\hspace*{1.88cm } = - \\sum_{nm}^{1,N} E_n J_{nm} E_m^*\n \\end{eqnarray}\n where ${\\cal H}$ is a real-valued function by construction, we have introduced the effective input-input coupling matrix\n\\begin{equation}\nU_{kl} \\equiv \\sum_{j=N/2+1}^{N}t^*_{lj} t_{jk} \n \\label{def:U}\n \\end{equation}\n and the whole interaction matrix reads (here $\\mathbb{T} \\equiv \\{ t_{jk} \\}$)\n \\begin{equation}\n \\label{def:J}\n \\mathbb J\\equiv \\left(\\begin{array}{ccc|ccc}\n \\phantom{()}&\\phantom{()}&\\phantom{()}&\\phantom{()}&\\phantom{()}&\\phantom{()}\\\\\n \\phantom{()}&-\\mathbb{U} \\phantom{()}&\\phantom{()}&\\phantom{()}&{\\mathbb{T}}&\\phantom{()}\\\\\n\\phantom{()}&\\phantom{()}&\\phantom{()}&\\phantom{()}&\\phantom{()}&\\phantom{()}\\\\\n \\hline\n\\phantom{()}&\\phantom{()}&\\phantom{()}&\\phantom{()}&\\phantom{()}&\\phantom{()}\\\\\n \\phantom{()}& \\mathbb T^\\dagger&\\phantom{()}&\\phantom{()}& - \\mathbb{I} &\\phantom{()}\\\\\n\\phantom{a}&\\phantom{a}&\\phantom{a}&\\phantom{a}&\\phantom{a}&\\phantom{a}\\\\\n \\end{array}\\right)\n \\end{equation}\n \n Determining the electromagnetic complex amplitude configurations that minimize the {\\em cost function} ${\\cal H}$, Eq. (\\ref{eq:H_J}), means to maximize the overall distribution peaked around the solutions of the transmission Eqs. (\\ref{eq:transm}). As the variance $\\Delta^2\\to 0$, eventually, the initial set of Eqs. (\\ref{eq:transm}) are recovered. The ${\\cal H}$ function, thus, plays the role of an Hamiltonian and $\\Delta^2$ the role of a noise-inducing temperature. The exact numerical problem corresponds to the zero temperature limit of the statistical mechanical problem. Working with real data, though, which are noisy, a finite ``temperature''\n allows for a better representation of the ensemble of solutions to the sets of equations of continuous variables. \n \n\n \n \n Now, we can express every phasor in Eq. \\eqref{eq:z} as $E_k = A_k e^{\\imath \\phi_k}$. As a working hypothesis we will consider the intensities $A_k^2$ as either homogeneous or as \\textit{quenched} with respect to phases.\nThe first condition occurs, for instance, to the input intensities $|E^{\\rm in}_k|$ produced by a phase-only spatial light modulator (SLM) with homogeneous illumination \\cite{Popoff11}.\nWith \\textit{quenched} here we mean, instead, that the intensity of each mode is the same for every solution of Eq. \\eqref{eq:transm} at fixed $\\mathbb T$.\nWe stress that, including intensities in the model does not preclude the inference analysis but it is out of the focus of the present work and will be considered elsewhere. \n\nIf all intensities are uniform in input and in output, this amount to a constant rescaling for each one of the four sectors of matrix $\\mathbb J$ in Eq. (\\ref{def:J}) that will not change the properties of the matrices.\nFor instance, if the original transmission matrix is unitary, so it will be the rescaled one and the matrix $\\mathbb U$ will be diagonal.\nOtherwise, if intensities are \\textit{quenched}, i.e., they can be considered as constants in Eq. (\\ref{eq:transm}),\nthey are inhomogeneous with respect to phases. The generic Hamiltonian element will, therefore, rescale as \n \\begin{eqnarray}\n E^*_n J_{nm} E_m = J_{nm} A_n A_m e^{\\imath (\\phi_n-\\phi_m)} \\to J_{nm} e^{\\imath (\\phi_n-\\phi_m)}\n \\nonumber\n \\end{eqnarray}\n and the properties of the original $J_{nm}$ components are not conserved in the rescaled one. In particular, we have no argument, anymore, to possibly set the rescaled $U_{nm}\\propto \\delta_{nm}$.\n Eventually, we end up with the complex couplings $XY$ model, whose real-valued Hamiltonian is written as\n \\begin{eqnarray}\n \\mathcal{H}& = & - \\frac{1}{2} \\sum_{nm} J_{nm} e^{-\\imath (\\phi_n - \\phi_m)} + \\mbox{c.c.} \n \\label{eq:h_im}\n\\\\ &=& - \\frac{1}{2} \\sum_{nm} \\left[J^R_{nm} \\cos(\\phi_n - \\phi_m)+\n J^I_{nm}\\sin (\\phi_n - \\phi_m)\\right] \n \\nonumber\n \\end{eqnarray}\nwhere $J_{nm}^R$ and $J_{nm}^I$ are the real and imaginary parts of $J_{nm}$. Being $\\mathbb J$ Hermitian, $J^R_{nm}=J^R_{mn}$ is symmetric and $J_{nm}^I=-J_{mn}^I$ is skew-symmetric.\n\n\\begin{comment}\n\\textcolor{red}{\nF: comment about quenched:\nI think that to obtain the XY model, it is not necessary that the intensities are strictly quenched (that is also a quite unfeasible situation, I guess).\nIndeed eq (2) does not deal with the dynamics of the modes, but just connect the in and out ones.\nFor this, what it is necessary to have the XY model, it is that the intensities are always the same on the different samples\n(so that the matrix $t_{ij}$ is the same for different phase data). If the intensities are fixed, then they can be incorporated in $t_{ij}$ and eq (2) can be written just for phases as described. \\\\\n}\n\\end{comment}\n\n\n \\section{Pseudolikelihood Maximization}\n \\label{sec:plm}\nThe inverse problem consists in the reconstruction of the parameters $J_{nm}$ of the Hamiltonian, Eq. (\\ref{eq:h_im}). \nGiven a set of $M$ data configurations of $N$ spins\n $\\bm\\sigma = \\{ \\cos \\phi_i^{(\\mu)},\\sin \\phi_i^{(\\mu)} \\}$, $i = 1,\\dots,N$ and $\\mu=1,\\dots,M$, we want to \\emph{infer} the couplings:\n \\begin{eqnarray}\n\\bm \\sigma \\rightarrow \\mathbb{J} \n\\nonumber\n \\end{eqnarray}\n With this purpose in mind,\n in the rest of this section we implement the working equations for the techniques used. \n In order to test our methods, we generate the input data, i.e., the configurations, by Monte-Carlo simulations of the model.\n The joint probability distribution of the $N$ variables $\\bm{\\phi}\\equiv\\{\\phi_1,\\dots,\\phi_N\\}$, follows the Gibbs-Boltzmann distribution:\n \\begin{equation}\\label{eq:p_xy}\n P(\\bm{\\phi}) = \\frac{1}{Z} e^{-\\beta \\mathcal{H\\left(\\bm{\\phi}\\right)}} \\quad \\mbox{ where } \\quad Z = \\int \\prod_{k=1}^N d\\phi_k e^{-\\beta \\mathcal{H\\left(\\bm{\\phi}\\right)}} \n \\end{equation}\n and where we denote $\\beta=\\left( 2\\Delta^2 \\right)^{-1}$ with respect to Eq. (\\ref{def:Z}) formalism.\n In order to stick to usual statistical inference notation, in the following we will rescale the couplings by a factor $\\beta / 2$: $\\beta J_{ij}/2 \\rightarrow J_{ij}$. \n The main idea of the PLM is to work with the conditional probability distribution of one variable $\\phi_i$ given all other variables, \n $\\bm{\\phi}_{\\backslash i}$:\n \n \\begin{eqnarray}\n\t\\nonumber\n P(\\phi_i | \\bm{\\phi}_{\\backslash i}) &=& \\frac{1}{Z_i} \\exp \\left \\{ {H_i^x (\\bm{\\phi}_{\\backslash i})\n \t\\cos \\phi_i + H_i^y (\\bm{\\phi}_{\\backslash i}) \\sin \\phi_i } \\right \\}\n\t\\\\\n \\label{eq:marginal_xy}\n\t&=&\\frac{e^{H_i(\\bm{\\phi}_{\\backslash i}) \\cos{\\left(\\phi_i-\\alpha_i(\\bm{\\phi}_{\\backslash i})\\right)}}}{2 \\pi I_0(H_i)}\n \\end{eqnarray}\n where $H_i^x$ and $H_i^y$ are defined as\n \\begin{eqnarray}\n H_i^x (\\bm{\\phi}_{\\backslash i}) &=& \\sum_{j (\\neq i)} J^R_{ij} \\cos \\phi_j - \\sum_{j (\\neq i) } J_{ij}^{I} \\sin \\phi_j \\phantom{+ h^R_i} \\label{eq:26} \\\\\n H_i^y (\\bm{\\phi}_{\\backslash i}) &=& \\sum_{j (\\neq i)} J^R_{ij} \\sin \\phi_j + \\sum_{j (\\neq i) } J_{ij}^{I} \\cos \\phi_j \\phantom{ + h_i^{I} }\\label{eq:27}\n \\end{eqnarray}\nand $H_i= \\sqrt{(H_i^x)^2 + (H_i^y)^2}$, $\\alpha_i = \\arctan H_i^y/H_i^x$ and we introduced the modified Bessel function of the first kind:\n \\begin{equation}\n \\nonumber\n I_k(x) = \\frac{1}{2 \\pi}\\int_{0}^{2 \\pi} d \\phi e^{x \\cos{ \\phi}}\\cos{k \\phi}\n \\end{equation}\n \n Given $M$ observation samples $\\bm{\\phi}^{(\\mu)}=\\{\\phi^\\mu_1,\\ldots,\\phi^\\mu_N\\}$, $\\mu = 1,\\dots, M$, the\n pseudo-loglikelihood for the variable $i$ is given by the logarithm of Eq. (\\ref{eq:marginal_xy}),\n \\begin{eqnarray}\n \\label{eq:L_i}\n L_i &=& \\frac{1}{M} \\sum_{\\mu = 1}^M \\ln P(\\phi_i^{(\\mu)}|\\bm{\\phi}^{(\\mu)}_{\\backslash i})\n \\\\\n \\nonumber\n & =& \\frac{1}{M} \\sum_{\\mu = 1}^M \\left[ H_i^{(\\mu)} \\cos( \\phi_i^{(\\mu)} - \\alpha_i^{(\\mu)}) - \\ln 2 \\pi I_0\\left(H_i^{(\\mu)}\\right)\\right] \\, .\n \\end{eqnarray}\nThe underlying idea of PLM is that an approximation of the true parameters of the model is obtained for values that maximize the functions $L_i$.\nThe specific maximization scheme differentiates the different techniques.\n\n\n \n \n \\subsection{PLM with $l_2$ regularization}\n Especially for the case of sparse graphs, it is useful to add a regularizer, which prevents the maximization routine to move towards high values of \n $J_{ij}$ and $h_i$ without converging. We will adopt an $l_2$ regularization so that the Pseudolikelihood function (PLF) at site $i$ reads:\n \\begin{equation}\\label{eq:plf_i}\n {\\cal L}_i = L_i\n - \\lambda \\sum_{i \\neq j} \\left(J_{ij}^R\\right)^2 - \\lambda \\sum_{i \\neq j} \\left(J_{ij}^I\\right)^2 \n \\end{equation}\n with $\\lambda>0$.\n Note that the values of $\\lambda$ have to be chosen arbitrarily, but not too large, in order not to overcome $L_i$.\n The standard implementation of the PLM consists in maximizing each ${\\cal L}_i$, for $i=1\\dots N$, separately. The expected values of the couplings are then:\n \\begin{equation}\n \\{ J_{i j}^*\\}_{j\\in \\partial i} := \\mbox{arg max}_{ \\{ J_{ij} \\}}\n \\left[{\\cal L}_i\\right]\n \\end{equation}\n In this way, we obtain two estimates for the coupling $J_{ij}$, one from maximization of ${\\cal L}_i$, $J_{ij}^{(i)}$, and another one from ${\\cal L}_j$, say $J_{ij}^{(j)}$.\n Since the original Hamiltonian of the $XY$ model is Hermitian, we know that the real part of the couplings is symmetric while the imaginary part is skew-symmetric. \n \n The final estimate for $J_{ij}$ can then be obtained averaging the two results:\n \n \n \n \\begin{equation}\\label{eq:symm}\n J_{ij}^{\\rm inferred} = \\frac{J_{ij}^{(i)} + \\bar{J}_{ij}^{(j)}}{2} \n \\end{equation}\n where with $\\bar{J}$ we indicate the complex conjugate.\n It is worth noting that the pseudolikelihood $L_i$, Eq. \\eqref{eq:L_i}, is characterized by the\n following properties: (i) the normalization term of Eq.\\eqref{eq:marginal_xy} can be\n computed analytically at odd with the {\\em full} likelihood case that\n in general require a computational time which scales exponentially\n with the size of the systems; (ii) the $\\ell_2$-regularized pseudolikelihood\n defined in Eq.\\eqref{eq:plf_i} is strictly concave (i.e. it has a single\n maximizer)\\cite{Ravikumar10}; (iii) it is consistent, i.e. if $M$ samples are\n generated by a model $P(\\phi | J*)$ the maximizer tends to $J*$\n for $M\\rightarrow\\infty$\\cite{besag1975}. Note also that (iii) guarantees that \n $|J^{(i)}_{ij}-J^{(j)}_{ij}| \\rightarrow 0$ for $M\\rightarrow \\infty$.\n In Secs. \\ref{sec:res_reg}, \\ref{sec:res_dec} \n we report the results obtained and we analyze the performances of the PLM having taken the configurations from Monte-Carlo simulations of models whose details are known.\n \n\n \n \\subsection{PLM with decimation}\n Even though the PLM with $l_2$-regularization allows to dwell the inference towards the low temperature region and in the low sampling case with better performances that mean-field methods, in some situations some couplings are overestimated and not at all symmetric. Moreover, in the technique there is the bias of the $l_2$ regularizer.\n Trying to overcome these problems, Decelle and Ricci-Tersenghi introduced a new method \\cite{Decelle14}, known as PLM + decimation: the algorithm maximizes the sum of the $L_i$,\n \\begin{eqnarray}\n {\\cal L}\\equiv \\frac{1}{N}\\sum_{i=1}^N \\mbox{L}_i\n \\end{eqnarray} \n and, then, it recursively set to zero couplings which are estimated very small. We expect that as long as we are setting to zero couplings that are unnecessary to fit the data, there should be not much changing on ${\\cal L}$. Keeping on with decimation, a point is reached where ${\\cal L}$ decreases abruptly indicating that relevant couplings are being decimated and under-fitting is taking place.\n Let us define by $x$ the fraction of non-decimated couplings. To have a quantitative measure for the halt criterion of the decimation process, a tilted ${\\cal L}$ is defined as,\n \\begin{eqnarray}\n \\mathcal{L}_t &\\equiv& \\mathcal{L} - x \\mathcal{L}_{\\textup{max}} - (1-x) \\mathcal{L}_{\\textup{min}} \\label{$t$PLF} \n \\end{eqnarray}\n where \n \\begin{itemize}\n \\item $\\mathcal{L}_{\\textup{min}}$ is the pseudolikelyhood of a model with independent variables. In the XY case: $\\mathcal{L}_{\\textup{min}}=-\\ln{2 \\pi}$.\n \\item\n $\\mathcal{L}_{\\textup{max}}$ is the pseudolikelyhood in the fully-connected model and it is maximized over all the $N(N-1)/2$ possible couplings. \n \\end{itemize}\n At the first step, when $x=1$, $\\mathcal{L}$ takes value $\\mathcal{L}_{\\rm max}$ and $\\mathcal{L}_t=0$. On the last step, for an empty graph, i.e., $x=0$, $\\mathcal{L}$ takes the value $\\mathcal{L}_{\\rm min}$ and, hence, again $\\mathcal{L}_t =0$. \n In the intermediate steps, during the decimation procedure, as $x$ is decreasing from $1$ to $0$, one observes firstly that $\\mathcal{L}_t$ increases linearly and, then, it displays an abrupt decrease indicating that from this point on relevant couplings are being decimated\\cite{Decelle14}. In Fig. \\ref{Jor1-$t$PLF} we give an instance of this behavior for the 2D short-range XY model with ordered couplings. We notice that the maximum point of $\\mathcal{L}_t$ coincides with the minimum point of the reconstruction error, the latter defined as \n \\begin{eqnarray}\\label{eq:errj}\n \\mbox{err}_J \\equiv \\sqrt{\\frac{\\sum_{i<j} (J^{\\rm inferred}_{ij} -J^{\\rm true}_{ij})^2}{N(N-1)/2}} \\label{err}\n \\end{eqnarray}\n We stress that the ${\\cal L}_t$ maximum is obtained ignoring the underlying graph, while the err$_J$ minimum can be evaluated once the true graph has been reconstructed. \n \n \\begin{figure}[t!]\n \t\\centering\n \t\\includegraphics[width=1\\linewidth]{Jor1_dec_tPLF_new.eps}\n \t\\caption{The tilted likelyhood ${\\cal L}_t$ curve and the reconstruction error vs the number of decimated couplings for an ordered, real-valued J on 2D XY model with $N=64$ spins. The peak of ${\\cal L}_t$ coincides with the dip of the error.} \n \t\\label{Jor1-$t$PLF}\n \\end{figure} \n \n \n In the next sections we will show the results obtained on the $XY$ model analyzing the performances of the two methods and comparing them also with a mean-field method \\cite{Tyagi15}.\n \n \n \n \\section{Inferred couplings with PLM-$l_2$}\n \\label{sec:res_reg}\n \\subsection{$XY$ model with real-valued couplings}\n \n In order to obtain the vector of couplings, $J_{ij}^{\\rm inferred}$ the function $-\\mathcal{L}_i$ is minimized through the vector of derivatives ${\\partial \\mathcal{L}_i}/\\partial J_{ij}$. The process is repeated for all the couplings obtaining then a fully connected adjacency matrix. The results here presented are obtained with $\\lambda = 0.01$.\n For the minimization we have used the MATLAB routine \\emph{minFunc\\_2012}\\cite{min_func}. \n \n \\begin{figure}[t!]\n \t\\centering\n \t\\includegraphics[width=1\\linewidth]{Jor11_2D_l2_JR_soJR_TPJR}\n \t\\caption{Top panels: instances of single site coupling reconstruction for the case of $N=64$ XY spins on a 2D lattice with ordered $J$ (left column) and bimodal distributed $J$ (right column). \n \t\n \tBottom panels: sorted couplings.}\n \t\\label{PL-Jor1}\n \\end{figure}\n\n \nTo produce the data by means of numerical Monte Carlo simulations a system with $N=64$ spin variables is considered on a deterministic 2D lattice with periodic boundary conditions. \nEach spin has then connectivity $4$, i.e., we expect to infer an adjacency matrix with $N c = 256$ couplings different from zero. \nThe dynamics of the simulated model is based on the Metropolis algorithm and parallel tempering\\cite{earl05} is used to speed up the thermalization of the system.\nThe thermalization is tested looking at the average energy over logarithmic time windows and\nthe acquisition of independent configurations\nstarts only after the system is well thermalized.\n\n For the values of the couplings we considered two cases: an ordered case, indicated in the figure as $J$ ordered (e.g., left column of Fig. \\ref{PL-Jor1}) where the couplings can take values $J_{ij}=0,J$, with $J=1$, \n and a quenched disordered case, indicated in the figures as $J$ disordered (e.g., right column of Fig. \\ref{PL-Jor1})\n where the couplings can take also negative values, i.e., \n $J_{ij}=0,J,-J$, with a certain probability. The results here presented were obtained with bimodal distributed $J$s: \n \n $P(J_{ij}=J)=P(J_{ij}=-J)=1/2$. The performances of the PLM have shown not to depend on $P(J)$. \n \n We recall that in Sec. \\ref{sec:plm} we used the temperature-rescaled notation, i.e., $J_{ij}$ stands for $J_{ij}/T$. \n \n To analyze the performances of the PLM, in Fig. \\ref{PL-Jor1} the inferred couplings, $\\mathbb{J}^R_{\\rm inf}$, are shown on top of the original couplings, $\\mathbb{J}^R_{\\rm true}$.\n The first figure (from top) in the left column shows the $\\mathbb{J}^R_{\\rm inf}$ (black) and the $\\mathbb{J}^R_{\\rm tru}$ (green) for a given spin\n at temperature $T/J=0.7$ and number of samples $M=1024$. PLM appears to reconstruct the correct couplings, though zero couplings are always given a small inferred non-zero value. \n In the left column of Fig. \\ref{PL-Jor1}, both the $\\mathbb{J}^R_{\\rm{inf}}$ and the $\\mathbb{J}^R_{\\rm{tru}}$ are sorted in decreasing order and plotted on top of each other. \n We can clearly see that $\\mathbb{J}^R_{\\rm inf}$ reproduces the expected step function. Even though the jump is smeared, the difference between inferred couplings corresponding to the set of non-zero couplings \n and to the set of zero couplings can be clearly appreciated.\n Similarly, the plots in the right column of Fig. \\ref{PL-Jor1} show the results obtained for the case with bimodal disordered couplings, for the same working temperature and number of samples. \n In particular, note that the algorithm infers half positive and half negative couplings, as expected.\n \n \n\\begin{figure}\n\\centering\n\\includegraphics[width=1\\linewidth]{Jor11_2D_l2_errJ_varT_varM}\n\\caption{Reconstruction error $\\mbox{err}_J$, cf. Eq. (\\ref{eq:errj}), plotted as a function of temperature (left) for three values of the number of samples $M$ and as a function $M$ (right) for three values of temperature in the ordered system, i.e., $J_{ij}=0,1$. \nThe system size is $N=64$.}\n\\label{PL-err-Jor1}\n\\end{figure}\n\nIn order to analyze the effects of the number of samples and of the temperature regimes, we plot in Fig. \\ref{PL-err-Jor1} the reconstruction error, Eq. (\\ref{err}), as a function of temperature for three different sample sizes $M=64,128$ and $512$. \nThe error is seen to sharply rise al low temperature, incidentally, in the ordered case, for $T<T_c \\sim 0.893$, which is the Kosterlitz-Thouless transition temperature of the 2XY model\\cite{Olsson92}. \n However, we can see that if only $M=64$ samples are considered, $\\mbox{err}_J$ remains high independently on the working temperature. \n In the right plot of Fig. \\ref{PL-err-Jor1}, $\\mbox{err}_J$ is plotted as a function of $M$ for three different working temperatures $T/J=0.4,0.7$ and $1.3$. As we expect, \n $\\mbox{err}_J$ decreases as $M$ increases. This effect was observed also with mean-field inference techniques on the same model\\cite{Tyagi15}.\n\nTo better understand the performances of the algorithms, in Fig. \\ref{PL-varTP-Jor1} we show several True Positive (TP) curves obtained for various values of $M$ at three different temperatures $T$. As $M$ is large and/or temperature is not too small, we are able to reconstruct correctly all the couplings present in the system (see bottom plots).\nThe True Positive curve displays how many times the inference method finds a true link of the original network as a function of the index of the vector of sorted absolute value of reconstructed couplings $J_{ij}^{\\rm inf}$. \nThe index $n_{(ij)}$ represents the related spin couples $(ij)$. The TP curve is obtained as follows: \nfirst the values $|J^{\\rm inf}_{ij}|$ are sorted in descending order and the spin pairs $(ij)$ are ordered according to the sorting position of $|J^{\\rm inf}_{ij}|$. Then,\n \ta cycle over the ordered set of pairs $(ij)$, indexed by $n_{(ij)}$, is performed, comparing with the original network coupling $J^{\\rm true}_{ij}$ and verifying whether it is zero or not. The true positive curve is computed as\n\\begin{equation}\n\\mbox{TP}[n_{(ij)}]= \\frac{\\mbox{TP}\\left[n_{(ij)}-1\\right] (n_{ij}-1)+ 1 -\\delta_{J^{\\rm true}_{ij},0}}{n_{(ij)}}\n\\end{equation}\nAs far as $J^{\\rm true}_{ij} \\neq 0$, TP$=1$. As soon as the true coupling of a given $(ij)$ couple in the sorted list is zero, the TP curve departs from one. \nIn our case, where the connectivity per spin of the original system is $c=4$ and there are $N=64$ spins, we know that we will have $256$ non-zero couplings. \n \tIf the inverse problem is successful, hence, we expect a steep decrease of the TP curve when $n_{ij}=256$ is overcome.\n\nIn Fig. \\ref{PL-varTP-Jor1}\nit is shown that, almost independently of $T/J$, the TP score improves as $M$ increases. Results are plotted for three different temperatures, $T=0.4,1$ and $2.2$, with increasing number of samples $M = 64, 128,512$ and $1024$ (clockwise). \nWe can clearly appreciate the improvement in temperature if the size of the data-set is not very large: for small $M$, $T=0.4$ performs better. \nWhen $M$ is high enough (e.g., $M=1024$), instead, the TP curves do not appear to be strongly influenced by the temperature.\n\n\\begin{figure}[t!]\n\t\\centering\n\t\\includegraphics[width=1\\linewidth]{Jor11_2D_l2_TPJR_varT_varM}\n\t\\caption{TP curves for 2D short-range ordered $XY$ model with $N=64$ spins at three different values of $T/J$ with increasing - clockwise from top - $M$.}\n\t\\label{PL-varTP-Jor1} \n\\end{figure} \n\n\\subsection{$XY$ model with complex-valued couplings}\nFor the complex $XY$ we have to contemporary infer $2$ apart coupling matrices, $J^R_{i j}$ and $J^I_{i j}$. As before, a system of $N=64$ spins is considered on a 2D lattice.\nFor the couplings we have considered both ordered and bimodal disordered cases.\nIn Fig. \\ref{PL-Jor3}, a single row of the matrix $J$ (top) and the whole sorted couplings (bottom) are displayed for the ordered model (same legend as in Fig. \\ref{PL-Jor1}) for the real, $J^R$ (left column), and the imaginary part, $J^I$. \n\n\\begin{figure}[t!]\n\t\\centering\n\\includegraphics[width=1\\linewidth]{Jor3_l2_JRJI_soJRJI_TPJRJI}\n\t\\caption{Results related to the ordered complex XY model with $N=64$ spins on a 2D lattice. Top: instances of single site reconstruction for the real, JR (left column), and\n\t\tthe imaginary, JI (right column), part of $J_{ij}$. Bottom: sorted values of JR (left) and JI (right).}\n\t\t\n\t\\label{PL-Jor3}\n\\end{figure}\n \n \n \\section{PLM with Decimation}\n \\label{sec:res_dec}\n \n\n\\begin{figure}[t!]\n \t\\centering\n \t\\includegraphics[width=1\\linewidth]{Jor1_dec_tPLF_varT_varM}\n \t\\caption{Tilted Pseudolikelyhood, ${\\cal L}_t$, plotted as a function of decimated couplings. Top: Different ${\\cal L}_t$ curves obtained for different values of $M$ plotted on top of each other. Here $T=1.3$. The black line indicates the expected number of decimated couplings, $x^*=(N (N-1) - N c)/2=1888$. As we can see, as $M$ increases, the maximum point of ${\\cal L}_t$ approaches $x^*$. Bottom: Different ${\\cal L}_t$ curves obtained for different values of T with $M=2048$. We can see that, with this value of $M$, no differences can be appreciated on the maximum points of the different ${\\cal L}_t$ curves.}\n \t\\label{var-$t$PLF}\n \\end{figure}\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=1\\linewidth]{Jor1_dec_tPLF_peak_statistics_varM_prob.eps}\n \t\\caption{Number of most likely decimated couplings, estimated by the maximum point of $\\mathcal{L}_t$, as a function of the number of samples $M$. We can clearly see that the maximum point of $\\mathcal{L}_t$ tends toward $x^*$, which is the right expected number of zero couplings in the system.} \n \t\\label{PLF_peak_statistics}\n \\end{figure}\n \n For the ordered real-valued XY model we show in Fig. \\ref{var-$t$PLF}, top panel, the outcome on the tilted pseudolikelyhood, $\\mathcal{L}_t$ Eq. \\eqref{$t$PLF}, of the progressive decimation: from a fully connected lattice down to an empty lattice. The figure shows the behaviour of $\\mathcal{L}_t$ for three different data sizes $M$. A clear data size dependence of the maximum point of $\\mathcal{L}_t$, signalling the most likely value for decimation, is shown. For small $M$ the most likely number of couplings is overestimated and for increasing $M$ it tends to the true value, as displayed in Fig. \\ref{PLF_peak_statistics}. In the bottom panel of Fig. \\ref{var-$t$PLF} we display instead different \n $\\mathcal{L}_t$ curves obtained for three different values of $T$.\n Even though the values of $\\mathcal{L}_t$ decrease with increasing temperature, the value of the most likely number of decimated couplings appears to be quite independent on $T$ with $M=2048$ number of samples.\nIn Fig. \\ref{fig:Lt_complex} we eventually display the tilted pseudolikelyhood for a 2D network with complex valued ordered couplings, where the decimation of the real and imaginary coupling matrices proceeds in parallel, that is, \nwhen a real coupling is small enough to be decimated its imaginary part is also decimated, and vice versa.\nOne can see that though the apart errors for the real and imaginary parts are different in absolute values, they display the same dip, to be compared with the maximum point of $\\mathcal{L}_t$.\n \n \\begin{figure}[t!]\n \t\\centering\n \t\\includegraphics[width=1\\linewidth]{Jor3_dec_tPLF_new}\n \t\\caption{Tilted Pseudolikelyhood, ${\\cal L}_t$, plotted with the reconstruction errors for the XY model with $N=64$ spins on a 2D lattice. These results refer to the case of ordered and complex valued couplings. The full (red) line indicates ${\\cal L}_t$. The dashed (green) \n \t\tand the dotted (blue) lines show the reconstruction errors (Eq. \\eqref{eq:errj}) obtained for the real and the imaginary couplings respectively. We can see that both ${\\rm err_{JR}}$ and ${\\rm err_{JI}}$ have a minimum at $x^*$.}\n \t\\label{fig:Lt_complex}\n \\end{figure}\n\n\\begin{figure}[t!]\n \t\\centering\n \t\\includegraphics[width=1\\linewidth]{Jor1_dec_JR_soJR_TPJR}\n \t\\caption{XY model on a 2D lattice with $N=64$ sites and real valued couplings. The graphs show the inferred (dashed black lines) and true couplings (full green lines) plotted on top of each other. The left and right columns refer to the\n \t\t cases of ordered and bimodal disordered couplings, respectively. Top figures: single site reconstruction, i.e., one row of the matrix $J$. Bottom figures: couplings are plotted sorted in descending order.} \n \t\\label{Jor1_dec}\n \\end{figure}\n \n\\begin{figure}[t!]\n \t\\centering\n \t\\includegraphics[width=1\\linewidth]{Jor3_dec_JRJI_soJRJI_TPJRJI}\n \t\\caption{XY model on a 2D lattice with $N=64$ sites and ordered complex-valued couplings.\n \t\tThe inferred and true couplings are plotted on top of each other. The left and right columns show the real and imaginary parts, respectively, of the couplings. Top figures refer to a single site reconstruction, i.e., one row of the matrix $J$. Bottom figures report the couplings sorted in descending order.}\n \t\\label{Jor3_dec}\n \\end{figure}\n \n \n\n\n\n\n\n\n\n \\begin{figure}[t!]\n \t\\centering\n \t\\includegraphics[width=1\\linewidth]{MF_PL_Jor1_2D_TPJR_varT}\n \t\\caption{True Positive curves obtained with the three techniques: PLM with decimation, (blue) dotted line, PLM with $l_2$ regularization, (greed) dashed line, and mean-field, (red) full line. These results refer to real valued ordered couplings with $N=64$ spins on a 2D lattice. The temperature is here $T=0.7$ while the four graphs refer to different sample sizes: $M$ increases clockwise.}\n \t\\label{MF_PL_TP}\n \\end{figure}\n \n \\begin{figure}[t!]\n \t\\centering\n \t\\includegraphics[width=1\\linewidth]{MF_PL_Jor1_2D_errJ_varT_varM}\n \t\\caption{Variation of reconstruction error, ${\\rm err_J}$, with respect to temperature as obtained with the three different techniques, see Fig. \\ref{MF_PL_TP}, for four different sample size: clockwise from top $M=512,1024, 2048$ and $4096$.} \n \t\\label{MF_PL_err}\n \\end{figure}\n \n Once the most likely network has been identified through the decimation procedure, we perform the same analysis displayed in Fig. \\ref{Jor1_dec} for ordered and then quenched disordered real-valued couplings\nand in Fig. \\ref{Jor3_dec} for complex-valued ordered couplings. In comparison to the results shown in Sec. \\ref{sec:res_reg},\n the PLM with decimation leads to rather cleaner results. In Figs. \\ref{MF_PL_err} and \\ref{MF_PL_TP} we compare the performances of the PLM with decimation in respect to ones of the PLM with $l_2$-regularization. These two techniques are also analysed in respect to a mean-field technique previously implemented on the same XY systems\\cite{Tyagi15}.\n \n For what concerns the network of connecting links, in Fig. \\ref{MF_PL_TP} we compare the TP curves obtained with the three techniques. The results refer to the case of ordered and real valued couplings, but similar behaviours were obtained for the other cases analysed. \n The four graphs are related to different sample sizes, with $M$ increasing clockwise. When $M$ is high enough, all techniques reproduce the true network. \n However, for lower values of $M$ the performances of the PLM with $l_2$ regularization and with decimation drastically overcome those ones of the previous mean field technique. \n In particular, for $M=256$ the PLM techniques still reproduce the original network while the mean-field method fails to find more than half of the couplings. \n When $M=128$, the network is clearly reconstructed only through the PLM with decimation while the PLM with $l_2$ regularization underestimates the couplings. \n Furthermore, we notice that the PLM method with decimation is able to clearly infer the network of interaction even when $M=N$ signalling that it could be considered also in the under-sampling regime $M<N$. \n \n \nIn Fig. \\ref{MF_PL_err} we compare the temperature behaviour of the reconstruction error.\nIn can be observed that for all temperatures and for all sample sizes the reconstruction error, ${\\rm err_J}$, (plotted here in log-scale) obtained with the PLM+decimation is always smaller than \nthat one obtained with the other techniques. The temperature behaviour of ${\\rm err_J}$ agrees with the one already observed for Ising spins in \\cite{Nguyen12b} and for XY spins in \\cite{Tyagi15} with a mean-field approach: ${\\rm err_J}$ displays a minimum around $T\\simeq 1$ and then it increases for very lower $T$; however,\n the error obtained with the PLM with decimation is several times smaller than the error estimated by the other methods.\n\n\n\n \n \n\n \n \\section{Conclusions}\n \\label{sec:conc}\n\n\nDifferent statistical inference methods have been applied to the inverse problem of the XY model.\nAfter a short review of techniques based on pseudo-likelihood and their formal generalization to the model we have tested their performances against data generated by means of Monte Carlo numerical simulations of known instances\nwith diluted, sparse, interactions.\n\nThe main outcome is that the best performances are obtained by means of the pseudo-likelihood method combined with decimation. Putting to zero (i.e., decimating) very weak bonds, this technique turns out to be very precise for problems whose real underlying interaction network is sparse, i.e., the number of couplings per variable does not scale with number of variables.\nThe PLM + decimation method is compared to the PLM + regularization method, with $\\ell_2$ regularization and to a mean-field-based method. The behavior of the quality of the network reconstruction is analyzed by looking at the overall sorted couplings and at the single site couplings, comparing them with the real network, and at the true positive curves in all three approaches. In the PLM +decimation method, moreover, the identification of the number of decimated bonds at which the tilted pseudo-likelihood is maximum allows for a precise estimate of the total number of bonds. Concerning this technique, it is also shown that the network with the most likely number of bonds is also the one of least reconstruction error, where not only the prediction of the presence of a bond is estimated but also its value.\n\nThe behavior of the inference quality in temperature and in the size of data samples is also investigated, basically confirming the low $T$ behavior hinted by Nguyen and Berg \\cite{Nguyen12b} for the Ising model. In temperature, in particular, the reconstruction error curve displays a minimum at a low temperature, close to the critical point in those cases in which a critical behavior occurs, and a sharp increase as temperature goes to zero. The decimation method, once again, appears to enhance this minimum of the reconstruction error of almost an order of magnitude with respect to other methods.\n \nThe techniques displayed and the results obtained in this work can be of use in any of the many systems whose theoretical representation is given by Eq. \\eqref{eq:HXY} or Eq. \\eqref{eq:h_im}, some of which are recalled in Sec. \\ref{sec:model}. In particular, a possible application can be the field of light waves propagation through random media and the corresponding problem of the reconstruction of an object seen through an opaque medium or a disordered optical fiber \\cite{Vellekoop07,Vellekoop08a,Vellekoop08b, Popoff10a,Akbulut11,Popoff11,Yilmaz13,Riboli14}.\n\n \t", "answers": ["It outperforms mean-field methods and the PLM with $l_2$ regularization in terms of reconstruction error and true positive rate."], "length": 6312, "dataset": "multifieldqa_en", "language": "en", "all_classes": null, "_id": "f3374a1971e3cfb4f241e04b82d5016d892e9e32a4aa2b53"}
{"input": "What are some reasons for the lack of data sharing in archaeobotany?", "context": "Sowing the Seeds of Future Research: Data Sharing, Citation and Reuse in Archaeobotany\nReading: Sowing the Seeds of Future Research: Data Sharing, Citation and Reuse in Archaeobotany\nUniversity of Oxford, GB\nLisa is a post-doctoral research fellow at All Souls College, University of Oxford. Her publications include the co-authored volume The Rural Economy of Roman Britain (Britannia Monographs, 2017). Her research interests are focussed on agricultural practices in the later prehistoric and Roman period and the utilisation of archaeobotanical data to investigate human-plant relationships.\nThe practices of data sharing, data citation and data reuse are all crucial aspects of the reproducibility of archaeological research. This article builds on the small number of studies reviewing data sharing and citation practices in archaeology, focussing on the data-rich sub-discipline of archaeobotany. Archaeobotany is a sub-discipline built on the time-intensive collection of data on archaeological plant remains, in order to investigate crop choice, crop husbandry, diet, vegetation and a wide range of other past human-plant relationships. Within archaeobotany, the level and form of data sharing is currently unknown. This article first reviews the form of data shared and the method of data sharing in 239 articles across 16 journals which present primary plant macrofossil studies. Second, it assesses data-citation in meta-analysis studies in 107 articles across 20 journals. Third, it assesses data reuse practices in archaeobotany, before exploring how these research practices can be improved to benefit the rigour and reuse of archaeobotanical research.\nKeywords: Archaeobotany, Data reuse, Data sharing, Open science\nHow to Cite: Lodwick, L., 2019. Sowing the Seeds of Future Research: Data Sharing, Citation and Reuse in Archaeobotany. Open Quaternary, 5(1), p.7. DOI: http://doi.org/10.5334/oq.62\nAccepted on 29 May 2019 Submitted on 25 Mar 2019\nArchaeology is a discipline built on the production and analysis of quantitative data pertaining to past human behaviour. As each archaeological deposit is a unique occurrence, ensuring that the data resulting from excavation and analysis are preserved and accessible is crucially important. Currently, there is a general perception of a low level of data sharing and reuse. Such a low level of data availability would prevent the assessment of research findings and the reuse of data in meta-analysis (Kansa & Kansa 2013; Moore & Richards 2015). As observed across scientific disciplines, there is a major problem in the reproduction of scientific findings, commonly known as the ‘replication crisis’ (Costello et al. 2013). A range of intersecting debates contribute to this, including access to academic findings (open access), open data, access to software and access to methodologies, which can be broadly grouped as open science practices. Without these, the way that scientific findings can be verified and built upon is impaired. Questions of reproducibility have been raised in recent years in archaeology, with considerations of a range of practices which can improve the reproducibility of findings, and a recent call for the application of open science principles to archaeology (Marwick et al. 2017). Discussion has so far focussed on access to grey literature (Evans 2015), data sharing (Atici et al. 2013), data citation practices (Marwick & Pilaar Birch 2018) and computational reproducibility (Marwick 2017), with a focus on lithics, zooarchaeological evidence, and archaeological site reports.\nQuantitative assessments of current levels of data sharing, data citation and reuse remain limited in archaeology. The focus of evaluation has been on the uptake of large-scale digital archives for the preservation and dissemination of digital data, such as the Archaeology Data Service (ADS), utilised by developer-led and research projects, and recommended for use by many research funders in the UK (Richards 2002; Wright and Richards 2018). Much less focus has been paid to the data-sharing practices of individuals or small-groups of university-based researchers who may be disseminating their research largely through journal articles. Recent work on the availability of data on lithics assemblages found a low level of data sharing (Marwick & Pilaar Birch 2018) and there are perceptions of low levels of data reuse (Huggett 2018; Kintigh et al. 2018). Within zooarchaeology numerous studies have explored issues of data sharing and reuse (Kansa & Kansa 2013, 2014), and the sub-discipline is seen as one of the most advanced areas of archaeology in regards to open science (Cooper & Green 2016: 273). Beyond zooarchaeology, however, explicit discussion has remained limited.\nThis paper assesses data sharing and reuse practices in archaeology through the case study of archaeobotany – a long established sub-discipline within archaeology which has well-established principles of data recording. Archaeobotany is an interesting case study for data sharing in archaeology as it straddles the division of archaeology between scientific and more traditional techniques. Quantitative data on archaeological plant remains are also of interest to a range of other fields, including ecology, environmental studies, biology and earth sciences. The key issues of data sharing and data reuse (Atici et al. 2013) have been touched upon in archaeobotany over the past decade within broader discussions on data quality (Van der Veen, Livarda & Hill 2007; Van der Veen, Hill & Livarda 2013). These earlier studies focussed on the quality and availability of archaeobotanical data from developer-funded excavations in Britain and Cultural Resource Management in North America (Vanderwarker et al. 2016: 156). However, no discussion of data-sharing and reuse in academic archaeobotany occurred. A recent review of digital methods in archaeobotany is the notable exception, with discussions of the challenges and methods of data sharing (Warinner & d’Alpoim Guedes 2014).\nCurrently, we have no evidence for the levels of data sharing and reuse within archaeobotany. This article provides the first quantitative assessment of 1) data publication in recent archaeobotanical journal articles 2) data citation in recent archaeobotanical meta-analysis 3) the reuse of archaeobotanical datasets, in order to assess whether practices need to change and how such changes can take place.\n2. Data Publication and Re-use Practices in Archaeobotany\n2.1. History of data production and publication\nArchaeobotanical data falls within the category of observational data in archaeology (Marwick & Pilaar Birch 2018). Archaeobotanical data is considered as the quantitative assessment of plant macrofossils present within a sample from a discrete archaeological context, which can include species identification, plant part, levels of identification (cf. – confer or “compares to”), and a range of quantification methods including count, minimum number of individuals, levels of abundance and weight (Popper 1988). Archaeobotanical data is usually entered into a two-way data table organised by sample number. Alongside the counts of individual taxa, other information is also necessary to interpret archaeobotanical data, including sample volume, flot volume, charcoal volume, flot weight, level of preservation, sample number, context number, feature number, feature type and period. Beyond taxonomic identifications, a range of other types of data are increasingly gathered on individual plant macrofossils (morphometric measurements, isotopic values, aDNA).\nArchaeobotanical training places a strong emphasis on recording data on a sample-by-sample basis (Jacomet & Kreuz 1999: 138139; Jones & Charles 2009; Pearsall 2016: 97107). Time-consuming methodologies utilised in the pursuit of accurate sample-level data recording include sub-sampling and splitting samples into size fractions and counting a statistically useful number of items per sample (Van der Veen & Fieller 1982). The creation of sample-level data means analysis is often undertaken on the basis of individual samples, for instance the assessment of crop-processing stages and weed ecological evidence for crop husbandry practices. The analysis of sample level data also enables archaeobotanical finds to be integrated alongside contextual evidence from archaeological sites. Requirements for the publication of this data are in place in some archaeological guidelines, for instance current Historic England guidelines for archaeological practice in England (Campbell, Moffett & Straker 2011: 8).\nFrom the earliest archaeobotanical reports, such as Reid’s work at Roman Silchester, the sample from which plant remains were recovered was noted (Lodwick 2017a), but often results were reported as a list of taxa, or long catalogues of detailed botanical descriptions with seed counts, such as Knörzer’s work at Neuss (Knörzer 1970). Early systematic archaeobotanical reports displayed data within in-text tables, for example Jones’s work at Ashville (Jones 1978) and the two-way data table has been the standard form of reporting archaeobotanical data ever since. Often data tables are presented within book chapters or appendices, but the financial, space and time constraints of book publishing are limiting. Furthermore, there is the perception that specialist data was not necessary for publication (Barker 2001). Hence, alternative methods of the dissemination of specialist archaeological data were pursued in the later twentieth century.\nFrom the 1980s, archaeobotanical data tables were often consigned to microfiche following a Council for British Archaeology and Department of Environment report (Moore & Richards 2015: 31), with the example of the excavation of Roman Colchester where the contents of all archaeobotanical samples were available on microfiche (Murphy 1992). An alternative in the 2000s was providing data tables on CD Rom as seen, for instance, in the CD accompanying the study of a Roman farmstead in the Upper Thames Valley (Robinson 2007) or the One Poultry excavations in London (Hill and Rowsome 2011). Meanwhile, the inception of the Archaeology Data Service, a digital repository for heritage data, in 1996 meant archaeological datasets were increasingly digitally archived, for instance the data from the Channel Tunnel Rail Link Project (Foreman 2018) or a recent large-scale research excavation at Silchester (University of Reading 2018). In these cases, archaeobotanical data is available to download as a .csv file.\nWhilst the data publication strategy of large excavations was shifting, the availability of data from post-excavation assessment reports has remained challenging. So-called ‘grey literature’ results from the initial evaluation stage of developer-funded investigations and accompanying post-excavation assessment often contain a semi-quantitative evaluation of archaeobotanical samples on a scale of abundance. Whilst paper reports were initially deposited with county Historic Environment Records, a process of digitisation focussing on the Roman period has meant many pdfs are now available through the ADS (Allen et al. 2018), whilst born-digital reports are now deposited through OASIS (Online AccesS to the Index of archaeological investigationS), as part of the reporting process (Evans 2015), althought the extent to which specialist appendices are included is variable.\nThese varying ‘publication’ strategies means archaeobotanical data is often available somewhere for recent developer-funded excavations and large-scale developer-funded excavations, even if much of this data is as a printed table or .pdf file (Evans 2015; Evans and Moore 2014). However, academic journals are typically perceived as the most high-status publication venue for archaeobotanical data, and a crucial publication venue for academics in order to comply with institutional requirements and the norms of career progression. Aside from the problem of access to pay-walled journals by those without institutional subscriptions to all journals, the publication of primary data alongside research articles faces various problems, from the outright lack of inclusion of data, to problematic curation of supplementary data and a lack of peer review of data (Costello et al. 2013; Warinner and d’Alpoim Guedes 2014: 155; Whitlock, 2011). The extent of these problems for archaeobotany is currently unknown. Given the growth in archaeobotanical data production as methodologies are introduced into many new regions and periods over the last decade, it is vital that we know whether the mass of new data being produced is made available and is being reused.\nRecent important advances within archaeobotanical data sharing have focussed on the construction of the ARBODAT database, developed by Angela Kreuz at the Kommission für Archäologische Landesforschung in Hessen. The database is used by a range of researchers in Germany, the Czech Republic, France and England (Kreuz & Schäfer 2002). Data sharing enabled by the use of this database has facilitated research on Neolithic agriculture in Austria, Bulgaria and Germany (Kreuz et al. 2005), and Bronze Age agriculture in Europe (Stika and Heiss 2012). The use of this database makes data integration between specialists easier due to the shared data structure and metadata description, but often the primary archaeobotanical data is not made publicly available.\n2.2. Meta-analysis in archaeobotany\nBeyond the need to preserve information, a key reason for the formal sharing of archaeobotanical data is in its reuse to facilitate subsequent research. There has been a long-standing concern within archaeobotany with the need to aggregate datasets and identify temporal and spatial patterns. The palaeobotanist Clement Reid maintained his own database of Quaternary plant records in the late nineteenth century (Reid 1899), which formed the foundation of Godwin’s Quaternary database (Godwin 1975). Mid-twentieth century studies of prehistoric plant use compiled lists of archaeobotanical materials incorporating full references and the location of the archive (Jessen & Helbaek 1944). The International Work Group for Palaeoethnobotany was itself founded in 1968 in part with the aim to compile archaeobotanical data, first realised through the publication of Progress in Old World Palaeoethnobotany (Van Zeist, Wasylikowa & Behre 1991), and subsequently through the publication of annual lists of new records of cultivated plants (Kroll 1997).\nTo take England as an example, regional reviews produced by state heritage authorities have provided catalogues of archaeobotanical datasets in particular time periods and regions (e.g. Murphy 1998). When one archaeobotanist has undertaken the majority of study within a region, pieces of synthesis within books have provided a relatively comprehensive review, for instance in the Thames Valley, UK (Lambrick & Robinson 2009). Over the last decade regional synthesis has occurred within several funded reviews which produced catalogues of sites with archaeobotanical data (Lodwick 2014; McKerracher 2018; Parks 2012) and a series of funded projects in France have enabled regional synthesis (Lepetz & Zech-Matterne 2017). However, many of these reviews are not accompanied by an available underlying database, and draw upon reports which are themselves hard to access.\nThrough the 1990s and 2000s, a series of databases were constructed in order to collate data from sites in a particular region and facilitate synthetic research. However, these databases have all placed the role of data archiving onto later projects specifically funded to collate data, rather than sourcing datasets at the time of publication. Such a model is unsustainable, and is unlikely to result in all available datasets being compiled. The Archaeobotanical Computer Database (ABCD), published in 1996 in the first issue of Internet Archaeology, contained much of the archaeobotanical data from Britain available at the time of publication, largely at the level of individual samples. The database was compiled between 1989 and 1994 and is still accessible through the accompanying online journal publication (Tomlinson & Hall 1996). The ABCD made major contributions to recent reviews of the Roman and Medieval periods (Van der Veen, Livarda & Hill 2008; Van der Veen, Hill & Livarda 2013). However, the database could only be centrally updated, with the online resource remaining a static version, lacking much of the new data produced subsequent to the implementation of PPG16 in 1990. The ADEMNES database, created through a research project undertaken at the Universities of Freiburg and Tübingen, contains data from 533 eastern Mediterranean and Near Eastern sites (Riehl & Kümmel 2005). Kroll has maintained the Archaeobotanical Literature Database to accompany the Vegetation History and Archaeobotany articles (Kroll 2005) now accessible as a database (Kirleis & Schmültz 2018). Numerous other databases have collated archaeobotanical studies, including the COMPAG project (Fuller et al. 2015), the Cultural Evolution of Neolithic Europe project (Colledge 2016), RADAR in the Netherlands (van Haaster and Brinkkemper 1995), BRAIN Botanical Records of Archaeobotany Italian Network (Mercuri et al. 2015) and CZADArchaeobotanical database of Czech Republic (CZAD 2019).\nThe majority of databases have a restricted regional coverage, whilst research-project driven period-specific databases provide overlapping content. Whilst there are a wide range of archaeobotanical databases available, few contain primary datasets (other than the ABCD) which can be downloaded as .csv files. Data which is most commonly available are bibliographic references per site, with some indications of mode of preservation, quantity of archaeobotanical data, and sometimes taxa present. The databases do not inter-relate to each other, and function primarily as bibliographic sources enabling researchers to find comparative sites or to identify published datasets which need to be re-tabulated prior to meta-analysis. The IWGP website curates a list of resources, but otherwise the resources are often disseminated through the archaeobotany jiscmail list.\nBeyond the aim of cataloguing archaeobotanical data within a region and period, meta-analysis is often used in archaeobotany to identify spatial and chronological trends in a range of past human activities, for instance crop choice, crop husbandry practices, plant food consumption, the trade in luxury foods or the use of plants in ritual. Meta-analysis can be undertaken on the basis of simple presence/absence data per site, but in order for such analysis to be rigorous and comparable, sample-level data must be utilised. For instance, sample-level data is required for meta-studies, in order to identify high-quality samples of unmixed crops for weed ecology analysis (Bogaard 2004), to assess the importance of context in the evaluation of wild plant foods (Wallace et al. 2019), or to use volumetric measurements as a proxy for scale (Lodwick 2017b). The reuse of archaeobotanical data also extends to include datasets used as “controls” in commonly used forms of statistical analysis, for instance Jones’s weed data from Amorgos, Greece, which is utilised as a control group in discriminant analysis of crop-processing stage (Jones 1984), and ethnographic observations of crop items in different crop-processing stages (Jones 1990).\n2.3. Open data principles and solutions\nDebates over issues of data publication and meta-analysis have been on-going across scientific disciplines over the last decade (Editors 2009), and have been summarised within principles of open science, as recently set out in relation to archaeology (Marwick et al. 2017). Open Data is one of the three core principles for promoting transparency in social science (Miguel et al. 2014). The FAIR principles, developed by representatives from academia, industry, funding agencies, industry and publishers, provide four principles which data sharing should meet for use by both humans and machines – Findability, Accessibility, Interoperability, and Reusability (Wilkinson et al. 2016). A recent report assessing the adoption and impact of FAIR principles across academia in the UK included archaeology as a case study (Allen and Hartland 2018: 46). It reported how the ADS was often used to archive data, but that “The journal itself provides the “story” about the data, the layer that describes what the data is, how it was collected and what the author thinks it means.” The report also raises the problem that smaller projects may not have the funding to utilise the ADS, meaning that other repositories are utilised. Increasingly, archaeological data is made available through a wide range of data repositories (OSF, Mendeley Data, Zenodo, Open Context), university data repositories (e.g. ORA-Data), or social networking sites for academics (Academia.edu, ResearchGate). More widely in archaeology, some have observed that archaeological data is rarely published (Kintigh et al. 2014), and recent reviews have reported low levels of data sharing (Huggett 2018; Marwick & Pilaar Birch 2018). A closely related issue is that of data reuse. Responsible reuse of primary data encourages the sharing of primary data (Atici et al. 2013), but levels of data reuse in archaeology are thought to remain low (Huggett 2018). Principles for responsible data citation in archaeology have recently been developed summarising how datasets should be cited (Marwick & Pilaar Birch 2018).\nIn order to assess the current status of data sharing, citation and data re-use in archaeobotany, a review was undertaken of the publication of primary data and the publication of meta-analysis in major archaeological journals over the last ten years, building on recent pilot studies within archaeology (Marwick & Pilaar Birch 2018). The review of academic journals provided a contrast to recent assessments of archaeobotanical data deriving from developer-funded archaeology (Lodwick 2017c; Van der Veen, Hill & Livarda 2013). Journal articles have been selected as the focus of this study as the provision of online supplementary materials in the majority of journals and the ability to insert hyperlinks to persistent identifiers (eg a DOI) to link to datasets available elsewhere should not limit the publication of data and references. Much archaeobotanical data is also published elsewhere, especially from projects not based in the university sector, that is commercial or community archaeology in the UK. Archaeobotanical datasets emanating from this research are more commonly published through monographs, county journal articles, and unpublished (or grey literature) reports, but these are beyond the scope of the current review.\nAll journal articles were included which represent the principle reporting of a new archaeobotanical assemblage. The selected journals fall within three groups. First, what is considered the specialist archaeobotanical journal (Vegetation History and Archaeobotany (VHA)). Second, archaeological science journals (Archaeological and Anthropological Sciences, Environmental Archaeology, The Holocene, Journal of Archaeological Science (JAS), Journal of Archaeological Science: Reports (JASR), Journal of Ethnobiology, Quaternary International, Journal of Wetland Archaeology), which can be considered as specialist sub-disciplinary journals which should be maintaining data-quality. Third, general archaeology journals (Antiquity, Journal of Field Archaeology, Oxford Journal of Archaeology, Journal of Anthropological Archaeology, Journal of World Prehistory). Finally, the broader cross-disciplinary journals PLoS One and Proceedings of the National Academy of Sciences (PNAS) were included. Published articles from the past ten years (20092018) have been analysed in order to assess the availability of plant macrofossil data. This ten-year period brackets the period where most archaeological journals have moved online and adopted supplementary materials.\nData citation in synthetic studies has been assessed in the same range of publications. The extent of data reuse ranges from the analysis of whole sample data to the presence/absence of individual crops. The location of a data citation has been assessed in the same range of publications, with the addition of journals where occasional research incorporating archaeobotanical data is featured (Britannia, Journal of Archaeological Research, Ethnobiology Letters, Medieval Archaeology, Proceedings of the Prehistoric Society, World Archaeology). The underlying dataset for the analysis is available in Lodwick 2019.\n4.1. Primary data sharing\nHere, the location of primary archaeobotanical data, that is sample level counts of macroscopic plant remains, was assessed for 239 journal articles across 16 journals (Lodwick 2019 Table 1). Figure 1 shows the results grouped by journal. Overall, only 56% of articles shared their primary data. In, Antiquity, JAS, JASR, PLOS One, Quaternary International and VHA, the highest proportion of publications did not include their primary data, that is to say that the sample-by-sample counts of plant macrofossils was not available. This level of data is comparable to the findings of other pilot studies in archaeology. Marwick and Pilaar Birch found a data sharing rate of 53% from 48 articles published in Journal of Archaeological Science in FebMay 2017 (Marwick & Pilaar Birch 2018: 7), and confirm previous assertions that data is often withheld in archaeology (Kansa 2012: 499). This is better than some disciplines, with a 9% data sharing rate on publication found across high impact journal science publications (n = 500) (Alsheikh-Ali et al. 2011) and 13% in biology, chemistry, mathematics and physics (n = 4370) (Womack 2015), yet still indicates that nearly half of articles did not include primary data. Primary archaeobotanical data is more likely to be shared in archaeobotanical and archaeological science journals than general archaeology journals. However, within the primary archaeobotanical journal, VHA, 51% of articles do not include their primary data (Figure 1).\nChart showing the location of primary archaeobotanical data by journal in primary archaeobotanical data publications.\nWhere primary data was not shared, the data which was available ranged from summary statistics, typically counts or frequencies, reported either by site, site phase, or feature group. Figure 2 summarises these results by year, showing that there is a gradient within articles not sharing their full ‘raw’ data, from those only provided sample counts on one aspect of the archaeobotanical assemblage, to those only presenting data graphically or within discussion. Beyond full data, the most common form of data shared is either summary counts per site or summary counts per feature or phase. Whilst this data does enable some level of reuse, the results of any sample-level data analysis presented within an article cannot be verified, and the data cannot be reused for crop-processing or weed ecology analysis which requires sample level data. Furthermore, such data would have been collected on a sample-by-sample basis, but this information is lost from the resulting publication.\nChart showing the form of archaeobotanical data shared by year in primary archaeobotanical data publications.\nThe forms in which data are made available vary across journals. The sharing of primary data within an article remains the most common data sharing form in archaeobotany (Figure 1). Data tables in text require manual handling to extract data, in journals such as VHA, whilst in other journals in-text tables can be downloaded as .csv files. These however would not be citable as a separate dataset. Supplementary datasets are the third most common form of data sharing. Indeed, the use of electronic supplementary material has been advocated recently for by some journals, such as the Journal of Archaeological Science (Torrence, Martinón-Torres & Rehren 2015). Microsoft Excel spreadsheets are the most common form of supplementary data, followed by .pdfs and then word documents (Figure 1). Both .xlsx and .docx are proprietary file formats, and not recommended for long term archiving or open science principles. There is no indication of improvement over the last decade in the form of data sharing. In 2018, 50% of articles did not share their primary data, and where the data was shared, it was in proprietary forms (.docx, .xlsx) or those that do not easily facilitate data reuse (.pdf) (Figure 3).\nChart showing the location of archaeobotanical data from 20092018 in primary archaeobotanical data publications.\nJust one of the articles included in this review incorporated a dataset archived in a repository (Farahani 2018), in contrast to the substantial growth in data repositories across academic disciplines (Marcial & Hemminger 2010). Other examples provide the underlying data for monograph publications, such as that of the archaeobotanical data from Gordion, Turkey (Marston 2017a, 2017b), Silchester, UK (Lodwick 2018; University of Reading 2018) and Vaihingen, Germany (Bogaard 2011a; Bogaard, 2011b).\nSeveral of the journals that have been assessed have research data policies. In the case of Vegetation History and Archaeobotany, sufficient papers have been surveyed to assess the impact of the research data policy on the availability of data. Figure 4 show the proportion of data sharing formats through time just for VHA (note the small sample size). The introduction of a research data policy in 2016 encouraging data sharing in repositories has not resulted in any datasets being shared in that format. Of the 10 articles published in PLOS One after the introduction of a clear research data policy in 2014, 4 did not contain primary data. However, elsewhere, journals with no research data policy, such as Antiquity, has one of the lower levels of data sharing (Figure 1).\nChart showing the location of primary archaeobotanical data in Vegetation History and Archaeobotany.\nThere are various reasons for why a primary dataset may be lacking. The option of providing supplementary datasets has been available in many of the journals here since before the start of the surveyed period (e.g. Vegetation History and Archaeobotany in 2004), and so cannot be a reason for the absence of data publication in this journal while it may be a reason in other journals. Reasons suggested for a lack of data sharing within archaeology include technological limitations, and resistance amongst some archaeologists to making their data available due to cautions of exposing data to scrutiny, lost opportunities of analysis before others use it and loss of ‘capital’ of data (Moore & Richards 2015: 3435). Furthermore, control over how data tables is presented (taxa ordering, summary data presented) may also contribute to the preferential publishing of data within journal articles. Another factor to consider is the emphasis on the creation of new data through archaeological research (Huvila 2016). The creation of a new archaeobotanical dataset through primary analysis is a key form of training in archaeobotany, and the perception of the value of the reuse of other previously published archaeobotanical journals may be low, hence not encouraging the sharing of well-documented datasets. Excellent exams of data reuse have resulted in influential studies (Bogaard 2004; Riehl 2008; Wallace et al. 2019), and would hopefully encourage further data sharing in the future.\nGiven that there are numerous examples of meta-analysis which do take place in archaeobotany, it seems likely that the prevalent form of data sharing is through informal data sharing between individual specialists. However, this does not improve access to data in the long term, and is inefficient and time consuming, with large potential for data errors (Kansa & Kansa 2013), and relies on personal networks, which are likely to exclude some researchers. The absence of primary data in many archaeobotanical publications thus inhibits the verification of patterns observed within a dataset, and strongly limits the re-use potential of a dataset.\n4.2. Data citation\nOne of the common arguments for increasing data sharing is an associated increase in the citation of the articles which have data available. Here, the data citation practices of meta-analyses of plant macrofossil data undertaken over the last decade have been reviewed. 20 journals were consulted, including a wider range of period-specific journals, and 107 articles were assessed (Lodwick 2019 Table 2). Data citation was assessed as ‘in text’ or ‘in table’ to refer to when the citation and the bibliographic reference were within the article, as ‘in supplementary data’ when the citation and reference were within the supplementary materials, and as ‘no citation’ when no citation and reference was provided.\n21% of articles (n = 22) did not contain any citations to the underlying studies. 16% (n = 17) contained citations within supplementary data files. 50% of articles (n = 53) contained a citation within a table within the main article, and 14% (n = 15) contained citations within the main text. For the 21% of articles without data citations, the results of these studies could not be reproduced without consulting individual authors. The papers supplying the underlying data also received no credit for producing these datasets. Where articles contain citations within the main article (in text or table), full credit is provided to the underlying studies, a citation link is created through systems such as google scholar, and the study can be easily built upon in the future. Where the citation is provided within supplementary data, the original studies do receive attribution, but are not linked to so easily.\nThrough time, there is a steady decrease in the proportion of studies without citations to the underlying data, whereby of the 17 meta-analysis articles published in 2018, only one had no data citations. In comparison, in 2009, 3 out of 8 meta-analysis articles contained no data citation (Figure 6). Overall this is a more positive outlook on the reuse of published data, but the consistent presence of articles lacking data citation indicates that improvements are needed. Reasons for a lack of data citation may include restrictions on word counts imposed by journals, a lack of technical knowledge in making large databases available, or the wish to hold on to a dataset to optimise usage. Considering the type of journal (Figure 5), levels of data citation are worse in general archaeology journals, with sub-disciplinary journals showing slightly better levels of data citation. In particular VHA has a lack of consistency in where data citations are located.\nChart showing the location of data citations in meta-analysis journal articles by journal type.\nChart showing the location of data citations in meta-analysis journal articles from 20092018.\n4.3. Reuse of archived archaeobotanical datasets\nThe majority of data citations assessed in the previous section are to articles or book chapters rather than data-sets. The ADS currently hosts 66 data archives which have been tagged as containing plant macro data, deriving mainly from developer-funded excavations but also some research excavations. However, in some of these the plant macro data is contained within a pdf. As, the archiving of archaeobotanical datasets in data repositories is still at an early stage, the reuse of these datasets is assessed here on a case-by-case basis. The archaeobotanical dataset from the Neolithic site of Vaihingen, Germany (Bogaard 2011b) has not been cited on google scholar. Metrics are provided through the ADS, showing this dataset has been downloaded 56 times with 477 individual visits (as of 25/2/19). The archaeobotanical dataset from Gordion by Marston has no citations on Google Scholar (Marston 2017b), neither does the Giza botanical database (Malleson & Miracle 2018), but these are both very recently archived datasets. In contrast, the Roman Rural Settlement Project dataset, which includes site-level archaeobotanical data, has received greater levels of use, with 12 citations in Google Scholar, over 40,000 file downloads, and over 35,000 visits (Allen et al. 2018) and the archaeobotanical computer database (Tomlinson & Hall 1996) has been cited 44 times, and is the major dataset underpinning other highly-cited studies (Van der Veen, Livarda & Hill 2008; Van der Veen, Hill & Livarda 2013). Whilst there is clearly precedence for the reuse of archaeobotanical databases, current data citation practices within archaeobotany do not yet appear to be formally citing individual datasets, meaning an assessment of the reuse of archived archaeobotanical datasets is challenging.\n5. Steps Forward\nThis review of data sharing, citation, and reuse practices in archaeobotany has found medium levels of data sharing, good levels of data citation, but so far limited levels of reuse of archived data sets. This picture is similar across archaeology, in part attributed to the status of archaeology as a small-science, where data-sharing takes place ad-hoc (Marwick & Pilaar Birch 2018). Here, recommendations are discussed for improving these data practices within archaeobotany, of applicability more widely in archaeology.\nClearly an important step is improving the sharing of plant macrofossil data. Given the reasonable small size of most archaeobotanical datasets (a .csv file < 1mb), and a lack of ethical conflicts, there seems to be few reasons why the majority of archaeobotanical data couldn’t be shared. In the case of developer-funded derived data, issues of commercial confidentiality could limit the sharing of data. A key stage is establishing why levels of data sharing are not higher. Issues within archaeobotany may include the conflict between having to publish results within excavation monographs, which may take some time to be published, and have limited visibility due to high purchase costs and no digital access, and the need to publish journal articles for career progression within academia. The production of an archaeobotanical dataset is very time-consuming, and interim publication on notable aspects of an assemblage may be considered as a necessary publication strategy. More broadly, one important aspect is issues of equity in access to digital archiving resources (Wright & Richards 2018), such as differential access to funds, training and knowledge. A recent study in Sweden found that we need to know concerns, needs, and wishes of archaeologists in order to improve preservation of archaeological data (Huvila 2016), especially when control of ones data may be linked to perceptions of job security. In order to make improvements in data sharing and reuse across archaeology, we need improved training in data sharing and the reuse of data in higher education (Touchon & McCoy 2016; Cook et al. 2018), improved training in data management (Faniel et al. 2018), and crucially, the necessary software skills to make the reuse of archived datasets attainable (Kansa & Kansa 2014: 91). Examples of good practice in archaeobotany are the Vaihingen and Gordion datasets which demonstrate how datasets can be archived in data repositories to accompany a monograph (Bogaard 2011b; Marston 2017b), whilst Farahani (2018) provides an excellent example of a journal article, where the primary data is supplied as a .csv in a cited data repository along with the R script for the analysis.\nIn tandem with the need to encourage authors to share their data, is the need for journals to create and implement research data policies. Given the existence of research data policies in many of the journals included here, this reflects other findings of the poor enforcement of data policies by journals (Marwick & Pilaar Birch 2018), supporting arguments that journals should not be relied upon to make data accessible, and data should instead by deposited in digital repositries. In order to implement change in data sharing, there is a role to play for learned societies and academic organisation in lobbying funding bodies, prioritising data sharing in research projects. A key step is through journal editorial boards, and the enforcement of any pre-existing research data policies (Nosek et al. 2015). Revi", "answers": ["Technological limitations, resistance to exposing data to scrutiny, and desire to hold onto data for personal use."], "length": 6097, "dataset": "multifieldqa_en", "language": "en", "all_classes": null, "_id": "aeb6cb26b11fc386727a529761d9d233ec7ba8dea9800b0f"}
{"input": "What was the club known as before being officially renamed FC Urartu?", "context": "Football Club Urartu (, translated Futbolayin Akumb Urartu), commonly known as Urartu, is an Armenian professional football team based in the capital Yerevan that currently plays in the Armenian Premier League. The club won the Armenian Cup three times, in 1992, 2007 and 2016. In 20132014, they won the Armenian Premier League for the first time in their history.\n\nIn early 2016, the Russia-based Armenian businessman Dzhevan Cheloyants became a co-owner of the club after purchasing the major part of the club shares. The club was known as FC Banants until 1 August 2019, when it was officially renamed FC Urartu.\n\nHistory\n\nKotayk\nUrartu FC were founded as FC Banants by Sarkis Israelyan on 21 January 1992 in the village of Kotayk, representing the Kotayk Province. He named the club after his native village of Banants (currently known as Bayan). Between 1992 and 1995, the club was commonly referred to as Banants Kotayk. During the 1992 season, the club won the first Armenian Cup. At the end of the 1995 transitional season, Banants suffered a financial crisis. The club owners decided that it was better to merge the club with FC Kotayk of Abovyan, rather than disband it. In 2001, Banants demerged from FC Kotayk, and was moved from Abovyan to the capital Yerevan.\n\nYerevan\n\nFC Banants was relocated to Yerevan in 2001. At the beginning of 2003, Banants merged with FC Spartak Yerevan, but was able to limit the name of the new merger to FC Banants. Spartak became Banants's youth academy and later changed the name to Banants-2. Because of the merger, Banants acquired many players from Spartak Yerevan, including Samvel Melkonyan. After the merger, Banants took a more serious approach and have finished highly in the league table ever since. The club managed to lift the Armenian Cup in 2007.\nExperience is making way for youth for the 2008 and 2009 seasons. The departures of most of the experienced players have left the club's future to the youth. Along with two Ukrainian players, Ugandan international, Noah Kasule, has been signed.\n\nThe club headquarters are located on Jivani Street 2 of the Malatia-Sebastia District, Yerevan.\n\nDomestic\n\nEuropean\n\nStadium\n\nThe construction of the Banants Stadium was launched in 2006 in the Malatia-Sebastia District of Yerevan, with the assistance of the FIFA goal programme. It was officially opened in 2008 with a capacity of 3,600 seats. Further developments were implemented later in 2011, when the playing pitch was modernized and the capacity of the stadium was increased up to 4,860 seats (2,760 at the northern stand, 1,500 at the southern stand and 600 at the western stand).\n\nTraining centre/academy\nBanants Training Centre is the club's academy base located in the Malatia-Sebastia District of Yerevan. In addition to the main stadium, the centre houses 3 full-size training pitches, mini football pitches as well as an indoor facility. The current technical director of the academy is the former Russian footballer Ilshat Faizulin.\n\nFans\nThe most active group of fans is the South West Ultras fan club, mainly composed of residents from several neighbourhoods within the Malatia-Sebastia District of Yerevan, since the club is a de facto representer of the district. Members of the fan club benefit from events organized by the club and many facilities of the Banants training centre, such as the mini football pitch, the club store and other entertainments.\n\nAchievements\n Armenian Premier League\n Winner (1): 2013–14.\n Runner-up (5): 2003, 2006, 2007, 2010, 2018.\n\n Armenian Cup\n Winner (3): 1992, 2007, 2016.\n Runner-up (6): 2003, 2004, 2008, 2009, 2010, 2021–22\n\n Armenian Supercup\n Winner (1): 2014.\n Runner-up (5): 2004, 2007, 2009, 2010, 2016.\n\nCurrent squad\n\nOut on loan\n\nPersonnel\n\nTechnical staff\n\nManagement\n\nUrartu-2\n\nFC Banants' reserve squad play as FC Banants-2 in the Armenian First League. They play their home games at the training field with artificial turf of the Urartu Training Centre.\n\nManagerial history\n Varuzhan Sukiasyan (199294)\n Poghos Galstyan (July 1, 1996 – June 30, 1998)\n Oganes Zanazanyan (200105)\n Ashot Barseghyan (200506)\n Nikolay Kiselyov (200607)\n Jan Poštulka (2007)\n Nikolay Kostov (July 1, 2007 – April 8, 2008)\n Nedelcho Matushev (April 8, 2008 – June 30, 2008)\n Kim Splidsboel (2008)\n Armen Gyulbudaghyants (Jan 1, 2009 – Dec 1, 2009)\n Ashot Barseghyan (interim) (2009)\n Stevica Kuzmanovski (Jan 1, 2010 – Dec 31, 2010)\n Rafael Nazaryan (Jan 1, 2011 – Jan 15, 2012)\n Volodymyr Pyatenko (Jan 17, 2013 – June 30, 2013)\n Zsolt Hornyák (July 1, 2013 – May 30, 2015)\n Aram Voskanyan (July 1, 2015 – Oct 11, 2015)\n Tito Ramallo (Oct 12, 2015 – Oct 3, 2016)\n Artur Voskanyan (Oct 3, 2016 – Aug 11, 2018)\n Ilshat Faizulin (Aug 12, 2018 –Nov 24, 2019)\n Aleksandr Grigoryan (Nov 25, 2019 –Mar 10, 2021)\n Robert Arzumanyan (10 March 202124 June 2022)\n Dmitri Gunko (27 June 2022–)\n\nReferences\n\nExternal links\n Official website \n Banants at Weltfussball.de \n\n \nUrartu\nUrartu\nUrartu\nUrartu", "answers": ["FC Banants."], "length": 818, "dataset": "multifieldqa_en", "language": "en", "all_classes": null, "_id": "c4f2dfb06f56a185067d2f147c4d846aa0b895f1968eda12"}
{"input": "What is the proposed approach in this research paper?", "context": "\\section{Introduction}\n\\label{sec:introduction}\n\nProbabilistic models have proven to be very useful in a lot of applications in signal processing where signal estimation is needed \\cite{rabiner1989tutorial,arulampalam2002tutorial,ji2008bayesian}. Some of their advantages are that 1) they force the designer to specify all the assumptions of the model, 2) they provide a clear separation between the model and the algorithm used to solve it, and 3) they usually provide some measure of uncertainty about the estimation.\n\nOn the other hand, adaptive filtering is a standard approach in estimation problems when the input is received as a stream of data that is potentially non-stationary. This approach is widely understood and applied to several problems such as echo cancellation \\cite{gilloire1992adaptive}, noise cancellation \\cite{nelson1991active}, and channel equalization \\cite{falconer2002frequency}.\n\nAlthough these two approaches share some underlying relations, there are very few connections in the literature. The first important attempt in the signal processing community to relate these two fields was the connection between a linear Gaussian state-space model (i.e. Kalman filter) and the RLS filter, by Sayed and Kailath \\cite{sayed1994state} and then by Haykin \\emph{et al.} \\cite{haykin1997adaptive}. The RLS adaptive filtering algorithm emerges naturally when one defines a particular state-space model (SSM) and then performs exact inference in that model. This approach was later exploited in \\cite{van2012kernel} to design a kernel RLS algorithm based on Gaussian processes.\n\nA first attempt to approximate the LMS filter from a probabilistic perspective was presented in \\cite{park2014probabilistic}, focusing on a kernel-based implementation. The algorithm of \\cite{park2014probabilistic} makes use of a Maximum a Posteriori (MAP) estimate as an approximation for the predictive step. However, this approximation does not preserve the estimate of the uncertainty in each step, therefore degrading the performance of the algorithm.\n\nIn this work, we provide a similar connection between state-space models and least-mean-squares (LMS). Our approach is based on approximating the posterior distribution with an isotropic Gaussian distribution. We show how the computation of this approximated posterior leads to a linear-complexity algorithm, comparable to the standard LMS. Similar approaches have already been developed for a variety of problems such as channel equalization using recurrent RBF neural networks \\cite{cid1994recurrent}, or Bayesian forecasting \\cite{harrison1999bayesian}. Here, we show the usefulness of this probabilistic approach for adaptive filtering.\n\nThe probabilistic perspective we adopt throughout this work presents two main advantages. Firstly, a novel LMS algorithm with adaptable step size emerges naturally with this approach, making it suitable for both stationary and non-stationary environments. The proposed algorithm has less free parameters than previous LMS algorithms with variable step size \\cite{kwong1992variable,aboulnasr1997robust,shin2004variable}, and its parameters are easier to be tuned w.r.t. these algorithms and standard LMS. Secondly, the use of a probabilistic model provides us with an estimate of the error variance, which is useful in many applications.\n\nExperiments with simulated and real data show the advantages of the presented approach with respect to previous works. However, we remark that the main contribution of this paper is that it opens the door to introduce more Bayesian machine learning techniques, such as variational inference and Monte Carlo sampling methods \\cite{barber2012bayesian}, to adaptive filtering.\\\\\n\n\n\\section{Probabilistic Model}\n\nThroughout this work, we assume the observation model to be linear-Gaussian with the following distribution,\n\n\\begin{equation}\np(y_k|{\\bf w}_k) = \\mathcal{N}(y_k;{\\bf x}_k^T {\\bf w}_k , \\sigma_n^2),\n\\label{eq:mess_eq}\n\\end{equation}\nwhere $\\sigma_n^2$ is the variance of the observation noise, ${\\bf x}_k$ is the regression vector and ${\\bf w}_k$ is the parameter vector to be sequentially estimated, both $M$-dimensional column vectors.\n\n\nIn a non-stationary scenario, ${\\bf w}_k$ follows a dynamic process. In particular, we consider a diffusion process (random-walk model) with variance $\\sigma_d^2$ for this parameter vector:\n\n\n\\begin{equation}\np({\\bf w}_k|{\\bf w}_{k-1})= \\mathcal{N}({\\bf w}_k;{\\bf w}_{k-1}, \\sigma_d^2 {\\bf I}),\n\\label{eq:trans_eq}\n\\end{equation}\nwhere $\\bf I$ denotes the identity matrix. In order to initiate the recursion, we assume the following prior distribution on ${\\bf w}_k$\n\n\\begin{equation}\np({\\bf w}_0)= \\mathcal{N}({\\bf w}_0;0, \\sigma_d^2{\\bf I}).\\nonumber\n\\end{equation}\n\n\\section{Exact inference in this model: Revisiting the RLS filter}\n\nGiven the described probabilistic SSM, we would like to infer the posterior probability distribution $p({\\bf w}_k|y_{1:k})$.\nSince all involved distributions are Gaussian, one can perform exact inference, leveraging the probability rules in a straightforward manner. The resulting probability distribution is\n\\begin{equation}\np({\\bf w}_k|y_{1:k}) = \\mathcal{N}({\\bf w}_k;{\\bf\\boldsymbol\\mu}_{k}, \\boldsymbol\\Sigma_{k}), \\nonumber\n\\end{equation}\nin which the mean vector ${\\bf\\boldsymbol\\mu}_{k}$ is given by\n\\begin{equation}\n{\\bf\\boldsymbol\\mu}_k = {\\bf\\boldsymbol\\mu}_{k-1} + {\\bf K}_k (y_k - {\\bf x}_k^T {\\bf\\boldsymbol\\mu}_{k-1}){\\bf x}_k, \\nonumber\n\\end{equation}\nwhere we have introduced the auxiliary variable\n\\begin{equation}\n{\\bf K}_k = \\frac{ \\left(\\boldsymbol\\Sigma_{k-1} + \\sigma_d^2 {\\bf I}\\right)}{{\\bf x}_k^T \\left(\\boldsymbol\\Sigma_{k-1} + \\sigma_d^2 {\\bf I}\\right) {\\bf x}_k + \\sigma_n^2}, \\nonumber\n\\end{equation}\nand the covariance matrix $\\boldsymbol\\Sigma_k$ is obtained as\n\\begin{equation}\n\\boldsymbol\\Sigma_k = \\left( {\\bf I} - {\\bf K}_k{\\bf x}_k {\\bf x}_k^T \\right) ( \\boldsymbol\\Sigma_{k-1} +\\sigma_d^2), \\nonumber\n\\end{equation}\nNote that the mode of $p({\\bf w}_k|y_{1:k})$, i.e. the maximum-a-posteriori estimate (MAP), coincides with the RLS adaptive rule\n\\begin{equation}\n{{\\bf w}}_k^{(RLS)} = {{\\bf w}}_{k-1}^{(RLS)} + {\\bf K}_k (y_k - {\\bf x}_k^T {{\\bf w}}_{k-1}^{(RLS)}){\\bf x}_k .\n\\label{eq:prob_rls}\n\\end{equation}\nThis rule is similar to the one introduced in \\cite{haykin1997adaptive}.\n\nFinally, note that the covariance matrix $\\boldsymbol\\Sigma_k$ is a measure of the uncertainty of the estimate ${\\bf w}_k$ conditioned on the observed data $y_{1:k}$. Nevertheless, for many applications a single scalar summarizing the variance of the estimate could prove to be sufficiently useful. In the next section, we show how such a scalar is obtained naturally when $p({\\bf w}_k|y_{1:k})$ is approximated with an isotropic Gaussian distribution. We also show that this approximation leads to an LMS-like estimation.\n \n\n\n\\section{Approximating the posterior distribution: LMS filter }\n\nThe proposed approach consists in approximating the posterior distribution $p({\\bf w}_k|y_{1:k})$, in general a multivariate Gaussian distribution with a full covariance matrix, by an isotropic spherical Gaussian distribution \n\n\\begin{equation}\n\\label{eq:aprox_post}\n\\hat{p}({\\bf w}_{k}|y_{1:k})=\\mathcal{N}({\\bf w}_{k};{\\bf \\hat{\\boldsymbol\\mu}}_{k}, \\hat{\\sigma}_{k}^2 {\\bf I} ).\n\\end{equation}\n\nIn order to estimate the mean and covariance of the approximate distribution $\\hat{p}({\\bf w}_{k}|y_{1:k})$, we propose to select those that minimize the Kullback-Leibler divergence with respect to the original distribution, i.e., \n\n\\begin{equation}\n\\{\\hat{\\boldsymbol\\mu}_k,\\hat{\\sigma}_k\\}=\\arg \\displaystyle{ \\min_{\\hat{\\boldsymbol\\mu}_k,\\hat{\\sigma}_k}} \\{ D_{KL}\\left(p({\\bf w}_{k}|y_{1:k}))\\| \\hat{p}({\\bf w}_{k}|y_{1:k})\\right) \\}. \\nonumber\n\\end{equation}\n\nThe derivation of the corresponding minimization problem can be found in Appendix A. In particular, the optimal mean and the covariance are found as\n\\begin{equation}\n{\\hat{\\boldsymbol\\mu}}_{k} = {\\boldsymbol\\mu}_{k};~~~~~~ \\hat{\\sigma}_{k}^2 = \\frac{{\\sf Tr}\\{ \\boldsymbol\\Sigma_k\\} }{M}.\n\\label{eq:sigma_hat}\n\\end{equation}\n\n\nWe now show that by using \\eqref{eq:aprox_post} in the recursive predictive and filtering expressions we obtain an LMS-like adaptive rule. First, let us assume that we have an approximate posterior distribution at $k-1$, $\\hat{p}({\\bf w}_{k-1}|y_{1:k-1}) = \\mathcal{N}({\\bf w}_{k-1};\\hat{\\bf\\boldsymbol\\mu}_{k-1}, \\hat{\\sigma}_{k-1}^2 {\\bf I} )$. Since all involved distributions are Gaussian, the predictive distribution\nis obtained as %\n\\begin{eqnarray}\n\\hat{p}({\\bf w}_k|y_{1:k-1}) &=& \\int p({\\bf w}_k|{\\bf w}_{k-1}) \\hat{p}({\\bf w}_{k-1}|y_{1:k-1}) d{\\bf w}_{k-1} \\nonumber\\\\\n&=& \\mathcal{N}({\\bf w}_k;{\\bf\\boldsymbol\\mu}_{k|k-1}, \\boldsymbol\\Sigma_{k|k-1}), \n\\label{eq:approx_pred}\n\\end{eqnarray}\nwhere the mean vector and covariance matrix are given by\n\\begin{eqnarray}\n\\hat{\\bf\\boldsymbol\\mu}_{k|k-1} &=& \\hat{\\bf\\boldsymbol\\mu}_{k-1} \\nonumber \\\\\n\\hat{\\boldsymbol\\Sigma}_{k|k-1} &=& (\\hat{\\sigma}_{k-1}^2 + \\sigma_d^2 ){\\bf I}\\nonumber.\n\\end{eqnarray}\n\nFrom \\eqref{eq:approx_pred}, the posterior distribution at time $k$ can be computed using Bayes' Theorem and standard Gaussian manipulations (see for instance \\cite[Ch. 4]{murphy2012machine}). Then, we approximate the posterior $p({\\bf w}_k|y_{1:k})$ with an isotropic Gaussian,\n\\begin{equation}\n\\hat{p}({\\bf w}_k|y_{1:k}) = \\mathcal{N}({\\bf w}_k ; {\\hat{\\boldsymbol\\mu}}_{k}, \\hat{\\sigma}_k^2 {\\bf I} ),\\nonumber\n\\end{equation}\nwhere \n\\begin{eqnarray}\n{\\hat{\\boldsymbol\\mu}}_{k} &= & {\\hat{\\boldsymbol\\mu}}_{k-1}+ \\frac{ (\\hat{\\sigma}_{k-1}^2+ \\sigma_d^2) }{(\\hat{\\sigma}_{k-1}^2+ \\sigma_d^2) \\|{\\bf x}_k\\|^2 + \\sigma_n^2} (y_k - {\\bf x}_k^T {\\hat{\\boldsymbol\\mu}}_{k-1}){\\bf x}_k \\nonumber \\\\\n&=& {\\hat{\\boldsymbol\\mu}}_{k-1}+ \\eta_k (y_k - {\\bf x}_k^T {\\hat{\\boldsymbol\\mu}}_{k-1}){\\bf x}_k . \n\\label{eq:prob_lms}\n\\end{eqnarray}\nNote that, instead of a gain matrix ${\\bf K}_k$ as in Eq.~\\eqref{eq:prob_rls}, we now have a scalar gain $\\eta_k$ that operates as a variable step size.\n\n\nFinally, to obtain the posterior variance, which is our measure of uncertainty, we apply \\eqref{eq:sigma_hat} and the trick ${\\sf Tr}\\{{\\bf x}_k{\\bf x}_k^T\\}= {\\bf x}_k^T{\\bf x}_k= \\|{\\bf x}_k \\|^2$,\n\n\\begin{eqnarray}\n\\hat{\\sigma}_k^2 &=& \\frac{{\\sf Tr}(\\boldsymbol\\Sigma_k)}{M} \\\\\n&=& \\frac{1}{M}{\\sf Tr}\\left\\{ \\left( {\\bf I} - \\eta_k {\\bf x}_k {\\bf x}_k^T \\right) (\\hat{\\sigma}_{k-1}^2 +\\sigma_d^2)\\right\\} \\\\\n&=& \\left(1 - \\frac{\\eta_k \\|{\\bf x}_k\\|^2}{M}\\right)(\\hat{\\sigma}_{k-1}^2 +\\sigma_d^2).\n\\label{eq:sig_k}\n\\end{eqnarray}\nIf MAP estimation is performed, we obtain an adaptable step-size LMS estimation\n\n\\begin{equation}\n{\\bf w}_{k}^{(LMS)} = {\\bf w}_{k-1}^{(LMS)} + \\eta_k (y_k - {\\bf x}_k^T {\\bf w}_{k-1}^{(LMS)}){\\bf x}_k, \t\n\\label{eq:lms}\n\\end{equation}\nwith\n\\begin{equation}\n\\eta_k = \\frac{ (\\hat{\\sigma}_{k-1}^2+ \\sigma_d^2) }{(\\hat{\\sigma}_{k-1}^2+ \\sigma_d^2) \\|{\\bf x}_k\\|^2 + \\sigma_n^2}.\\nonumber\n\\end{equation}\nAt this point, several interesting remarks can be made:\n\n\\begin{itemize}\n\n\\item The adaptive rule \\eqref{eq:lms} has linear complexity since it does not require us to compute the full matrix $\\boldsymbol\\Sigma_k$.\n\n\\item For a stationary model, we have $\\sigma_d^2=0$ in \\eqref{eq:prob_lms} and \\eqref{eq:sig_k}. In this case, the algorithm remains valid and both the step size and the error variance, $\\hat{\\sigma}_{k}$, vanish over time $k$. \n\n\\item Finally, the proposed adaptable step-size LMS has only two parameters, $\\sigma_d^2$ and $\\sigma_n^2$, (and only one, $\\sigma_n^2$, in stationary scenarios) in contrast to other variable step-size algorithms \\cite{kwong1992variable,aboulnasr1997robust,shin2004variable}. More interestingly, both $\\sigma_d^2$ and $\\sigma_n^2$ have a clear underlying physical meaning, and they can be estimated in many cases. We will comment more about this in the next section. \n\\end{itemize}\n\n\n\n\\section{Experiments}\n\\label{sec:experiments}\n\nWe evaluate the performance of the proposed algorithm in both stationary and tracking experiments. In the first experiment, we estimate a fixed vector ${\\bf w}^{o}$ of dimension $M=50$. The entries of the vector are independently and uniformly chosen in the range $[-1,1]$. Then, the vector is normalized so that $\\|{\\bf w}^o\\|=1$. Regressors $\\boldsymbol{x}_{k}$ are zero-mean Gaussian vectors with identity covariance matrix. The additive noise variance is such that the SNR is $20$ dB. We compare our algorithm with standard RLS and three other LMS-based algorithms: LMS, NLMS \\cite{sayed2008adaptive}, VSS-LMS \\cite{shin2004variable}.\\footnote{The used parameters for each algorithm are: for RLS $\\lambda=1$, $\\epsilon^{-1}=0.01$; for LMS $\\mu=0.01$; for NLMS $\\mu=0.5$; and for VSS-LMS $\\mu_{max}=1$, $\\alpha=0.95$, $C=1e-4$.} The probabilistic LMS algorithm in \\cite{park2014probabilistic} is not simulated because it is not suitable for stationary environments.\n\nIn stationary environments, the proposed algorithm has only one parameter, $\\sigma^2_n$. We simulate both the scenario where we have perfectly knowledge of the amount of noise (probLMS1) and the case where the value $\\sigma^2_n$ is $100$ times smaller than the actual value (probLMS2). The Mean-Square Deviation (${\\sf MSD} = {\\mathbb E} \\| {\\bf w}_0 - {\\bf w}_k \\|^2$), averaged out over $50$ independent simulations, is presented in Fig. \\ref{fig:msd_statationary}.\n\n\n\n\\begin{figure}[htb]\n\\centering\n\\begin{minipage}[b]{\\linewidth}\n \\centering\n \\centerline{\\includegraphics[width=\\textwidth]{results_stationary_MSD}}\n\\end{minipage}\n\\caption{Performance in terms of MSD of probabilistic LMS with both optimal (probLMS1) and suboptimal (probLMS2) compared to LMS, NLMS, VS-LMS, and RLS.}\n\\label{fig:msd_statationary}\n\\end{figure}\n\nThe performance of probabilistic LMS is close to RLS (obviously at a much lower computational cost) and largely outperforms previous variable step-size LMS algorithms proposed in the literature. Note that, when the model is stationary, i.e. $\\sigma^2_d=0$ in \\eqref{eq:trans_eq}, both the uncertainty $\\hat{\\sigma}^2_k$, and the adaptive step size $\\eta_k$, vanish over time. This implies that the error tends to zero when $k$ goes to infinity. Fig. \\ref{fig:msd_statationary} also shows that the proposed approach is not very sensitive to a bad choice of its only parameter, as demonstrated by the good results of probLMS2, which uses a $\\sigma^2_n$ that is $100$ times smaller than the optimal value. \n\n\n\\begin{figure}[htb]\n\\centering\n\\begin{minipage}[b]{\\linewidth}\n \\centering\n \\centerline{\\includegraphics[width=\\textwidth]{fig2_final}}\n\\end{minipage}\n\\caption{Real part of one coefficient of the measured and estimated channel in experiment two. The shaded area represents two standard deviations from the prediction {(the mean of the posterior distribution)}.}\n\\label{fig_2}\n\\end{figure}\n\n\n\\begin{table}[ht]\n\\begin{footnotesize}\n\\setlength{\\tabcolsep}{2pt}\n\\def1.5mm{1.5mm}\n\\begin{center}\n\\begin{tabular}{|l@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|}\n\\hline\nMethod & LMS & NLMS & LMS-2013 & VSSNLMS & probLMS & RLS \\\\\n\\hline\n\\hline\nMSD (dB) &-28.45 &-21.07 &-14.36 &-26.90 &-28.36 &-25.97\\\\\n\\hline \n\\end{tabular}\n\\end{center}\n\\caption{Steady-state MSD of the different algorithms for the tracking of a real MISO channel.}\n\\label{tab:table_MSD}\n\\end{footnotesize}\n\n\\end{table}\n\\newpage\nIn a second experiment, we test the tracking capabilities of the proposed algorithm with {real} data of a wireless MISO channel acquired in a realistic indoor scenario. More details on the setup can be found in \\cite{gutierrez2011frequency}. Fig. \\ref{fig_2} shows the real part of one of the channels, and the estimate of the proposed algorithm. The shaded area represents the estimated uncertainty for each prediction, i.e. $\\hat{\\mu}_k\\pm2\\hat{\\sigma}_k$. Since the experimental setup does not allow us to obtain the optimal values for the parameters, we fix these parameters to their values that optimize the steady-state mean square deviation (MSD). \\hbox{Table \\ref{tab:table_MSD}} shows this steady-state MSD of the estimate of the MISO channel with different methods. As can be seen, the best tracking performance is obtained by standard LMS and the proposed method. \n\n\n\n\n\n\\section{Conclusions and Opened Extensions}\n\\label{sec:conclusions}\n\n{We have presented a probabilistic interpretation of the least-mean-square filter. The resulting algorithm is an adaptable step-size LMS that performs well both in stationary and tracking scenarios. Moreover, it has fewer free parameters than previous approaches and these parameters have a clear physical meaning. Finally, as stated in the introduction, one of the advantages of having a probabilistic model is that it is easily extensible:}\n\n\\begin{itemize}\n\\item If, instead of using an isotropic Gaussian distribution in the approximation, we used a Gaussian with diagonal covariance matrix, we would obtain a similar algorithm with different step sizes and measures of uncertainty, for each component of ${\\bf w}_k$. Although this model can be more descriptive, it needs more parameters to be tuned, and the parallelism with LMS vanishes.\n\\item Similarly, if we substitute the transition model of \\eqref{eq:trans_eq} by an Ornstein-Uhlenbeck process, \n\n\\begin{equation}\np({\\bf w}_k|{\\bf w}_{k-1})= \\mathcal{N}({\\bf w}_k;\\lambda {\\bf w}_{k-1}, \\sigma_d^2), \\nonumber\n\\label{eq:trans_eq_lambda}\n\\end{equation}\na similar algorithm is obtained but with a forgetting factor $\\lambda$ multiplying ${\\bf w}_{k-1}^{(LMS)}$ in \\eqref{eq:lms}. This algorithm may have improved performance under such a kind of autoregresive dynamics of ${\\bf w}_{k}$, though, again, the connection with standard LMS becomes dimmer.\n\n\\item As in \\cite{park2014probabilistic}, the measurement model \\eqref{eq:mess_eq} can be changed to obtain similar adaptive algorithms for classification, ordinal regression, and Dirichlet regression for compositional data. \n\n\\item A similar approximation technique could be applied to more complex dynamical models, i.e. switching dynamical models \\cite{barber2010graphical}. The derivation of efficient adaptive algorithms that explicitly take into account a switch in the dynamics of the parameters of interest is a non-trivial and open problem, though the proposed approach could be useful.\n\n\\item Finally, like standard LMS, this algorithm can be kernelized for its application in estimation under non-linear scenarios.\n\n\\end{itemize}\n\n\n\\begin{appendices}\n\n\\section{KL divergence between a general gaussian distribution and an isotropic gaussian}\n\\label{sec:kl}\n\n We want to approximate $p_{{\\bf x}_1}(x) = \\mathcal{N}({\\bf x}; \\boldsymbol\\mu_1,\\boldsymbol\\Sigma_1)$ by $p_{{\\bf x}_2}({\\bf x}) = \\mathcal{N}({\\bf x}; \\boldsymbol\\mu_2,\\sigma_2^2 {\\bf I})$. In order to do so, we have to compute the parameters of $p_{{\\bf x}_2}({\\bf x})$, $\\boldsymbol\\mu_2$ and $\\sigma_2^2$, that minimize the following Kullback-Leibler divergence,\n\n\\begin{eqnarray}\nD_{KL}(p_{{\\bf x}_1}\\| p_{{\\bf x}_2}) &=&\\int_{-\\infty}^{\\infty} p_{{\\bf x}_1}({\\bf x}) \\ln{\\frac{p_{{\\bf x}_1}({\\bf x})}{p_{{\\bf x}_2}({\\bf x})}}d{\\bf x} \\nonumber \\\\\n&= & \\frac{1}{2} \\{ -M + {\\sf Tr}(\\sigma_2^{-2} {\\bf I}\\cdot \\boldsymbol\\Sigma_1^{-1}) \\nonumber \\\\\n & & + (\\boldsymbol\\mu_2 - \\boldsymbol\\mu_1 )^T \\sigma^{-2}_2{\\bf I} (\\boldsymbol\\mu_2 - \\boldsymbol\\mu_1 ) \\nonumber \\\\\n & & + \\ln \\frac{{\\sigma_2^2}^M}{\\det\\boldsymbol\\Sigma_1} \\}. \n\\label{eq:divergence}\n\\end{eqnarray}\nUsing symmetry arguments, we obtain \n\\begin{equation}\n\\boldsymbol\\mu_2^{*} =\\arg \\displaystyle{ \\min_{\\boldsymbol\\mu_2}} \\{ D_{KL}(p_{{\\bf x}_1}\\| p_{{\\bf x}_2}) \\} = \\boldsymbol\\mu_1.\n\\end{equation}\nThen, \\eqref{eq:divergence} gets simplified into \n\n\\begin{eqnarray}\nD_{KL}(p_{{\\bf x}_1}\\| p_{{\\bf x}_2}) = \\frac{1}{2}\\lbrace { -M + {\\sf Tr}(\\frac{\\boldsymbol\\Sigma_1}{\\sigma_2^{2}}) + \\ln \\frac{\\sigma_2^{2M}}{\\det\\boldsymbol\\Sigma_1}}\\rbrace.\n\\end{eqnarray}\nThe variance $\\sigma_2^2$ is computed in order to minimize this Kullback-Leibler divergence as\n\n\\begin{eqnarray}\n\\sigma_2^{2*} &=& \\arg\\min_{\\sigma_2^2} D_{KL}(P_{x_1}\\| P_{x_2}) \\nonumber \\\\\n &=& \\arg\\min_{\\sigma_2^2}\\{ \\sigma_2^{-2}{\\sf Tr}\\{\\boldsymbol\\Sigma_1\\} + M\\ln \\sigma_2^{2} \\} .\n\\end{eqnarray}\nDeriving and making it equal zero leads to\n\n\\begin{equation}\n\\frac{\\partial}{\\partial \\sigma_2^2} \\left[ \\frac{{\\sf Tr}\\{\\boldsymbol\\Sigma_1\\}}{\\sigma_2^{2}} + M \\ln \\sigma_2^{2} \\right] = \\left. {\\frac{M}{\\sigma_2^{2}}-\\frac{{\\sf Tr}\\{\\boldsymbol\\Sigma_1\\}}{(\\sigma_2^{2})^2}}\\right|_{\\sigma_2^{2}=\\sigma_2^{2*}}\\left. =0 \\right. .\n\\nonumber\n\\end{equation}\nFinally, since the divergence has a single extremum in $R_+$,\n\\begin{equation}\n\\sigma_2^{2*} = \\frac{{\\sf Tr}\\{\\boldsymbol\\Sigma_1\\}}{M}.\n\\end{equation}\n\n\n\n\n\\end{appendices}\n\n\\vfill\n\\clearpage\n\n\\bibliographystyle{IEEEbib}\n", "answers": ["This research paper proposed an approach based on approximating the posterior distribution with an isotropic Gaussian distribution."], "length": 2556, "dataset": "multifieldqa_en", "language": "en", "all_classes": null, "_id": "394cb48c037481d97cdf1dbd7adef475061b9e77235842e2"}