The cat's-whisker detector was first used in the 1900s in radio receivers. A whisker-like wire is placed lightly in contact with a solid crystal (such as a germanium crystal) to detect a radio signal by the contact junction effect. In a solid-state component, the current is confined to solid elements and compounds engineered specifically to switch and amplify it. Current flow can be understood in two forms: as negatively charged electrons, and as positively charged electron deficiencies called holes. These charges and holes are understood in terms of quantum physics. The building material is most often a crystalline semiconductor.

The solid-state device came into its own with the invention of the transistor in 1947. Common solid-state devices include transistors, microprocessor chips, and RAM. A specialized type of RAM called flash RAM is used in USB flash drives and, more recently, in solid-state drives, which replace mechanically rotating magnetic disc hard disk drives. Solid-state devices became prevalent in the 1950s and the 1960s, during the transition from vacuum tubes to semiconductor diodes, transistors, integrated circuits (ICs) and light-emitting diodes (LEDs).

The presence of charge gives rise to an electrostatic force: charges exert a force on each other, an effect that was known, though not understood, in antiquity. A lightweight ball suspended from a string can be charged by touching it with a glass rod that has itself been charged by rubbing with a cloth. If a similar ball is charged by the same glass rod, it is found to repel the first: the charge acts to force the two balls apart. Two balls that are charged with a rubbed amber rod also repel each other. However, if one ball is charged by the glass rod, and the other by an amber rod, the two balls are found to attract each other. These phenomena were investigated in the late eighteenth century by Charles-Augustin de Coulomb, who deduced that charge manifests itself in two opposing forms. This discovery led to the well-known axiom: like-charged objects repel and opposite-charged objects attract.

The force acts on the charged particles themselves, hence charge has a tendency to spread itself as evenly as possible over a conducting surface. The magnitude of the electromagnetic force, whether attractive or repulsive, is given by Coulomb's law, which relates the force to the product of the charges and has an inverse-square relation to the distance between them. The electromagnetic force is very strong, second in strength only to the strong interaction, but unlike that force it operates over all distances. In comparison with the much weaker gravitational force, the electromagnetic force pushing two electrons apart is about 10^42 times as strong as the gravitational attraction pulling them together.
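That figure can be checked directly from Coulomb's law and Newton's law of gravitation, since the separation between the two electrons cancels out of the ratio. The short Python sketch below is purely illustrative and uses rounded values of the standard constants.

    # Ratio of electrostatic repulsion to gravitational attraction for two electrons.
    # F_coulomb = k*e^2/r^2 and F_gravity = G*m_e^2/r^2, so the ratio is independent of r.
    k   = 8.988e9       # Coulomb constant, N*m^2/C^2
    e   = 1.602e-19     # elementary charge, C
    G   = 6.674e-11     # gravitational constant, N*m^2/kg^2
    m_e = 9.109e-31     # electron mass, kg

    ratio = (k * e**2) / (G * m_e**2)
    print(f"electric / gravitational force ratio ~ {ratio:.2e}")   # ~ 4e+42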
Study has shown that the origin of charge is in certain types of subatomic particles which have the property of electric charge. Electric charge gives rise to and interacts with the electromagnetic force, one of the four fundamental forces of nature. The most familiar carriers of electrical charge are the electron and proton. Experiment has shown charge to be a conserved quantity: the net charge within an electrically isolated system will always remain constant regardless of any changes taking place within that system. Within the system, charge may be transferred between bodies, either by direct contact, or by passing along a conducting material, such as a wire. The informal term static electricity refers to the net presence (or 'imbalance') of charge on a body, usually caused when dissimilar materials are rubbed together, transferring charge from one to the other.

The charge on electrons and protons is opposite in sign, hence an amount of charge may be expressed as being either negative or positive. By convention, the charge carried by electrons is deemed negative, and that by protons positive, a custom that originated with the work of Benjamin Franklin. The amount of charge is usually given the symbol Q and expressed in coulombs; each electron carries the same charge of approximately −1.6022×10^−19 coulomb. The proton has a charge that is equal and opposite, and thus +1.6022×10^−19 coulomb. Charge is possessed not just by matter, but also by antimatter, each antiparticle bearing an equal and opposite charge to its corresponding particle.

The movement of electric charge is known as an electric current, the intensity of which is usually measured in amperes. Current can consist of any moving charged particles; most commonly these are electrons, but any charge in motion constitutes a current. Electric current can flow through some materials, called electrical conductors, but will not flow through an electrical insulator.

By historical convention, a positive current is defined as having the same direction of flow as any positive charge it contains, or as flowing from the most positive part of a circuit to the most negative part. Current defined in this manner is called conventional current. The motion of negatively charged electrons around an electric circuit, one of the most familiar forms of current, is thus deemed positive in the direction opposite to that of the electrons. However, depending on the conditions, an electric current can consist of a flow of charged particles in either direction, or even in both directions at once. The positive-to-negative convention is widely used to simplify this situation.

The process by which electric current passes through a material is termed electrical conduction, and its nature varies with that of the charged particles and the material through which they are travelling. Examples of electric currents include metallic conduction, where electrons flow through a conductor such as a metal, and electrolysis, where ions (charged atoms) flow through liquids or through plasmas such as electrical sparks. While the particles themselves can move quite slowly, sometimes with an average drift velocity of only fractions of a millimetre per second, the electric field that drives them propagates at close to the speed of light, enabling electrical signals to pass rapidly along wires.

Current causes several observable effects, which historically were the means of recognising its presence. That water could be decomposed by the current from a voltaic pile was discovered by Nicholson and Carlisle in 1800, a process now known as electrolysis. Their work was greatly expanded upon by Michael Faraday in 1833. Current through a resistance causes localised heating, an effect James Prescott Joule studied mathematically in 1840. One of the most important discoveries relating to current was made accidentally by Hans Christian Ørsted in 1820, when, while preparing a lecture, he witnessed the current in a wire disturbing the needle of a magnetic compass.
He had discovered electromagnetism, a fundamental interaction between electricity and magnetism. The level of electromagnetic emissions generated by electric arcing is high enough to produce electromagnetic interference, which can be detrimental to the workings of adjacent equipment.

In engineering or household applications, current is often described as being either direct current (DC) or alternating current (AC). These terms refer to how the current varies in time. Direct current, as produced for example by a battery and required by most electronic devices, is a unidirectional flow from the positive part of a circuit to the negative. If, as is most common, this flow is carried by electrons, they will be travelling in the opposite direction. Alternating current is any current that reverses direction repeatedly; almost always this takes the form of a sine wave. Alternating current thus pulses back and forth within a conductor without the charge moving any net distance over time. The time-averaged value of an alternating current is zero, but it delivers energy in first one direction, and then the reverse. Alternating current is affected by electrical properties that are not observed under steady-state direct current, such as inductance and capacitance. These properties can become important when circuitry is subjected to transients, such as when first energised.

The concept of the electric field was introduced by Michael Faraday. An electric field is created by a charged body in the space that surrounds it, and results in a force exerted on any other charges placed within the field. The electric field acts between two charges in a similar manner to the way that the gravitational field acts between two masses, and like it, extends towards infinity and shows an inverse-square relationship with distance. However, there is an important difference. Gravity always acts in attraction, drawing two masses together, while the electric field can result in either attraction or repulsion. Since large bodies such as planets generally carry no net charge, the electric field at a distance is usually zero. Thus gravity is the dominant force at distance in the universe, despite being much weaker.

A hollow conducting body carries all its charge on its outer surface. The field is therefore zero at all places inside the body. This is the operating principle of the Faraday cage, a conducting metal shell which isolates its interior from outside electrical effects.

The principles of electrostatics are important when designing items of high-voltage equipment. There is a finite limit to the electric field strength that may be withstood by any medium. Beyond this point, electrical breakdown occurs and an electric arc causes flashover between the charged parts. Air, for example, tends to arc across small gaps at electric field strengths which exceed 30 kV per centimetre. Over larger gaps, its breakdown strength is weaker, perhaps 1 kV per centimetre. The most visible natural occurrence of this is lightning, caused when charge becomes separated in the clouds by rising columns of air, raising the electric field in the air to more than it can withstand. The voltage of a large lightning cloud may be as high as 100 MV, with discharge energies as great as 250 kWh.

[Image: A pair of AA cells. The + sign indicates the polarity of the potential difference between the battery terminals.]

The concept of electric potential is closely linked to that of the electric field.
A small charge placed within an electric field experiences a force, and to have brought that charge to that point against the force requires work. The electric potential at any point is defined as the energy required to bring a unit test charge from an infinite distance slowly to that point. It is usually measured in volts, and one volt is the potential for which one joule of work must be expended to bring a charge of one coulomb from infinity. This definition of potential, while formal, has little practical application, and a more useful concept is that of electric potential difference: the energy required to move a unit charge between two specified points. An electric field has the special property that it is conservative, which means that the path taken by the test charge is irrelevant: all paths between two specified points expend the same energy, and thus a unique value for potential difference may be stated. The volt is so strongly identified as the unit of choice for measurement and description of electric potential difference that the term voltage sees greater everyday usage.

For practical purposes, it is useful to define a common reference point to which potentials may be expressed and compared. While this could be at infinity, a much more useful reference is the Earth itself, which is assumed to be at the same potential everywhere. This reference point naturally takes the name earth or ground. Earth is assumed to be an infinite source of equal amounts of positive and negative charge, and is therefore electrically uncharged, and unchargeable.

Electric potential is a scalar quantity, that is, it has only magnitude and not direction. It may be viewed as analogous to height: just as a released object will fall through a difference in heights caused by a gravitational field, so a charge will 'fall' across the voltage caused by an electric field. As relief maps show contour lines marking points of equal height, a set of lines marking points of equal potential (known as equipotentials) may be drawn around an electrostatically charged object. The equipotentials cross all lines of force at right angles. They must also lie parallel to a conductor's surface, otherwise this would produce a force that would move the charge carriers to even out the potential of the surface.
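The joule-per-coulomb definition lends itself to simple arithmetic: the work needed to move a charge Q through a potential difference V is W = QV. The illustrative Python lines below use arbitrary example values.

    # Work needed to move charge Q across a potential difference V: W = Q * V
    Q = 2.0      # charge in coulombs (arbitrary example value)
    V = 12.0     # potential difference in volts
    W = Q * V
    print(f"{W} joules")   # 24.0 joules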
Ørsted's discovery in 1820 that a magnetic field existed around all sides of a wire carrying an electric current indicated that there was a direct relationship between electricity and magnetism. Moreover, the interaction seemed different from gravitational and electrostatic forces, the two forces of nature then known. The force on the compass needle did not direct it to or away from the current-carrying wire, but acted at right angles to it. Ørsted's slightly obscure words were that "the electric conflict acts in a revolving manner." The force also depended on the direction of the current, for if the flow was reversed, then the force did too.

Ørsted did not fully understand his discovery, but he observed the effect was reciprocal: a current exerts a force on a magnet, and a magnetic field exerts a force on a current. The phenomenon was further investigated by Ampère, who discovered that two parallel current-carrying wires exerted a force upon each other: two wires conducting currents in the same direction are attracted to each other, while wires carrying currents in opposite directions are forced apart. The interaction is mediated by the magnetic field each current produces and forms the basis for the international definition of the ampere.

This relationship between magnetic fields and currents is extremely important, for it led to Michael Faraday's invention of the electric motor in 1821. Faraday's homopolar motor consisted of a permanent magnet sitting in a pool of mercury. A current was allowed through a wire suspended from a pivot above the magnet and dipped into the mercury. The magnet exerted a tangential force on the wire, making it circle around the magnet for as long as the current was maintained.

Experimentation by Faraday in 1831 revealed that a wire moving perpendicular to a magnetic field developed a potential difference between its ends. Further analysis of this process, known as electromagnetic induction, enabled him to state the principle, now known as Faraday's law of induction, that the potential difference induced in a closed circuit is proportional to the rate of change of magnetic flux through the loop. Exploitation of this discovery enabled him to invent the first electrical generator in 1831, in which he converted the mechanical energy of a rotating copper disc to electrical energy. Faraday's disc was inefficient and of no use as a practical generator, but it showed the possibility of generating electric power using magnetism, a possibility that would be taken up by those that followed on from his work.

[Image: Italian physicist Alessandro Volta showing his "battery" to French emperor Napoleon Bonaparte in the early 19th century.]

The ability of chemical reactions to produce electricity, and conversely the ability of electricity to drive chemical reactions, has a wide array of uses.

Electrochemistry has always been an important part of electricity. From the initial invention of the voltaic pile, electrochemical cells have evolved into the many different types of batteries, electroplating and electrolysis cells. Aluminium is produced in vast quantities this way, and many portable devices are electrically powered using rechargeable cells.

[Image: A basic electric circuit. The voltage source V on the left drives a current I around the circuit, delivering electrical energy into the resistor R. From the resistor, the current returns to the source, completing the circuit.]

An electric circuit is an interconnection of electric components such that electric charge is made to flow along a closed path (a circuit), usually to perform some useful task.

Electric power is the rate at which electric energy is transferred by an electric circuit. The SI unit of power is the watt, one joule per second.

Electricity generation is often done with electric generators, but it can also be supplied by chemical sources such as electric batteries or by other means from a wide variety of sources of energy. Electric power is generally supplied to businesses and homes by the electric power industry. Electricity is usually sold by the kilowatt-hour (3.6 MJ), which is the product of power in kilowatts multiplied by running time in hours. Electric utilities measure power using electricity meters, which keep a running total of the electric energy delivered to a customer.
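As a purely illustrative calculation of that billing unit (the tariff below is an arbitrary example figure, not a quoted price), energy is the product of power and time:

    # Energy = power x time; 1 kWh = 3.6 MJ
    power_kw = 2.0            # a 2 kW heater (example value)
    hours    = 3.0            # running time in hours
    energy_kwh = power_kw * hours
    energy_mj  = energy_kwh * 3.6
    price_per_kwh = 0.25      # example tariff, currency units per kWh
    print(energy_kwh, "kWh =", energy_mj, "MJ, cost", energy_kwh * price_per_kwh)
    # -> 6.0 kWh = 21.6 MJ, cost 1.5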
Unlike fossil fuels, electricity is a low-entropy form of energy and can be converted into motion or many other forms of energy with high efficiency.

Electronics deals with electrical circuits that involve active electrical components such as vacuum tubes, transistors, diodes, optoelectronics, sensors and integrated circuits, and associated passive interconnection technologies. The nonlinear behaviour of active components and their ability to control electron flows makes amplification of weak signals possible, and electronics is widely used in information processing, telecommunications, and signal processing. The ability of electronic devices to act as switches makes digital information processing possible. Interconnection technologies such as circuit boards, electronics packaging technology, and other varied forms of communication infrastructure complete circuit functionality and transform the mixed components into a regular working system.

Today, most electronic devices use semiconductor components to perform electron control. The study of semiconductor devices and related technology is considered a branch of solid-state physics, whereas the design and construction of electronic circuits to solve practical problems comes under electronics engineering.

Thus, the work of many researchers enabled the use of electronics to convert signals into high-frequency oscillating currents, and via suitably shaped conductors, electricity permits the transmission and reception of these signals via radio waves over very long distances.

[Image: Early 20th-century alternator made in Budapest, Hungary, in the power generating hall of a hydroelectric station (photograph by Prokudin-Gorsky, 1905–1915).]

In the 6th century BC, the Greek philosopher Thales of Miletus experimented with amber rods, and these experiments were the first studies into the production of electrical energy. While this method, now known as the triboelectric effect, can lift light objects and generate sparks, it is extremely inefficient. It was not until the invention of the voltaic pile in the eighteenth century that a viable source of electricity became available. The voltaic pile, and its modern descendant, the electrical battery, store energy chemically and make it available on demand in the form of electrical energy. The battery is a versatile and very common power source which is ideally suited to many applications, but its energy storage is finite, and once discharged it must be disposed of or recharged. For large electrical demands, electrical energy must be generated and transmitted continuously over conductive transmission lines.

Electrical power is usually generated by electro-mechanical generators driven by steam produced from fossil fuel combustion or the heat released from nuclear reactions, or from other sources such as kinetic energy extracted from wind or flowing water. The modern steam turbine, invented by Sir Charles Parsons in 1884, today generates about 80 percent of the electric power in the world using a variety of heat sources. Such generators bear no resemblance to Faraday's homopolar disc generator of 1831, but they still rely on his electromagnetic principle that a conductor linking a changing magnetic field induces a potential difference across its ends. The invention in the late nineteenth century of the transformer meant that electrical power could be transmitted more efficiently at a higher voltage but lower current.
Efficient electrical transmission meant in turn that electricity could be generated at centralised power stations, where it benefited from economies of scale, and then be despatched relatively long distances to where it was needed.

Since electrical energy cannot easily be stored in quantities large enough to meet demands on a national scale, at all times exactly as much must be produced as is required. This requires electricity utilities to make careful predictions of their electrical loads, and to maintain constant co-ordination with their power stations. A certain amount of generation must always be held in reserve to cushion an electrical grid against inevitable disturbances and losses.

Electricity is a very convenient way to transfer energy, and it has been adapted to a huge, and growing, number of uses. The invention of a practical incandescent light bulb in the 1870s led to lighting becoming one of the first publicly available applications of electrical power. Although electrification brought with it its own dangers, replacing the naked flames of gas lighting greatly reduced fire hazards within homes and factories. Public utilities were set up in many cities targeting the burgeoning market for electrical lighting. In the late 20th century and in modern times, the trend has been towards deregulation in the electrical power sector.

The resistive Joule heating effect employed in filament light bulbs also sees more direct use in electric heating. While this is versatile and controllable, it can be seen as wasteful, since most electrical generation has already required the production of heat at a power station. A number of countries, such as Denmark, have issued legislation restricting or banning the use of resistive electric heating in new buildings. Electricity is however still a highly practical energy source for heating and refrigeration, with air conditioning and heat pumps representing a growing sector of electricity demand for heating and cooling, the effects of which electricity utilities are increasingly obliged to accommodate.

Electricity is used within telecommunications, and indeed the electrical telegraph, demonstrated commercially in 1837 by Cooke and Wheatstone, was one of its earliest applications. With the construction of first intercontinental, and then transatlantic, telegraph systems in the 1860s, electricity had enabled communications in minutes across the globe. Optical fibre and satellite communication have taken a share of the market for communications systems, but electricity can be expected to remain an essential part of the process.

The effects of electromagnetism are most visibly employed in the electric motor, which provides a clean and efficient means of motive power. A stationary motor such as a winch is easily provided with a supply of power, but a motor that moves with its application, such as an electric vehicle, is obliged either to carry along a power source such as a battery, or to collect current from a sliding contact such as a pantograph. Electrically powered vehicles are used in public transportation, such as electric buses and trains, and an increasing number of battery-powered electric cars are in private ownership.

Electronic devices make use of the transistor, perhaps one of the most important inventions of the twentieth century, and a fundamental building block of all modern circuitry.
A modern integrated circuit may contain several billion miniaturised transistors in a region only a few centimetres square.

A voltage applied to a human body causes an electric current through the tissues, and although the relationship is non-linear, the greater the voltage, the greater the current. The threshold for perception varies with the supply frequency and with the path of the current, but is about 0.1 mA to 1 mA for mains-frequency electricity, though a current as low as a microamp can be detected as an electrovibration effect under certain conditions. If the current is sufficiently high, it will cause muscle contraction, fibrillation of the heart, and tissue burns. The lack of any visible sign that a conductor is electrified makes electricity a particular hazard. The pain caused by an electric shock can be intense, leading electricity at times to be employed as a method of torture. Death caused by an electric shock is referred to as electrocution. Electrocution is still the means of judicial execution in some jurisdictions, though its use has become rarer in recent times.

Electricity is not a human invention, and may be observed in several forms in nature, a prominent manifestation of which is lightning. Many interactions familiar at the macroscopic level, such as touch, friction or chemical bonding, are due to interactions between electric fields on the atomic scale. The Earth's magnetic field is thought to arise from a natural dynamo of circulating currents in the planet's core. Certain crystals, such as quartz, or even sugar, generate a potential difference across their faces when subjected to external pressure. This phenomenon is known as piezoelectricity, from the Greek piezein (πιέζειν), meaning to press, and was discovered in 1880 by Pierre and Jacques Curie. The effect is reciprocal: when a piezoelectric material is subjected to an electric field, a small change in physical dimensions takes place.

Bioelectrogenesis in microbial life is a prominent phenomenon in soils and sediment ecology resulting from anaerobic respiration. The microbial fuel cell mimics this ubiquitous natural phenomenon.

Some organisms, such as sharks, are able to detect and respond to changes in electric fields, an ability known as electroreception, while others, termed electrogenic, are able to generate voltages themselves to serve as a predatory or defensive weapon. Members of the order Gymnotiformes, of which the best-known example is the electric eel, detect or stun their prey via high voltages generated from modified muscle cells called electrocytes. All animals transmit information along their cell membranes with voltage pulses called action potentials, whose functions include communication by the nervous system between neurons and muscles. An electric shock stimulates this system, and causes muscles to contract. Action potentials are also responsible for coordinating activities in certain plants.

In the 19th and early 20th century, electricity was not part of the everyday life of many people, even in the industrialised Western world. The popular culture of the time accordingly often depicted it as a mysterious, quasi-magical force that can slay the living, revive the dead or otherwise bend the laws of nature. This attitude began with the 1771 experiments of Luigi Galvani in which the legs of dead frogs were shown to twitch on application of animal electricity. The revival or resuscitation of apparently dead or drowned persons was reported in the medical literature shortly after Galvani's work.
These results were known to Mary Shelley when she authored Frankenstein (1818), although she does not name the method of revitalization of the monster. The revitalization of monsters with electricity later became a stock theme in horror films.

As the public familiarity with electricity as the lifeblood of the Second Industrial Revolution grew, its wielders were more often cast in a positive light, such as the workers who "finger death at their gloves' end as they piece and repiece the living wires" in Rudyard Kipling's poem Sons of Martha, or the lineman of the 1968 song "Wichita Lineman".
[...] M\"{u}ller matrix. If, on the other hand, we know that the source is polarized and the observation is made on the same polarization, one can use a scalar model and adopt Jones calculus \cite{Goodman85,Popoff10a,Akbulut11}:
 \begin{eqnarray}
 E^{\rm out}_k = \sum_{i=1}^{N_I} t_{ki} E^{\rm in}_i \qquad \forall~ k=1,\ldots,N_O
 \label{eq:transm}
 \end{eqnarray}
 We recall that the elements of the transmission matrix are random complex coefficients \cite{Popoff10a}. For the case of completely unpolarized modes, we can also use a scalar model similar to Eq. \eqref{eq:transm}, but whose variables are the intensities of the outgoing/incoming fields, rather than the fields themselves.\\
In the following, for simplicity, we will consider Eq. (\ref{eq:transm}) as our starting point, where $E^{\rm out}_k$, $E^{\rm in}_i$ and $t_{ki}$ are all complex scalars.
If Eq. \eqref{eq:transm} holds for any $k$, we can write:
 \begin{eqnarray}
 \int \prod_{k=1}^{N_O} dE^{\rm out}_k \prod_{k=1}^{N_O}\delta\left(E^{\rm out}_k - \sum_{j=1}^{N_I} t_{kj} E^{\rm in}_j \right) = 1
 \nonumber
 \\
 \label{eq:deltas}
 \end{eqnarray}

 Observed data are a noisy representation of the true values of the fields. Therefore, in inference problems it is statistically more meaningful to take that noise into account in a probabilistic way, rather than looking at the precise solutions of the exact equations (whose parameters are unknown). To this aim we can introduce Gaussian distributions whose zero-variance limit yields the Dirac deltas in Eq. (\ref{eq:deltas}). Moreover, we move to consider the ensemble of all possible solutions of Eq. (\ref{eq:transm}) at given $\mathbb{T}$, looking at all configurations of input fields. We thus define the function:
 \begin{eqnarray}
 Z &\equiv &\int_{{\cal S}_{\rm in}} \prod_{j=1}^{N_I} dE^{\rm in}_j \int_{{\cal S}_{\rm out}}\prod_{k=1}^{N_O} dE^{\rm out}_k 
 \label{def:Z}
\\
 \times
 &&\prod_{k=1}^{N_O}
 \frac{1}{\sqrt{2\pi \Delta^2}} \exp\left\{-\frac{1}{2 \Delta^2}\left|
 E^{\rm out}_k -\sum_{j=1}^{N_I} t_{kj} E^{\rm in}_j\right|^2
\right\} 
\nonumber
 \end{eqnarray}
 We stress that the integral of Eq. \eqref{def:Z} is not exactly a Gaussian integral. Indeed, starting from Eq. \eqref{eq:deltas}, two constraints on the electromagnetic field intensities must be taken into account.

 The space of solutions is delimited by the total power ${\cal P}$ received by the system, i.e., ${\cal S}_{\rm in}: \{E^{\rm in} |\sum_k I^{\rm in}_k = \mathcal{P}\}$, which also implies a constraint on the total amount of energy transmitted through the medium, i.e., ${\cal S}_{\rm out}:\{E^{\rm out} |\sum_k I^{\rm out}_k=c\mathcal{P}\}$, where the attenuation factor $c<1$ accounts for total losses. As we will see in more detail in the following, since we are interested in inferring the transmission matrix through the PLM, we can omit these terms in Eq.
\eqref{eq:H_J}, since they do not depend on $\mathbb{T}$ and thus add no information to the gradients with respect to the elements of $\mathbb{T}$.

 Taking the same number of incoming and outgoing channels, $N_I=N_O=N/2$, and ordering the input fields in the first $N/2$ mode indices and the output fields in the last $N/2$ indices, we can drop the ``in'' and ``out'' superscripts and formally write $Z$ as a partition function
 \begin{eqnarray}
 \label{eq:z}
 && Z =\int_{\mathcal S} \prod_{j=1}^{N} dE_j \left( \frac{1}{\sqrt{2\pi \Delta^2}} \right)^{N/2} 
 \hspace*{-.4cm} \exp\left\{
 -\frac{ {\cal H} [\{E\};\mathbb{T}] }{2\Delta^2}
 \right\}
 \\
&&{\cal H} [\{E\};\mathbb{T}] =
- \sum_{k=1}^{N/2}\sum_{j=N/2+1}^{N} \left[E^*_j t_{jk} E_k + E_j t^*_{jk} E_k^* 
\right]
 \nonumber
\\
&&\qquad\qquad \qquad + \sum_{j=N/2+1}^{N} |E_j|^2+ \sum_{k,l}^{1,N/2}E_k
U_{kl} E_l^*
 \nonumber
 \\
 \label{eq:H_J}
 &&\hspace*{1.88cm } = - \sum_{nm}^{1,N} E_n J_{nm} E_m^*
 \end{eqnarray}
 where ${\cal H}$ is a real-valued function by construction, we have introduced the effective input-input coupling matrix
\begin{equation}
U_{kl} \equiv \sum_{j=N/2+1}^{N}t^*_{jl} t_{jk} 
 \label{def:U}
 \end{equation}
 and the whole interaction matrix reads (here $\mathbb{T} \equiv \{ t_{jk} \}$)
 \begin{equation}
 \label{def:J}
 \mathbb J\equiv \left(\begin{array}{c|c}
 -\mathbb{U} & \mathbb{T}\\
 \hline
 \mathbb{T}^\dagger & -\mathbb{I}\\
 \end{array}\right)
 \end{equation}
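For completeness (this step is implicit in the text), the quadratic form of Eq. (\ref{eq:H_J}) can be verified by expanding the squared modulus of Eq. (\ref{def:Z}), rewritten in the relabeled variables of Eq. (\ref{eq:z}):
\begin{eqnarray}
\sum_{j=N/2+1}^{N}\Big|E_j-\sum_{k=1}^{N/2}t_{jk}E_k\Big|^2 &=& \sum_{j=N/2+1}^{N}|E_j|^2
-\sum_{k=1}^{N/2}\sum_{j=N/2+1}^{N}\left[E^*_j t_{jk}E_k+E_j t^*_{jk}E^*_k\right]
\nonumber\\
&&+\sum_{k,l}^{1,N/2}E_k\left(\sum_{j=N/2+1}^{N}t^*_{jl}t_{jk}\right)E^*_l\, ,
\nonumber
\end{eqnarray}
which is ${\cal H}[\{E\};\mathbb{T}]$ with $U_{kl}$ as in Eq. (\ref{def:U}); in matrix form, $\mathbb{U}$ is the Hermitian combination $\mathbb{T}^\dagger\mathbb{T}$, up to the index-ordering convention adopted in Eq. (\ref{def:J}).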
 Determining the electromagnetic complex amplitude configurations that minimize the {\em cost function} ${\cal H}$, Eq. (\ref{eq:H_J}), amounts to maximizing the overall distribution, which is peaked around the solutions of the transmission Eqs. (\ref{eq:transm}). As the variance $\Delta^2\to 0$, the initial set of Eqs. (\ref{eq:transm}) is eventually recovered. The ${\cal H}$ function thus plays the role of a Hamiltonian and $\Delta^2$ the role of a noise-inducing temperature. The exact numerical problem corresponds to the zero-temperature limit of the statistical mechanical problem. Working with real data, though, which are noisy, a finite ``temperature'' allows for a better representation of the ensemble of solutions to the sets of equations of continuous variables.

 Now, we can express every phasor in Eq. \eqref{eq:z} as $E_k = A_k e^{\imath \phi_k}$. As a working hypothesis we will consider the intensities $A_k^2$ as either homogeneous or as \textit{quenched} with respect to the phases. The first condition holds, for instance, for the input intensities $|E^{\rm in}_k|$ produced by a phase-only spatial light modulator (SLM) with homogeneous illumination \cite{Popoff11}. With \textit{quenched} we mean, instead, that the intensity of each mode is the same for every solution of Eq. \eqref{eq:transm} at fixed $\mathbb T$.
We stress that including the intensities in the model does not preclude the inference analysis, but it is beyond the focus of the present work and will be considered elsewhere.

If all intensities are uniform in input and in output, this amounts to a constant rescaling of each of the four sectors of the matrix $\mathbb J$ in Eq. (\ref{def:J}), which does not change the properties of the matrices. For instance, if the original transmission matrix is unitary, so is the rescaled one, and the matrix $\mathbb U$ will be diagonal. Otherwise, if the intensities are \textit{quenched}, i.e., they can be considered as constants in Eq. (\ref{eq:transm}), they are inhomogeneous with respect to the phases. The generic Hamiltonian element will, therefore, rescale as
 \begin{eqnarray}
 E^*_n J_{nm} E_m = J_{nm} A_n A_m e^{\imath (\phi_n-\phi_m)} \to J_{nm} e^{\imath (\phi_n-\phi_m)}
 \nonumber
 \end{eqnarray}
 and the properties of the original $J_{nm}$ components are not conserved in the rescaled ones. In particular, we no longer have any argument to set the rescaled $U_{nm}\propto \delta_{nm}$. We thus end up with a complex-coupling $XY$ model, whose real-valued Hamiltonian is written as
 \begin{eqnarray}
 \mathcal{H}& = & - \frac{1}{2} \sum_{nm} J_{nm} e^{-\imath (\phi_n - \phi_m)} + \mbox{c.c.} 
 \label{eq:h_im}
\\ &=& - \frac{1}{2} \sum_{nm} \left[J^R_{nm} \cos(\phi_n - \phi_m)+
 J^I_{nm}\sin (\phi_n - \phi_m)\right] 
 \nonumber
 \end{eqnarray}
where $J_{nm}^R$ and $J_{nm}^I$ are the real and imaginary parts of $J_{nm}$. Since $\mathbb J$ is Hermitian, $J^R_{nm}=J^R_{mn}$ is symmetric and $J_{nm}^I=-J_{mn}^I$ is skew-symmetric.

 \section{Pseudolikelihood Maximization}
 \label{sec:plm}
The inverse problem consists in the reconstruction of the parameters $J_{nm}$ of the Hamiltonian, Eq. (\ref{eq:h_im}). Given a set of $M$ data configurations of $N$ spins, $\bm\sigma = \{ \cos \phi_i^{(\mu)},\sin \phi_i^{(\mu)} \}$, $i = 1,\dots,N$, $\mu=1,\dots,M$, we want to \emph{infer} the couplings:
 \begin{eqnarray}
\bm \sigma \rightarrow \mathbb{J} 
\nonumber
 \end{eqnarray}
 With this purpose in mind, in the rest of this section we implement the working equations for the techniques used.

 In order to test our methods, we generate the input data, i.e., the configurations, by Monte Carlo simulations of the model. The joint probability distribution of the $N$ variables $\bm{\phi}\equiv\{\phi_1,\dots,\phi_N\}$ follows the Gibbs-Boltzmann distribution:
 \begin{equation}\label{eq:p_xy}
 P(\bm{\phi}) = \frac{1}{Z} e^{-\beta \mathcal{H}\left(\bm{\phi}\right)} \quad \mbox{ where } \quad Z = \int \prod_{k=1}^N d\phi_k\, e^{-\beta \mathcal{H}\left(\bm{\phi}\right)} 
 \end{equation}
 where, with respect to the formalism of Eq. (\ref{def:Z}), $\beta=\left( 2\Delta^2 \right)^{-1}$. In order to stick to the usual statistical inference notation, in the following we rescale the couplings by a factor $\beta / 2$: $\beta J_{ij}/2 \rightarrow J_{ij}$.
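As an illustration of this data-generation step, a minimal single-spin Metropolis sampler for the Hamiltonian of Eq. (\ref{eq:h_im}) may be sketched as follows (a plain Python/NumPy sketch under the stated assumptions; the actual data used below are produced with Metropolis dynamics accelerated by parallel tempering, and the function and variable names here are ours):
\begin{verbatim}
import numpy as np

def metropolis_xy(JR, JI, beta, n_sweeps, rng=None):
    # Samples phases phi from P(phi) ~ exp(-beta*H), with
    # H = -1/2 sum_nm [JR_nm cos(phi_n - phi_m) + JI_nm sin(phi_n - phi_m)].
    # JR is symmetric, JI skew-symmetric, both with zero diagonal.
    if rng is None:
        rng = np.random.default_rng(0)
    N = JR.shape[0]
    phi = rng.uniform(0.0, 2.0 * np.pi, N)
    for _ in range(n_sweeps):
        for i in rng.permutation(N):
            # local field conjugate to (cos phi_i, sin phi_i)
            hx = JR[i] @ np.cos(phi) - JI[i] @ np.sin(phi)
            hy = JR[i] @ np.sin(phi) + JI[i] @ np.cos(phi)
            new = rng.uniform(0.0, 2.0 * np.pi)
            dE = -(hx * (np.cos(new) - np.cos(phi[i]))
                   + hy * (np.sin(new) - np.sin(phi[i])))
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                phi[i] = new
    return phi
\end{verbatim}
Configurations acquired after thermalization of such a chain play the role of the samples $\bm{\phi}^{(\mu)}$.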
 The main idea of the PLM is to work with the conditional probability distribution of one variable $\phi_i$ given all the other variables, $\bm{\phi}_{\backslash i}$:
 \begin{eqnarray}
	\nonumber
 P(\phi_i | \bm{\phi}_{\backslash i}) &=& \frac{1}{Z_i} \exp \left \{ {H_i^x (\bm{\phi}_{\backslash i})
 	\cos \phi_i + H_i^y (\bm{\phi}_{\backslash i}) \sin \phi_i } \right \}
	\\
 \label{eq:marginal_xy}
	&=&\frac{e^{H_i(\bm{\phi}_{\backslash i}) \cos{\left(\phi_i-\alpha_i(\bm{\phi}_{\backslash i})\right)}}}{2 \pi I_0(H_i)}
 \end{eqnarray}
 where $H_i^x$ and $H_i^y$ are defined as
 \begin{eqnarray}
 H_i^x (\bm{\phi}_{\backslash i}) &=& \sum_{j (\neq i)} J^R_{ij} \cos \phi_j - \sum_{j (\neq i) } J_{ij}^{I} \sin \phi_j \phantom{+ h^R_i} \label{eq:26} \\
 H_i^y (\bm{\phi}_{\backslash i}) &=& \sum_{j (\neq i)} J^R_{ij} \sin \phi_j + \sum_{j (\neq i) } J_{ij}^{I} \cos \phi_j \phantom{ + h_i^{I} }\label{eq:27}
 \end{eqnarray}
and $H_i= \sqrt{(H_i^x)^2 + (H_i^y)^2}$, $\alpha_i = \arctan \left(H_i^y/H_i^x\right)$, and we have introduced the modified Bessel function of the first kind:
 \begin{equation}
 \nonumber
 I_k(x) = \frac{1}{2 \pi}\int_{0}^{2 \pi} d \phi\, e^{x \cos{ \phi}}\cos{k \phi}
 \end{equation}

 Given $M$ observation samples $\bm{\phi}^{(\mu)}=\{\phi^\mu_1,\ldots,\phi^\mu_N\}$, $\mu = 1,\dots, M$, the pseudo-loglikelihood for the variable $i$ is given by the logarithm of Eq. (\ref{eq:marginal_xy}),
 \begin{eqnarray}
 \label{eq:L_i}
 L_i &=& \frac{1}{M} \sum_{\mu = 1}^M \ln P(\phi_i^{(\mu)}|\bm{\phi}^{(\mu)}_{\backslash i})
 \\
 \nonumber
 & =& \frac{1}{M} \sum_{\mu = 1}^M \left[ H_i^{(\mu)} \cos( \phi_i^{(\mu)} - \alpha_i^{(\mu)}) - \ln 2 \pi I_0\left(H_i^{(\mu)}\right)\right] \, .
 \end{eqnarray}
The underlying idea of the PLM is that an approximation of the true parameters of the model is obtained at the values that maximize the functions $L_i$. The specific maximization scheme differentiates the different techniques.
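For reference, Eqs. (\ref{eq:26})--(\ref{eq:L_i}) translate directly into a few lines of NumPy (a sketch written for clarity rather than speed; \texttt{i0e} denotes the exponentially scaled modified Bessel function, used here only to evaluate $\ln I_0$ stably):
\begin{verbatim}
import numpy as np
from scipy.special import i0e

def pseudo_loglikelihood(i, phi, JR, JI):
    # L_i of Eq. (eq:L_i). phi has shape (M, N); JR, JI are N x N
    # with zero diagonal (JR symmetric, JI skew-symmetric).
    c, s = np.cos(phi), np.sin(phi)              # shape (M, N)
    Hx = c @ JR[i] - s @ JI[i]                   # Eq. (eq:26), one value per sample
    Hy = s @ JR[i] + c @ JI[i]                   # Eq. (eq:27)
    H = np.hypot(Hx, Hy)
    alpha = np.arctan2(Hy, Hx)
    log_norm = np.log(2.0 * np.pi) + np.log(i0e(H)) + H   # ln(2*pi*I0(H))
    return np.mean(H * np.cos(phi[:, i] - alpha) - log_norm)
\end{verbatim}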
 \subsection{PLM with $l_2$ regularization}
 Especially for the case of sparse graphs, it is useful to add a regularizer, which prevents the maximization routine from moving towards high values of $J_{ij}$ and $h_i$ without converging. We will adopt an $l_2$ regularization, so that the pseudolikelihood function (PLF) at site $i$ reads:
 \begin{equation}\label{eq:plf_i}
 {\cal L}_i = L_i
 - \lambda \sum_{i \neq j} \left(J_{ij}^R\right)^2 - \lambda \sum_{i \neq j} \left(J_{ij}^I\right)^2 
 \end{equation}
 with $\lambda>0$. Note that the value of $\lambda$ is to some extent arbitrary, but it must not be too large, so that the regularizer does not overwhelm $L_i$.
 The standard implementation of the PLM consists in maximizing each ${\cal L}_i$, for $i=1\dots N$, separately. The estimated values of the couplings are then:
 \begin{equation}
 \{ J_{i j}^*\}_{j\in \partial i} := \mbox{arg max}_{ \{ J_{ij} \}}
 \left[{\cal L}_i\right]
 \end{equation}
 In this way, we obtain two estimates for the coupling $J_{ij}$: one from the maximization of ${\cal L}_i$, say $J_{ij}^{(i)}$, and another one from ${\cal L}_j$, say $J_{ij}^{(j)}$. Since the original Hamiltonian of the $XY$ model is Hermitian, we know that the real part of the couplings is symmetric while the imaginary part is skew-symmetric. The final estimate for $J_{ij}$ can then be obtained by averaging the two results:
 \begin{equation}\label{eq:symm}
 J_{ij}^{\rm inferred} = \frac{J_{ij}^{(i)} + \bar{J}_{ij}^{(j)}}{2} 
 \end{equation}
 where by $\bar{J}$ we indicate the complex conjugate.
 It is worth noting that the pseudolikelihood $L_i$, Eq. \eqref{eq:L_i}, is characterized by the following properties: (i) the normalization term of Eq. \eqref{eq:marginal_xy} can be computed analytically, at odds with the {\em full} likelihood case, which in general requires a computational time scaling exponentially with the size of the system; (ii) the $\ell_2$-regularized pseudolikelihood defined in Eq. \eqref{eq:plf_i} is strictly concave (i.e., it has a single maximizer) \cite{Ravikumar10}; (iii) it is consistent, i.e., if $M$ samples are generated by a model $P(\phi | J^*)$, the maximizer tends to $J^*$ for $M\rightarrow\infty$ \cite{besag1975}. Note also that (iii) guarantees that $|J^{(i)}_{ij}-J^{(j)}_{ij}| \rightarrow 0$ for $M\rightarrow \infty$.
 In Secs. \ref{sec:res_reg} and \ref{sec:res_dec} we report the results obtained and analyze the performance of the PLM, having taken the configurations from Monte Carlo simulations of models whose details are known.
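In practice each regularized PLF is maximized with an off-the-shelf gradient-based optimizer (the results reported below use the MATLAB routine \texttt{minFunc\_2012}); purely for illustration, a compact Python analogue based on \texttt{scipy.optimize.minimize} and on the \texttt{pseudo\_loglikelihood} sketch above could read:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def fit_site(i, phi, lam=0.01):
    # Maximize the l2-regularized PLF of Eq. (eq:plf_i) for site i.
    # Returns the inferred rows JR[i, :] and JI[i, :] (diagonal kept at zero).
    # Gradients are approximated numerically; fine for small N.
    M, N = phi.shape
    off = np.arange(N) != i

    def neg_plf(x):
        JR, JI = np.zeros((N, N)), np.zeros((N, N))
        JR[i, off] = x[:N - 1]
        JI[i, off] = x[N - 1:]
        L_i = pseudo_loglikelihood(i, phi, JR, JI)
        return -(L_i - lam * np.sum(x ** 2))

    res = minimize(neg_plf, np.zeros(2 * (N - 1)), method="L-BFGS-B")
    JR_i, JI_i = np.zeros(N), np.zeros(N)
    JR_i[off], JI_i[off] = res.x[:N - 1], res.x[N - 1:]
    return JR_i, JI_i
\end{verbatim}
The two single-site estimates $J^{(i)}_{ij}$ and $J^{(j)}_{ij}$ obtained in this way are then combined as in Eq. (\ref{eq:symm}).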
 \subsection{PLM with decimation}
 Even though the PLM with $l_2$ regularization allows one to push the inference towards the low-temperature region and the low-sampling regime with better performance than mean-field methods, in some situations some couplings are overestimated and far from symmetric. Moreover, the technique carries the bias of the $l_2$ regularizer. To overcome these problems, Decelle and Ricci-Tersenghi introduced a new method \cite{Decelle14}, known as PLM + decimation: the algorithm maximizes the sum of the $L_i$,
 \begin{eqnarray}
 {\cal L}\equiv \frac{1}{N}\sum_{i=1}^N \mbox{L}_i
 \end{eqnarray} 
 and then recursively sets to zero the couplings that are estimated to be very small. We expect that, as long as we are setting to zero couplings that are unnecessary to fit the data, ${\cal L}$ should not change much. As decimation proceeds, a point is reached where ${\cal L}$ decreases abruptly, indicating that relevant couplings are being decimated and under-fitting is taking place. Let us denote by $x$ the fraction of non-decimated couplings. To have a quantitative measure for the halting criterion of the decimation process, a tilted ${\cal L}$ is defined as
 \begin{eqnarray}
 \mathcal{L}_t &\equiv& \mathcal{L} - x \mathcal{L}_{\textup{max}} - (1-x) \mathcal{L}_{\textup{min}} \label{$t$PLF} 
 \end{eqnarray}
 where 
 \begin{itemize}
 \item $\mathcal{L}_{\textup{min}}$ is the pseudolikelihood of a model with independent variables; in the XY case, $\mathcal{L}_{\textup{min}}=-\ln{2 \pi}$.
 \item $\mathcal{L}_{\textup{max}}$ is the pseudolikelihood of the fully connected model, maximized over all the $N(N-1)/2$ possible couplings. 
 \end{itemize}
 At the first step, when $x=1$, $\mathcal{L}$ takes the value $\mathcal{L}_{\rm max}$ and $\mathcal{L}_t=0$. At the last step, for an empty graph, i.e., $x=0$, $\mathcal{L}$ takes the value $\mathcal{L}_{\rm min}$ and, hence, again $\mathcal{L}_t =0$. In the intermediate steps of the decimation procedure, as $x$ decreases from $1$ to $0$, one observes that $\mathcal{L}_t$ first increases linearly and then displays an abrupt decrease, indicating that from this point on relevant couplings are being decimated \cite{Decelle14}. In Fig. \ref{Jor1-$t$PLF} we give an instance of this behavior for the 2D short-range XY model with ordered couplings. We notice that the maximum point of $\mathcal{L}_t$ coincides with the minimum point of the reconstruction error, the latter defined as 
 \begin{eqnarray}\label{eq:errj}
 \mbox{err}_J \equiv \sqrt{\frac{\sum_{i<j} (J^{\rm inferred}_{ij} -J^{\rm true}_{ij})^2}{N(N-1)/2}} \label{err}
 \end{eqnarray}
 We stress that the ${\cal L}_t$ maximum is obtained ignoring the underlying graph, while the err$_J$ minimum can only be evaluated once the true graph is known.
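Schematically, and assuming a routine \texttt{fit\_plm(mask, phi)} (a hypothetical placeholder, not part of this work's code) that maximizes ${\cal L}$ over the couplings allowed by the Boolean mask and returns the maximized value together with the inferred couplings, the decimation loop with the halting criterion of Eq. \eqref{$t$PLF} can be sketched as:
\begin{verbatim}
import numpy as np

def decimate(fit_plm, phi, N, step=10):
    # Progressive decimation with the tilted-pseudolikelihood halt criterion.
    # fit_plm(mask, phi) -> (L, J) is assumed to maximize (1/N) sum_i L_i over
    # the couplings allowed by the boolean mask (upper triangle, i < j).
    mask = np.triu(np.ones((N, N), dtype=bool), k=1)   # start fully connected
    n_pairs = mask.sum()
    L_max, J = fit_plm(mask, phi)                      # x = 1
    L_min = -np.log(2.0 * np.pi)                       # independent XY variables
    best = (0.0, mask.copy(), J)
    while mask.any():
        # decimate the `step` smallest surviving couplings (by modulus)
        iu = np.argwhere(mask)
        small = np.argsort(np.abs(J[mask]))[:step]
        mask[tuple(iu[small].T)] = False
        L, J = fit_plm(mask, phi)
        x = mask.sum() / n_pairs
        L_t = L - x * L_max - (1.0 - x) * L_min        # Eq. (tPLF)
        if L_t > best[0]:
            best = (L_t, mask.copy(), J)
    return best   # (max tilted PL, most likely graph, inferred couplings)
\end{verbatim}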
 \begin{figure}[t!]
 	\centering
 	\includegraphics[width=1\linewidth]{Jor1_dec_tPLF_new.eps}
 	\caption{The tilted pseudolikelihood ${\cal L}_t$ and the reconstruction error vs the number of decimated couplings for the 2D XY model with ordered, real-valued $J$ and $N=64$ spins. The peak of ${\cal L}_t$ coincides with the dip of the error.} 
 	\label{Jor1-$t$PLF}
 \end{figure} 

 In the next sections we show the results obtained on the $XY$ model, analyzing the performance of the two methods and comparing them also with a mean-field method \cite{Tyagi15}.

 \section{Inferred couplings with PLM-$l_2$}
 \label{sec:res_reg}
 \subsection{$XY$ model with real-valued couplings}

 In order to obtain the vector of couplings $J_{ij}^{\rm inferred}$, the function $-\mathcal{L}_i$ is minimized using the vector of derivatives ${\partial \mathcal{L}_i}/{\partial J_{ij}}$. The process is repeated for every site, thus obtaining a fully connected adjacency matrix. The results presented here are obtained with $\lambda = 0.01$. For the minimization we have used the MATLAB routine \emph{minFunc\_2012} \cite{min_func}. 

 \begin{figure}[t!]
 	\centering
 	\includegraphics[width=1\linewidth]{Jor11_2D_l2_JR_soJR_TPJR}
 	\caption{Top panels: instances of single-site coupling reconstruction for the case of $N=64$ XY spins on a 2D lattice with ordered $J$ (left column) and bimodal-distributed $J$ (right column). Bottom panels: sorted couplings.}
 	\label{PL-Jor1}
 \end{figure}

To produce the data by means of numerical Monte Carlo simulations, a system of $N=64$ spin variables is considered on a deterministic 2D lattice with periodic boundary conditions. Each spin thus has connectivity $4$, i.e., we expect to infer an adjacency matrix with $N c = 256$ couplings different from zero. The dynamics of the simulated model is based on the Metropolis algorithm, and parallel tempering \cite{earl05} is used to speed up the thermalization of the system. Thermalization is tested by looking at the average energy over logarithmic time windows, and the acquisition of independent configurations starts only after the system is well thermalized.

 For the values of the couplings we considered two cases: an ordered case, indicated in the figures as $J$ ordered (e.g., left column of Fig. \ref{PL-Jor1}), where the couplings can take the values $J_{ij}=0,J$, with $J=1$, and a quenched disordered case, indicated in the figures as $J$ disordered (e.g., right column of Fig. \ref{PL-Jor1}), where the couplings can also take negative values, i.e., $J_{ij}=0,J,-J$, with a certain probability. The results presented here were obtained with bimodal-distributed $J$s: $P(J_{ij}=J)=P(J_{ij}=-J)=1/2$. The performance of the PLM has proven not to depend on $P(J)$. We recall that in Sec. \ref{sec:plm} we used the temperature-rescaled notation, i.e., $J_{ij}$ stands for $J_{ij}/T$. 

 To analyze the performance of the PLM, in Fig. \ref{PL-Jor1} the inferred couplings, $\mathbb{J}^R_{\rm inf}$, are shown on top of the original couplings, $\mathbb{J}^R_{\rm true}$. The top panel of the left column shows $\mathbb{J}^R_{\rm inf}$ (black) and $\mathbb{J}^R_{\rm true}$ (green) for a given spin at temperature $T/J=0.7$ and number of samples $M=1024$. The PLM appears to reconstruct the correct couplings, though zero couplings are always given a small non-zero inferred value. In the bottom panel of the left column of Fig. \ref{PL-Jor1}, both $\mathbb{J}^R_{\rm{inf}}$ and $\mathbb{J}^R_{\rm{true}}$ are sorted in decreasing order and plotted on top of each other. We can clearly see that $\mathbb{J}^R_{\rm inf}$ reproduces the expected step function. Even though the jump is smeared, the difference between the inferred couplings corresponding to the set of non-zero couplings and those corresponding to the set of zero couplings can be clearly appreciated. Similarly, the plots in the right column of Fig. \ref{PL-Jor1} show the results obtained for the case with bimodal disordered couplings, for the same working temperature and number of samples. In particular, note that the algorithm infers half positive and half negative couplings, as expected.

\begin{figure}
\centering
\includegraphics[width=1\linewidth]{Jor11_2D_l2_errJ_varT_varM}
\caption{Reconstruction error $\mbox{err}_J$, cf. Eq. (\ref{eq:errj}), plotted as a function of temperature (left) for three values of the number of samples $M$, and as a function of $M$ (right) for three values of the temperature in the ordered system, i.e., $J_{ij}=0,1$. The system size is $N=64$.}
\label{PL-err-Jor1}
\end{figure}

In order to analyze the effects of the number of samples and of the temperature regime, we plot in Fig. \ref{PL-err-Jor1} the reconstruction error, Eq. (\ref{err}), as a function of temperature for three different sample sizes, $M=64,128$ and $512$.
The error is seen to rise sharply at low temperature, incidentally, in the ordered case, for $T<T_c \sim 0.893$, which is the Kosterlitz-Thouless transition temperature of the 2D XY model \cite{Olsson92}. However, we can see that if only $M=64$ samples are considered, $\mbox{err}_J$ remains high independently of the working temperature. In the right plot of Fig. \ref{PL-err-Jor1}, $\mbox{err}_J$ is plotted as a function of $M$ for three different working temperatures, $T/J=0.4,0.7$ and $1.3$. As we expect, $\mbox{err}_J$ decreases as $M$ increases. This effect was also observed with mean-field inference techniques on the same model \cite{Tyagi15}.

To better understand the performance of the algorithms, in Fig. \ref{PL-varTP-Jor1} we show several True Positive (TP) curves obtained for various values of $M$ at three different temperatures $T$. When $M$ is large and/or the temperature is not too small, we are able to reconstruct correctly all the couplings present in the system (see bottom plots). The True Positive curve displays how often the inference method finds a true link of the original network, as a function of the index of the vector of reconstructed couplings $J_{ij}^{\rm inf}$ sorted by absolute value. The index $n_{(ij)}$ labels the corresponding spin pairs $(ij)$. The TP curve is obtained as follows: first the values $|J^{\rm inf}_{ij}|$ are sorted in descending order and the spin pairs $(ij)$ are ordered according to the sorting position of $|J^{\rm inf}_{ij}|$. Then a cycle over the ordered set of pairs $(ij)$, indexed by $n_{(ij)}$, is performed, comparing with the original network coupling $J^{\rm true}_{ij}$ and verifying whether it is zero or not. The True Positive curve is computed as
\begin{equation}
\mbox{TP}[n_{(ij)}]= \frac{\mbox{TP}\left[n_{(ij)}-1\right] \left(n_{(ij)}-1\right)+ 1 -\delta_{J^{\rm true}_{ij},0}}{n_{(ij)}}
\end{equation}
As long as $J^{\rm true}_{ij} \neq 0$, TP$=1$. As soon as the true coupling of a given $(ij)$ pair in the sorted list is zero, the TP curve departs from one. In our case, where the connectivity per spin of the original system is $c=4$ and there are $N=64$ spins, we know that we will have $256$ non-zero couplings. Hence, if the inverse problem is successful, we expect a steep decrease of the TP curve when $n_{(ij)}=256$ is overcome.
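The recursion above is simply a running average of hits along the sorted list of couplings; for reference, an equivalent vectorized form (here counting each unordered pair $i<j$ once) is:
\begin{verbatim}
import numpy as np

def true_positive_curve(J_inf, J_true):
    # TP[n]: fraction of the n largest |J_inf| couplings that are non-zero
    # in the true network (each unordered pair i < j counted once).
    iu = np.triu_indices_from(J_inf, k=1)
    order = np.argsort(np.abs(J_inf[iu]))[::-1]
    hits = (J_true[iu][order] != 0).astype(float)
    return np.cumsum(hits) / np.arange(1, hits.size + 1)
\end{verbatim}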
In Fig. \ref{PL-varTP-Jor1} it is shown that, almost independently of $T/J$, the TP score improves as $M$ increases. Results are plotted for three different temperatures, $T=0.4,1$ and $2.2$, with increasing number of samples, $M = 64, 128, 512$ and $1024$ (clockwise). We can clearly appreciate the effect of the temperature when the size of the data set is not very large: for small $M$, $T=0.4$ performs better. When $M$ is high enough (e.g., $M=1024$), instead, the TP curves do not appear to be strongly influenced by the temperature.

\begin{figure}[t!]
	\centering
	\includegraphics[width=1\linewidth]{Jor11_2D_l2_TPJR_varT_varM}
	\caption{TP curves for the 2D short-range ordered $XY$ model with $N=64$ spins at three different values of $T/J$, with $M$ increasing clockwise from top.}
	\label{PL-varTP-Jor1} 
\end{figure} 

\subsection{$XY$ model with complex-valued couplings}
For the complex $XY$ model we have to infer two coupling matrices simultaneously, $J^R_{i j}$ and $J^I_{i j}$. As before, a system of $N=64$ spins is considered on a 2D lattice. For the couplings we have considered both the ordered and the bimodal disordered case. In Fig. \ref{PL-Jor3}, a single row of the matrix $J$ (top) and the whole set of sorted couplings (bottom) are displayed for the ordered model (same legend as in Fig. \ref{PL-Jor1}), for the real part, $J^R$ (left column), and the imaginary part, $J^I$ (right column). 

\begin{figure}[t!]
	\centering
\includegraphics[width=1\linewidth]{Jor3_l2_JRJI_soJRJI_TPJRJI}
	\caption{Results for the ordered complex XY model with $N=64$ spins on a 2D lattice. Top: instances of single-site reconstruction for the real, $J^R$ (left column), and the imaginary, $J^I$ (right column), part of $J_{ij}$. Bottom: sorted values of $J^R$ (left) and $J^I$ (right).}
	\label{PL-Jor3}
\end{figure}
 
 \section{PLM with Decimation}
 \label{sec:res_dec}

\begin{figure}[t!]
 	\centering
 	\includegraphics[width=1\linewidth]{Jor1_dec_tPLF_varT_varM}
 	\caption{Tilted pseudolikelihood ${\cal L}_t$ plotted as a function of the number of decimated couplings. Top: ${\cal L}_t$ curves obtained for different values of $M$, plotted on top of each other; here $T=1.3$. The black line indicates the expected number of decimated couplings, $x^*=(N (N-1) - N c)/2=1888$. As $M$ increases, the maximum point of ${\cal L}_t$ approaches $x^*$. Bottom: ${\cal L}_t$ curves obtained for different values of $T$ with $M=2048$. With this value of $M$, no differences can be appreciated in the maximum points of the different ${\cal L}_t$ curves.}
 	\label{var-$t$PLF}
 \end{figure}

\begin{figure}[t!]
 \centering
 \includegraphics[width=1\linewidth]{Jor1_dec_tPLF_peak_statistics_varM_prob.eps}
 	\caption{Number of most likely decimated couplings, estimated by the maximum point of $\mathcal{L}_t$, as a function of the number of samples $M$. The maximum point of $\mathcal{L}_t$ tends toward $x^*$, the expected number of zero couplings in the system.} 
 	\label{PLF_peak_statistics}
 \end{figure}
 
 For the ordered real-valued XY model we show in Fig. \ref{var-$t$PLF}, top panel, the outcome of the progressive decimation on the tilted pseudolikelihood $\mathcal{L}_t$, Eq. \eqref{$t$PLF}: from a fully connected lattice down to an empty lattice. The figure shows the behaviour of $\mathcal{L}_t$ for three different data sizes $M$. A clear data-size dependence of the maximum point of $\mathcal{L}_t$, signalling the most likely value for decimation, is shown. For small $M$ the most likely number of couplings is overestimated, and as $M$ increases it tends to the true value, as displayed in Fig. \ref{PLF_peak_statistics}. In the bottom panel of Fig. \ref{var-$t$PLF} we display instead $\mathcal{L}_t$ curves obtained for three different values of $T$. Even though the values of $\mathcal{L}_t$ decrease with increasing temperature, the most likely number of decimated couplings appears to be quite independent of $T$ for $M=2048$ samples. In Fig.
In Fig.~\ref{fig:Lt_complex} we finally display the tilted pseudolikelihood for a 2D network with complex-valued ordered couplings, where the decimation of the real and imaginary coupling matrices proceeds in parallel, that is, when a real coupling is small enough to be decimated its imaginary part is decimated as well, and vice versa.
One can see that, although the errors for the real and the imaginary part differ in absolute value, they display a dip at the same point, to be compared with the maximum point of $\mathcal{L}_t$.

\begin{figure}[t!]
	\centering
	\includegraphics[width=1\linewidth]{Jor3_dec_tPLF_new}
	\caption{Tilted pseudolikelihood, ${\cal L}_t$, plotted together with the reconstruction errors for the $XY$ model with $N=64$ spins on a 2D lattice. These results refer to the case of ordered complex-valued couplings. The full (red) line indicates ${\cal L}_t$. The dashed (green) and the dotted (blue) lines show the reconstruction errors (Eq.~\eqref{eq:errj}) obtained for the real and the imaginary couplings, respectively. Both ${\rm err_{JR}}$ and ${\rm err_{JI}}$ have a minimum at $x^*$.}
	\label{fig:Lt_complex}
\end{figure}

\begin{figure}[t!]
	\centering
	\includegraphics[width=1\linewidth]{Jor1_dec_JR_soJR_TPJR}
	\caption{$XY$ model on a 2D lattice with $N=64$ sites and real-valued couplings. The graphs show the inferred (dashed black lines) and true couplings (full green lines) plotted on top of each other. The left and right columns refer to the cases of ordered and bimodal disordered couplings, respectively. Top figures: single-site reconstruction, i.e., one row of the matrix $J$. Bottom figures: couplings sorted in descending order.}
	\label{Jor1_dec}
\end{figure}

\begin{figure}[t!]
	\centering
	\includegraphics[width=1\linewidth]{Jor3_dec_JRJI_soJRJI_TPJRJI}
	\caption{$XY$ model on a 2D lattice with $N=64$ sites and ordered complex-valued couplings. The inferred and true couplings are plotted on top of each other. The left and right columns show the real and imaginary parts of the couplings, respectively. Top figures refer to a single-site reconstruction, i.e., one row of the matrix $J$. Bottom figures report the couplings sorted in descending order.}
	\label{Jor3_dec}
\end{figure}

\begin{figure}[t!]
	\centering
	\includegraphics[width=1\linewidth]{MF_PL_Jor1_2D_TPJR_varT}
	\caption{True Positive curves obtained with the three techniques: PLM with decimation, (blue) dotted line; PLM with $l_2$ regularization, (green) dashed line; and mean-field, (red) full line. These results refer to real-valued ordered couplings with $N=64$ spins on a 2D lattice. The temperature is $T=0.7$, while the four graphs refer to different sample sizes: $M$ increases clockwise.}
	\label{MF_PL_TP}
\end{figure}

\begin{figure}[t!]
	\centering
	\includegraphics[width=1\linewidth]{MF_PL_Jor1_2D_errJ_varT_varM}
	\caption{Variation of the reconstruction error, ${\rm err_J}$, with temperature, as obtained with the three different techniques (see Fig.~\ref{MF_PL_TP}) for four different sample sizes: clockwise from top, $M=512, 1024, 2048$ and $4096$.}
	\label{MF_PL_err}
\end{figure}
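As a reference for the comparison carried out below, we also sketch the other PLM variant considered in this work, namely the $l_2$-regularized pseudolikelihood fit of a single row of $J$ for the real-valued model. The sketch assumes the usual $XY$ form of the Hamiltonian, for which the conditional distribution of one spin given the others is von Mises with normalization $2\pi I_0(\beta |h_i|)$ and local field $h_i=\sum_j J_{ij}e^{\imath \theta_j}$; the inverse-temperature convention, the regularization strength and all function names are illustrative assumptions and do not reproduce the exact implementation used for the figures.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.special import i0e   # exponentially scaled I_0, numerically stable

def neg_log_pl_site(Ji, thetas, i, beta=1.0, lam=0.01):
    """Minus log-pseudolikelihood of site i plus an l2 penalty (sketch).
    thetas: (M, N) array of sampled angles; Ji: candidate row J_{i,:}."""
    J = Ji.copy(); J[i] = 0.0                 # no self-coupling
    z = np.exp(1j * thetas)                   # phasors e^{i theta}
    h = z @ J                                 # local field h_i for each sample (complex)
    energy = np.real(np.conj(z[:, i]) * h)    # sum_j J_ij cos(theta_i - theta_j)
    log_Z = np.log(2 * np.pi) + np.log(i0e(beta * np.abs(h))) + beta * np.abs(h)
    return np.sum(log_Z - beta * energy) + lam * np.sum(J ** 2)

def plm_l2(thetas, beta=1.0, lam=0.01):
    """Independent fit of each row of J; the two estimates of J_ij are averaged."""
    M, N = thetas.shape
    J = np.zeros((N, N))
    for i in range(N):
        res = minimize(neg_log_pl_site, np.zeros(N), args=(thetas, i, beta, lam))
        J[i] = res.x
        J[i, i] = 0.0
    return 0.5 * (J + J.T)
\end{verbatim}

In the decimation variant, the $\lambda \sum_j J_{ij}^2$ term is dropped and the couplings flagged as decimated are instead constrained to stay at zero during the fit.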
Once the most likely network has been identified through the decimation procedure, we perform the same analysis displayed in Fig.~\ref{Jor1_dec} for ordered and then for quenched disordered real-valued couplings, and in Fig.~\ref{Jor3_dec} for complex-valued ordered couplings. Compared to the results shown in Sec.~\ref{sec:res_reg},
the PLM with decimation leads to noticeably cleaner results. In Figs.~\ref{MF_PL_err} and \ref{MF_PL_TP} we compare the performance of the PLM with decimation with that of the PLM with $l_2$-regularization. These two techniques are also compared with a mean-field technique previously implemented on the same $XY$ systems~\cite{Tyagi15}.

As far as the network of connecting links is concerned, in Fig.~\ref{MF_PL_TP} we compare the TP curves obtained with the three techniques. The results refer to the case of ordered, real-valued couplings, but similar behaviours were obtained for the other cases analysed.
The four graphs correspond to different sample sizes, with $M$ increasing clockwise. When $M$ is high enough, all techniques reproduce the true network.
For lower values of $M$, however, the PLM with $l_2$ regularization and the PLM with decimation clearly outperform the previous mean-field technique.
In particular, for $M=256$ the PLM techniques still reproduce the original network, while the mean-field method fails to find more than half of the couplings.
When $M=128$, the network is clearly reconstructed only by the PLM with decimation, while the PLM with $l_2$ regularization underestimates the couplings.
Furthermore, we notice that the PLM with decimation is able to clearly infer the network of interactions even when $M=N$, signalling that it could also be considered in the under-sampling regime $M<N$.

In Fig.~\ref{MF_PL_err} we compare the temperature behaviour of the reconstruction error.
It can be observed that, for all temperatures and for all sample sizes, the reconstruction error ${\rm err_J}$ (plotted here in log scale) obtained with the PLM with decimation is always smaller than
the one obtained with the other techniques. The temperature behaviour of ${\rm err_J}$ agrees with the one already observed for Ising spins in \cite{Nguyen12b} and for $XY$ spins in \cite{Tyagi15} with a mean-field approach: ${\rm err_J}$ displays a minimum around $T\simeq 1$ and then increases steeply at lower $T$; however,
the error obtained with the PLM with decimation is several times smaller than the error estimated by the other methods.


\section{Conclusions}
\label{sec:conc}

Different statistical inference methods have been applied to the inverse problem of the $XY$ model.
After a short review of techniques based on pseudo-likelihood and of their formal generalization to the model, we have tested their performance against data generated by means of Monte Carlo numerical simulations of known instances
with diluted, sparse interactions.

The main outcome is that the best performance is obtained by means of the pseudo-likelihood method combined with decimation. By setting to zero (i.e., decimating) very weak bonds, this technique turns out to be very precise for problems whose true underlying interaction network is sparse, i.e., when the number of couplings per variable does not scale with the number of variables.
The PLM + decimation method is compared to the PLM + regularization method, with $\ell_2$ regularization, and to a mean-field-based method.
The quality of the network reconstruction is analyzed by looking at the overall sorted couplings and at the single-site couplings, comparing them with the real network, and at the true positive curves for all three approaches. In the PLM + decimation method, moreover, the identification of the number of decimated bonds at which the tilted pseudo-likelihood is maximum allows for a precise estimate of the total number of bonds. Concerning this technique, it is also shown that the network with the most likely number of bonds is also the one with the lowest reconstruction error, where not only the presence of a bond but also its value is estimated.

The behavior of the inference quality as a function of temperature and of the size of the data set is also investigated, basically confirming the low-$T$ behavior hinted at by Nguyen and Berg \cite{Nguyen12b} for the Ising model. In temperature, in particular, the reconstruction error curve displays a minimum at a low temperature, close to the critical point in those cases in which a critical behavior occurs, and a sharp increase as the temperature goes to zero. The decimation method, once again, appears to lower this minimum of the reconstruction error by almost an order of magnitude with respect to the other methods.

The techniques displayed and the results obtained in this work can be of use in any of the many systems whose theoretical representation is given by Eq.~\eqref{eq:HXY} or Eq.~\eqref{eq:h_im}, some of which are recalled in Sec.~\ref{sec:model}. In particular, a possible application is the field of light-wave propagation through random media and the corresponding problem of the reconstruction of an object seen through an opaque medium or a disordered optical fiber \cite{Vellekoop07,Vellekoop08a,Vellekoop08b,Popoff10a,Akbulut11,Popoff11,Yilmaz13,Riboli14}.

answers: It outperforms mean-field methods and the PLM with $l_2$ regularization in terms of reconstruction error and true positive rate. |
|
|
inputWhat are some reasons for the lack of data sharing in archaeobotany?contextSowing the Seeds of Future Research: Data Sharing, Citation and Reuse in Archaeobotany\nReading: Sowing the Seeds of Future Research: Data Sharing, Citation and Reuse in Archaeobotany\nUniversity of Oxford, GB\nLisa is a post-doctoral research fellow at All Souls College, University of Oxford. Her publications include the co-authored volume The Rural Economy of Roman Britain (Britannia Monographs, 2017). Her research interests are focussed on agricultural practices in the later prehistoric and Roman period and the utilisation of archaeobotanical data to investigate human-plant relationships.\nThe practices of data sharing, data citation and data reuse are all crucial aspects of the reproducibility of archaeological research. This article builds on the small number of studies reviewing data sharing and citation practices in archaeology, focussing on the data-rich sub-discipline of archaeobotany. Archaeobotany is a sub-discipline built on the time-intensive collection of data on archaeological plant remains, in order to investigate crop choice, crop husbandry, diet, vegetation and a wide range of other past human-plant relationships. Within archaeobotany, the level and form of data sharing is currently unknown. This article first reviews the form of data shared and the method of data sharing in 239 articles across 16 journals which present primary plant macrofossil studies. Second, it assesses data-citation in meta-analysis studies in 107 articles across 20 journals. Third, it assesses data reuse practices in archaeobotany, before exploring how these research practices can be improved to benefit the rigour and reuse of archaeobotanical research.\nKeywords: Archaeobotany, Data reuse, Data sharing, Open science\nHow to Cite: Lodwick, L., 2019. Sowing the Seeds of Future Research: Data Sharing, Citation and Reuse in Archaeobotany. Open Quaternary, 5(1), p.7. DOI: http://doi.org/10.5334/oq.62\nAccepted on 29 May 2019 Submitted on 25 Mar 2019\nArchaeology is a discipline built on the production and analysis of quantitative data pertaining to past human behaviour. As each archaeological deposit is a unique occurrence, ensuring that the data resulting from excavation and analysis are preserved and accessible is crucially important. Currently, there is a general perception of a low level of data sharing and reuse. Such a low level of data availability would prevent the assessment of research findings and the reuse of data in meta-analysis (Kansa & Kansa 2013; Moore & Richards 2015). As observed across scientific disciplines, there is a major problem in the reproduction of scientific findings, commonly known as the ‘replication crisis’ (Costello et al. 2013). A range of intersecting debates contribute to this, including access to academic findings (open access), open data, access to software and access to methodologies, which can be broadly grouped as open science practices. Without these, the way that scientific findings can be verified and built upon is impaired. Questions of reproducibility have been raised in recent years in archaeology, with considerations of a range of practices which can improve the reproducibility of findings, and a recent call for the application of open science principles to archaeology (Marwick et al. 2017). Discussion has so far focussed on access to grey literature (Evans 2015), data sharing (Atici et al. 
2013), data citation practices (Marwick & Pilaar Birch 2018) and computational reproducibility (Marwick 2017), with a focus on lithics, zooarchaeological evidence, and archaeological site reports.\nQuantitative assessments of current levels of data sharing, data citation and reuse remain limited in archaeology. The focus of evaluation has been on the uptake of large-scale digital archives for the preservation and dissemination of digital data, such as the Archaeology Data Service (ADS), utilised by developer-led and research projects, and recommended for use by many research funders in the UK (Richards 2002; Wright and Richards 2018). Much less focus has been paid to the data-sharing practices of individuals or small-groups of university-based researchers who may be disseminating their research largely through journal articles. Recent work on the availability of data on lithics assemblages found a low level of data sharing (Marwick & Pilaar Birch 2018) and there are perceptions of low levels of data reuse (Huggett 2018; Kintigh et al. 2018). Within zooarchaeology numerous studies have explored issues of data sharing and reuse (Kansa & Kansa 2013, 2014), and the sub-discipline is seen as one of the most advanced areas of archaeology in regards to open science (Cooper & Green 2016: 273). Beyond zooarchaeology, however, explicit discussion has remained limited.\nThis paper assesses data sharing and reuse practices in archaeology through the case study of archaeobotany – a long established sub-discipline within archaeology which has well-established principles of data recording. Archaeobotany is an interesting case study for data sharing in archaeology as it straddles the division of archaeology between scientific and more traditional techniques. Quantitative data on archaeological plant remains are also of interest to a range of other fields, including ecology, environmental studies, biology and earth sciences. The key issues of data sharing and data reuse (Atici et al. 2013) have been touched upon in archaeobotany over the past decade within broader discussions on data quality (Van der Veen, Livarda & Hill 2007; Van der Veen, Hill & Livarda 2013). These earlier studies focussed on the quality and availability of archaeobotanical data from developer-funded excavations in Britain and Cultural Resource Management in North America (Vanderwarker et al. 2016: 156). However, no discussion of data-sharing and reuse in academic archaeobotany occurred. A recent review of digital methods in archaeobotany is the notable exception, with discussions of the challenges and methods of data sharing (Warinner & d’Alpoim Guedes 2014).\nCurrently, we have no evidence for the levels of data sharing and reuse within archaeobotany. This article provides the first quantitative assessment of 1) data publication in recent archaeobotanical journal articles 2) data citation in recent archaeobotanical meta-analysis 3) the reuse of archaeobotanical datasets, in order to assess whether practices need to change and how such changes can take place.\n2. Data Publication and Re-use Practices in Archaeobotany\n2.1. History of data production and publication\nArchaeobotanical data falls within the category of observational data in archaeology (Marwick & Pilaar Birch 2018). Archaeobotanical data is considered as the quantitative assessment of plant macrofossils present within a sample from a discrete archaeological context, which can include species identification, plant part, levels of identification (cf. 
– confer or “compares to”), and a range of quantification methods including count, minimum number of individuals, levels of abundance and weight (Popper 1988). Archaeobotanical data is usually entered into a two-way data table organised by sample number. Alongside the counts of individual taxa, other information is also necessary to interpret archaeobotanical data, including sample volume, flot volume, charcoal volume, flot weight, level of preservation, sample number, context number, feature number, feature type and period. Beyond taxonomic identifications, a range of other types of data are increasingly gathered on individual plant macrofossils (morphometric measurements, isotopic values, aDNA).\nArchaeobotanical training places a strong emphasis on recording data on a sample-by-sample basis (Jacomet & Kreuz 1999: 138–139; Jones & Charles 2009; Pearsall 2016: 97–107). Time-consuming methodologies utilised in the pursuit of accurate sample-level data recording include sub-sampling and splitting samples into size fractions and counting a statistically useful number of items per sample (Van der Veen & Fieller 1982). The creation of sample-level data means analysis is often undertaken on the basis of individual samples, for instance the assessment of crop-processing stages and weed ecological evidence for crop husbandry practices. The analysis of sample level data also enables archaeobotanical finds to be integrated alongside contextual evidence from archaeological sites. Requirements for the publication of this data are in place in some archaeological guidelines, for instance current Historic England guidelines for archaeological practice in England (Campbell, Moffett & Straker 2011: 8).\nFrom the earliest archaeobotanical reports, such as Reid’s work at Roman Silchester, the sample from which plant remains were recovered was noted (Lodwick 2017a), but often results were reported as a list of taxa, or long catalogues of detailed botanical descriptions with seed counts, such as Knörzer’s work at Neuss (Knörzer 1970). Early systematic archaeobotanical reports displayed data within in-text tables, for example Jones’s work at Ashville (Jones 1978) and the two-way data table has been the standard form of reporting archaeobotanical data ever since. Often data tables are presented within book chapters or appendices, but the financial, space and time constraints of book publishing are limiting. Furthermore, there is the perception that specialist data was not necessary for publication (Barker 2001). Hence, alternative methods of the dissemination of specialist archaeological data were pursued in the later twentieth century.\nFrom the 1980s, archaeobotanical data tables were often consigned to microfiche following a Council for British Archaeology and Department of Environment report (Moore & Richards 2015: 31), with the example of the excavation of Roman Colchester where the contents of all archaeobotanical samples were available on microfiche (Murphy 1992). An alternative in the 2000s was providing data tables on CD Rom as seen, for instance, in the CD accompanying the study of a Roman farmstead in the Upper Thames Valley (Robinson 2007) or the One Poultry excavations in London (Hill and Rowsome 2011). 
Meanwhile, the inception of the Archaeology Data Service, a digital repository for heritage data, in 1996 meant archaeological datasets were increasingly digitally archived, for instance the data from the Channel Tunnel Rail Link Project (Foreman 2018) or a recent large-scale research excavation at Silchester (University of Reading 2018). In these cases, archaeobotanical data is available to download as a .csv file.\nWhilst the data publication strategy of large excavations was shifting, the availability of data from post-excavation assessment reports has remained challenging. So-called ‘grey literature’ results from the initial evaluation stage of developer-funded investigations and accompanying post-excavation assessment often contain a semi-quantitative evaluation of archaeobotanical samples on a scale of abundance. Whilst paper reports were initially deposited with county Historic Environment Records, a process of digitisation focussing on the Roman period has meant many pdfs are now available through the ADS (Allen et al. 2018), whilst born-digital reports are now deposited through OASIS (Online AccesS to the Index of archaeological investigationS), as part of the reporting process (Evans 2015), althought the extent to which specialist appendices are included is variable.\nThese varying ‘publication’ strategies means archaeobotanical data is often available somewhere for recent developer-funded excavations and large-scale developer-funded excavations, even if much of this data is as a printed table or .pdf file (Evans 2015; Evans and Moore 2014). However, academic journals are typically perceived as the most high-status publication venue for archaeobotanical data, and a crucial publication venue for academics in order to comply with institutional requirements and the norms of career progression. Aside from the problem of access to pay-walled journals by those without institutional subscriptions to all journals, the publication of primary data alongside research articles faces various problems, from the outright lack of inclusion of data, to problematic curation of supplementary data and a lack of peer review of data (Costello et al. 2013; Warinner and d’Alpoim Guedes 2014: 155; Whitlock, 2011). The extent of these problems for archaeobotany is currently unknown. Given the growth in archaeobotanical data production as methodologies are introduced into many new regions and periods over the last decade, it is vital that we know whether the mass of new data being produced is made available and is being reused.\nRecent important advances within archaeobotanical data sharing have focussed on the construction of the ARBODAT database, developed by Angela Kreuz at the Kommission für Archäologische Landesforschung in Hessen. The database is used by a range of researchers in Germany, the Czech Republic, France and England (Kreuz & Schäfer 2002). Data sharing enabled by the use of this database has facilitated research on Neolithic agriculture in Austria, Bulgaria and Germany (Kreuz et al. 2005), and Bronze Age agriculture in Europe (Stika and Heiss 2012). The use of this database makes data integration between specialists easier due to the shared data structure and metadata description, but often the primary archaeobotanical data is not made publicly available.\n2.2. Meta-analysis in archaeobotany\nBeyond the need to preserve information, a key reason for the formal sharing of archaeobotanical data is in its reuse to facilitate subsequent research. 
There has been a long-standing concern within archaeobotany with the need to aggregate datasets and identify temporal and spatial patterns. The palaeobotanist Clement Reid maintained his own database of Quaternary plant records in the late nineteenth century (Reid 1899), which formed the foundation of Godwin’s Quaternary database (Godwin 1975). Mid-twentieth century studies of prehistoric plant use compiled lists of archaeobotanical materials incorporating full references and the location of the archive (Jessen & Helbaek 1944). The International Work Group for Palaeoethnobotany was itself founded in 1968 in part with the aim to compile archaeobotanical data, first realised through the publication of Progress in Old World Palaeoethnobotany (Van Zeist, Wasylikowa & Behre 1991), and subsequently through the publication of annual lists of new records of cultivated plants (Kroll 1997).\nTo take England as an example, regional reviews produced by state heritage authorities have provided catalogues of archaeobotanical datasets in particular time periods and regions (e.g. Murphy 1998). When one archaeobotanist has undertaken the majority of study within a region, pieces of synthesis within books have provided a relatively comprehensive review, for instance in the Thames Valley, UK (Lambrick & Robinson 2009). Over the last decade regional synthesis has occurred within several funded reviews which produced catalogues of sites with archaeobotanical data (Lodwick 2014; McKerracher 2018; Parks 2012) and a series of funded projects in France have enabled regional synthesis (Lepetz & Zech-Matterne 2017). However, many of these reviews are not accompanied by an available underlying database, and draw upon reports which are themselves hard to access.\nThrough the 1990s and 2000s, a series of databases were constructed in order to collate data from sites in a particular region and facilitate synthetic research. However, these databases have all placed the role of data archiving onto later projects specifically funded to collate data, rather than sourcing datasets at the time of publication. Such a model is unsustainable, and is unlikely to result in all available datasets being compiled. The Archaeobotanical Computer Database (ABCD), published in 1996 in the first issue of Internet Archaeology, contained much of the archaeobotanical data from Britain available at the time of publication, largely at the level of individual samples. The database was compiled between 1989 and 1994 and is still accessible through the accompanying online journal publication (Tomlinson & Hall 1996). The ABCD made major contributions to recent reviews of the Roman and Medieval periods (Van der Veen, Livarda & Hill 2008; Van der Veen, Hill & Livarda 2013). However, the database could only be centrally updated, with the online resource remaining a static version, lacking much of the new data produced subsequent to the implementation of PPG16 in 1990. The ADEMNES database, created through a research project undertaken at the Universities of Freiburg and Tübingen, contains data from 533 eastern Mediterranean and Near Eastern sites (Riehl & Kümmel 2005). Kroll has maintained the Archaeobotanical Literature Database to accompany the Vegetation History and Archaeobotany articles (Kroll 2005) now accessible as a database (Kirleis & Schmültz 2018). Numerous other databases have collated archaeobotanical studies, including the COMPAG project (Fuller et al. 
2015), the Cultural Evolution of Neolithic Europe project (Colledge 2016), RADAR in the Netherlands (van Haaster and Brinkkemper 1995), BRAIN Botanical Records of Archaeobotany Italian Network (Mercuri et al. 2015) and CZAD – Archaeobotanical database of Czech Republic (CZAD 2019).\nThe majority of databases have a restricted regional coverage, whilst research-project driven period-specific databases provide overlapping content. Whilst there are a wide range of archaeobotanical databases available, few contain primary datasets (other than the ABCD) which can be downloaded as .csv files. Data which is most commonly available are bibliographic references per site, with some indications of mode of preservation, quantity of archaeobotanical data, and sometimes taxa present. The databases do not inter-relate to each other, and function primarily as bibliographic sources enabling researchers to find comparative sites or to identify published datasets which need to be re-tabulated prior to meta-analysis. The IWGP website curates a list of resources, but otherwise the resources are often disseminated through the archaeobotany jiscmail list.\nBeyond the aim of cataloguing archaeobotanical data within a region and period, meta-analysis is often used in archaeobotany to identify spatial and chronological trends in a range of past human activities, for instance crop choice, crop husbandry practices, plant food consumption, the trade in luxury foods or the use of plants in ritual. Meta-analysis can be undertaken on the basis of simple presence/absence data per site, but in order for such analysis to be rigorous and comparable, sample-level data must be utilised. For instance, sample-level data is required for meta-studies, in order to identify high-quality samples of unmixed crops for weed ecology analysis (Bogaard 2004), to assess the importance of context in the evaluation of wild plant foods (Wallace et al. 2019), or to use volumetric measurements as a proxy for scale (Lodwick 2017b). The reuse of archaeobotanical data also extends to include datasets used as “controls” in commonly used forms of statistical analysis, for instance Jones’s weed data from Amorgos, Greece, which is utilised as a control group in discriminant analysis of crop-processing stage (Jones 1984), and ethnographic observations of crop items in different crop-processing stages (Jones 1990).\n2.3. Open data principles and solutions\nDebates over issues of data publication and meta-analysis have been on-going across scientific disciplines over the last decade (Editors 2009), and have been summarised within principles of open science, as recently set out in relation to archaeology (Marwick et al. 2017). Open Data is one of the three core principles for promoting transparency in social science (Miguel et al. 2014). The FAIR principles, developed by representatives from academia, industry, funding agencies, industry and publishers, provide four principles which data sharing should meet for use by both humans and machines – Findability, Accessibility, Interoperability, and Reusability (Wilkinson et al. 2016). A recent report assessing the adoption and impact of FAIR principles across academia in the UK included archaeology as a case study (Allen and Hartland 2018: 46). 
It reported how the ADS was often used to archive data, but that “The journal itself provides the “story” about the data, the layer that describes what the data is, how it was collected and what the author thinks it means.” The report also raises the problem that smaller projects may not have the funding to utilise the ADS, meaning that other repositories are utilised. Increasingly, archaeological data is made available through a wide range of data repositories (OSF, Mendeley Data, Zenodo, Open Context), university data repositories (e.g. ORA-Data), or social networking sites for academics (Academia.edu, ResearchGate). More widely in archaeology, some have observed that archaeological data is rarely published (Kintigh et al. 2014), and recent reviews have reported low levels of data sharing (Huggett 2018; Marwick & Pilaar Birch 2018). A closely related issue is that of data reuse. Responsible reuse of primary data encourages the sharing of primary data (Atici et al. 2013), but levels of data reuse in archaeology are thought to remain low (Huggett 2018). Principles for responsible data citation in archaeology have recently been developed summarising how datasets should be cited (Marwick & Pilaar Birch 2018).\nIn order to assess the current status of data sharing, citation and data re-use in archaeobotany, a review was undertaken of the publication of primary data and the publication of meta-analysis in major archaeological journals over the last ten years, building on recent pilot studies within archaeology (Marwick & Pilaar Birch 2018). The review of academic journals provided a contrast to recent assessments of archaeobotanical data deriving from developer-funded archaeology (Lodwick 2017c; Van der Veen, Hill & Livarda 2013). Journal articles have been selected as the focus of this study as the provision of online supplementary materials in the majority of journals and the ability to insert hyperlinks to persistent identifiers (eg a DOI) to link to datasets available elsewhere should not limit the publication of data and references. Much archaeobotanical data is also published elsewhere, especially from projects not based in the university sector, that is commercial or community archaeology in the UK. Archaeobotanical datasets emanating from this research are more commonly published through monographs, county journal articles, and unpublished (or grey literature) reports, but these are beyond the scope of the current review.\nAll journal articles were included which represent the principle reporting of a new archaeobotanical assemblage. The selected journals fall within three groups. First, what is considered the specialist archaeobotanical journal (Vegetation History and Archaeobotany (VHA)). Second, archaeological science journals (Archaeological and Anthropological Sciences, Environmental Archaeology, The Holocene, Journal of Archaeological Science (JAS), Journal of Archaeological Science: Reports (JASR), Journal of Ethnobiology, Quaternary International, Journal of Wetland Archaeology), which can be considered as specialist sub-disciplinary journals which should be maintaining data-quality. Third, general archaeology journals (Antiquity, Journal of Field Archaeology, Oxford Journal of Archaeology, Journal of Anthropological Archaeology, Journal of World Prehistory). Finally, the broader cross-disciplinary journals PLoS One and Proceedings of the National Academy of Sciences (PNAS) were included. 
Published articles from the past ten years (2009–2018) have been analysed in order to assess the availability of plant macrofossil data. This ten-year period brackets the period where most archaeological journals have moved online and adopted supplementary materials.\nData citation in synthetic studies has been assessed in the same range of publications. The extent of data reuse ranges from the analysis of whole sample data to the presence/absence of individual crops. The location of a data citation has been assessed in the same range of publications, with the addition of journals where occasional research incorporating archaeobotanical data is featured (Britannia, Journal of Archaeological Research, Ethnobiology Letters, Medieval Archaeology, Proceedings of the Prehistoric Society, World Archaeology). The underlying dataset for the analysis is available in Lodwick 2019.\n4.1. Primary data sharing\nHere, the location of primary archaeobotanical data, that is sample level counts of macroscopic plant remains, was assessed for 239 journal articles across 16 journals (Lodwick 2019 Table 1). Figure 1 shows the results grouped by journal. Overall, only 56% of articles shared their primary data. In, Antiquity, JAS, JASR, PLOS One, Quaternary International and VHA, the highest proportion of publications did not include their primary data, that is to say that the sample-by-sample counts of plant macrofossils was not available. This level of data is comparable to the findings of other pilot studies in archaeology. Marwick and Pilaar Birch found a data sharing rate of 53% from 48 articles published in Journal of Archaeological Science in Feb – May 2017 (Marwick & Pilaar Birch 2018: 7), and confirm previous assertions that data is often withheld in archaeology (Kansa 2012: 499). This is better than some disciplines, with a 9% data sharing rate on publication found across high impact journal science publications (n = 500) (Alsheikh-Ali et al. 2011) and 13% in biology, chemistry, mathematics and physics (n = 4370) (Womack 2015), yet still indicates that nearly half of articles did not include primary data. Primary archaeobotanical data is more likely to be shared in archaeobotanical and archaeological science journals than general archaeology journals. However, within the primary archaeobotanical journal, VHA, 51% of articles do not include their primary data (Figure 1).\nChart showing the location of primary archaeobotanical data by journal in primary archaeobotanical data publications.\nWhere primary data was not shared, the data which was available ranged from summary statistics, typically counts or frequencies, reported either by site, site phase, or feature group. Figure 2 summarises these results by year, showing that there is a gradient within articles not sharing their full ‘raw’ data, from those only provided sample counts on one aspect of the archaeobotanical assemblage, to those only presenting data graphically or within discussion. Beyond full data, the most common form of data shared is either summary counts per site or summary counts per feature or phase. Whilst this data does enable some level of reuse, the results of any sample-level data analysis presented within an article cannot be verified, and the data cannot be reused for crop-processing or weed ecology analysis which requires sample level data. 
Furthermore, such data would have been collected on a sample-by-sample basis, but this information is lost from the resulting publication.\nChart showing the form of archaeobotanical data shared by year in primary archaeobotanical data publications.\nThe forms in which data are made available vary across journals. The sharing of primary data within an article remains the most common data sharing form in archaeobotany (Figure 1). Data tables in text require manual handling to extract data, in journals such as VHA, whilst in other journals in-text tables can be downloaded as .csv files. These however would not be citable as a separate dataset. Supplementary datasets are the third most common form of data sharing. Indeed, the use of electronic supplementary material has been advocated recently for by some journals, such as the Journal of Archaeological Science (Torrence, Martinón-Torres & Rehren 2015). Microsoft Excel spreadsheets are the most common form of supplementary data, followed by .pdfs and then word documents (Figure 1). Both .xlsx and .docx are proprietary file formats, and not recommended for long term archiving or open science principles. There is no indication of improvement over the last decade in the form of data sharing. In 2018, 50% of articles did not share their primary data, and where the data was shared, it was in proprietary forms (.docx, .xlsx) or those that do not easily facilitate data reuse (.pdf) (Figure 3).\nChart showing the location of archaeobotanical data from 2009–2018 in primary archaeobotanical data publications.\nJust one of the articles included in this review incorporated a dataset archived in a repository (Farahani 2018), in contrast to the substantial growth in data repositories across academic disciplines (Marcial & Hemminger 2010). Other examples provide the underlying data for monograph publications, such as that of the archaeobotanical data from Gordion, Turkey (Marston 2017a, 2017b), Silchester, UK (Lodwick 2018; University of Reading 2018) and Vaihingen, Germany (Bogaard 2011a; Bogaard, 2011b).\nSeveral of the journals that have been assessed have research data policies. In the case of Vegetation History and Archaeobotany, sufficient papers have been surveyed to assess the impact of the research data policy on the availability of data. Figure 4 show the proportion of data sharing formats through time just for VHA (note the small sample size). The introduction of a research data policy in 2016 encouraging data sharing in repositories has not resulted in any datasets being shared in that format. Of the 10 articles published in PLOS One after the introduction of a clear research data policy in 2014, 4 did not contain primary data. However, elsewhere, journals with no research data policy, such as Antiquity, has one of the lower levels of data sharing (Figure 1).\nChart showing the location of primary archaeobotanical data in Vegetation History and Archaeobotany.\nThere are various reasons for why a primary dataset may be lacking. The option of providing supplementary datasets has been available in many of the journals here since before the start of the surveyed period (e.g. Vegetation History and Archaeobotany in 2004), and so cannot be a reason for the absence of data publication in this journal while it may be a reason in other journals. 
Reasons suggested for a lack of data sharing within archaeology include technological limitations, and resistance amongst some archaeologists to making their data available due to cautions of exposing data to scrutiny, lost opportunities of analysis before others use it and loss of ‘capital’ of data (Moore & Richards 2015: 34–35). Furthermore, control over how data tables is presented (taxa ordering, summary data presented) may also contribute to the preferential publishing of data within journal articles. Another factor to consider is the emphasis on the creation of new data through archaeological research (Huvila 2016). The creation of a new archaeobotanical dataset through primary analysis is a key form of training in archaeobotany, and the perception of the value of the reuse of other previously published archaeobotanical journals may be low, hence not encouraging the sharing of well-documented datasets. Excellent exams of data reuse have resulted in influential studies (Bogaard 2004; Riehl 2008; Wallace et al. 2019), and would hopefully encourage further data sharing in the future.\nGiven that there are numerous examples of meta-analysis which do take place in archaeobotany, it seems likely that the prevalent form of data sharing is through informal data sharing between individual specialists. However, this does not improve access to data in the long term, and is inefficient and time consuming, with large potential for data errors (Kansa & Kansa 2013), and relies on personal networks, which are likely to exclude some researchers. The absence of primary data in many archaeobotanical publications thus inhibits the verification of patterns observed within a dataset, and strongly limits the re-use potential of a dataset.\n4.2. Data citation\nOne of the common arguments for increasing data sharing is an associated increase in the citation of the articles which have data available. Here, the data citation practices of meta-analyses of plant macrofossil data undertaken over the last decade have been reviewed. 20 journals were consulted, including a wider range of period-specific journals, and 107 articles were assessed (Lodwick 2019 Table 2). Data citation was assessed as ‘in text’ or ‘in table’ to refer to when the citation and the bibliographic reference were within the article, as ‘in supplementary data’ when the citation and reference were within the supplementary materials, and as ‘no citation’ when no citation and reference was provided.\n21% of articles (n = 22) did not contain any citations to the underlying studies. 16% (n = 17) contained citations within supplementary data files. 50% of articles (n = 53) contained a citation within a table within the main article, and 14% (n = 15) contained citations within the main text. For the 21% of articles without data citations, the results of these studies could not be reproduced without consulting individual authors. The papers supplying the underlying data also received no credit for producing these datasets. Where articles contain citations within the main article (in text or table), full credit is provided to the underlying studies, a citation link is created through systems such as google scholar, and the study can be easily built upon in the future. 
Where the citation is provided within supplementary data, the original studies do receive attribution, but are not linked to so easily.\nThrough time, there is a steady decrease in the proportion of studies without citations to the underlying data, whereby of the 17 meta-analysis articles published in 2018, only one had no data citations. In comparison, in 2009, 3 out of 8 meta-analysis articles contained no data citation (Figure 6). Overall this is a more positive outlook on the reuse of published data, but the consistent presence of articles lacking data citation indicates that improvements are needed. Reasons for a lack of data citation may include restrictions on word counts imposed by journals, a lack of technical knowledge in making large databases available, or the wish to hold on to a dataset to optimise usage. Considering the type of journal (Figure 5), levels of data citation are worse in general archaeology journals, with sub-disciplinary journals showing slightly better levels of data citation. In particular VHA has a lack of consistency in where data citations are located.\nChart showing the location of data citations in meta-analysis journal articles by journal type.\nChart showing the location of data citations in meta-analysis journal articles from 2009–2018.\n4.3. Reuse of archived archaeobotanical datasets\nThe majority of data citations assessed in the previous section are to articles or book chapters rather than data-sets. The ADS currently hosts 66 data archives which have been tagged as containing plant macro data, deriving mainly from developer-funded excavations but also some research excavations. However, in some of these the plant macro data is contained within a pdf. As, the archiving of archaeobotanical datasets in data repositories is still at an early stage, the reuse of these datasets is assessed here on a case-by-case basis. The archaeobotanical dataset from the Neolithic site of Vaihingen, Germany (Bogaard 2011b) has not been cited on google scholar. Metrics are provided through the ADS, showing this dataset has been downloaded 56 times with 477 individual visits (as of 25/2/19). The archaeobotanical dataset from Gordion by Marston has no citations on Google Scholar (Marston 2017b), neither does the Giza botanical database (Malleson & Miracle 2018), but these are both very recently archived datasets. In contrast, the Roman Rural Settlement Project dataset, which includes site-level archaeobotanical data, has received greater levels of use, with 12 citations in Google Scholar, over 40,000 file downloads, and over 35,000 visits (Allen et al. 2018) and the archaeobotanical computer database (Tomlinson & Hall 1996) has been cited 44 times, and is the major dataset underpinning other highly-cited studies (Van der Veen, Livarda & Hill 2008; Van der Veen, Hill & Livarda 2013). Whilst there is clearly precedence for the reuse of archaeobotanical databases, current data citation practices within archaeobotany do not yet appear to be formally citing individual datasets, meaning an assessment of the reuse of archived archaeobotanical datasets is challenging.\n5. Steps Forward\nThis review of data sharing, citation, and reuse practices in archaeobotany has found medium levels of data sharing, good levels of data citation, but so far limited levels of reuse of archived data sets. This picture is similar across archaeology, in part attributed to the status of archaeology as a small-science, where data-sharing takes place ad-hoc (Marwick & Pilaar Birch 2018). 
Here, recommendations are discussed for improving these data practices within archaeobotany, of applicability more widely in archaeology.\nClearly an important step is improving the sharing of plant macrofossil data. Given the reasonable small size of most archaeobotanical datasets (a .csv file < 1mb), and a lack of ethical conflicts, there seems to be few reasons why the majority of archaeobotanical data couldn’t be shared. In the case of developer-funded derived data, issues of commercial confidentiality could limit the sharing of data. A key stage is establishing why levels of data sharing are not higher. Issues within archaeobotany may include the conflict between having to publish results within excavation monographs, which may take some time to be published, and have limited visibility due to high purchase costs and no digital access, and the need to publish journal articles for career progression within academia. The production of an archaeobotanical dataset is very time-consuming, and interim publication on notable aspects of an assemblage may be considered as a necessary publication strategy. More broadly, one important aspect is issues of equity in access to digital archiving resources (Wright & Richards 2018), such as differential access to funds, training and knowledge. A recent study in Sweden found that we need to know concerns, needs, and wishes of archaeologists in order to improve preservation of archaeological data (Huvila 2016), especially when control of ones data may be linked to perceptions of job security. In order to make improvements in data sharing and reuse across archaeology, we need improved training in data sharing and the reuse of data in higher education (Touchon & McCoy 2016; Cook et al. 2018), improved training in data management (Faniel et al. 2018), and crucially, the necessary software skills to make the reuse of archived datasets attainable (Kansa & Kansa 2014: 91). Examples of good practice in archaeobotany are the Vaihingen and Gordion datasets which demonstrate how datasets can be archived in data repositories to accompany a monograph (Bogaard 2011b; Marston 2017b), whilst Farahani (2018) provides an excellent example of a journal article, where the primary data is supplied as a .csv in a cited data repository along with the R script for the analysis.\nIn tandem with the need to encourage authors to share their data, is the need for journals to create and implement research data policies. Given the existence of research data policies in many of the journals included here, this reflects other findings of the poor enforcement of data policies by journals (Marwick & Pilaar Birch 2018), supporting arguments that journals should not be relied upon to make data accessible, and data should instead by deposited in digital repositries. In order to implement change in data sharing, there is a role to play for learned societies and academic organisation in lobbying funding bodies, prioritising data sharing in research projects. A key step is through journal editorial boards, and the enforcement of any pre-existing research data policies (Nosek et al. 2015). RevianswersTechnological limitations, resistance to exposing data to scrutiny, and desire to hold onto data for personal use.lengthdatasetmultifieldqa_enlanguageenall_classes_idaeb6cb26b11fc386727a529761d9d233ec7ba8dea9800b0f |
|
|
inputWhat was the club known as before being officially renamed FC Urartu?contextFootball Club Urartu (, translated Futbolayin Akumb Urartu), commonly known as Urartu, is an Armenian professional football team based in the capital Yerevan that currently plays in the Armenian Premier League. The club won the Armenian Cup three times, in 1992, 2007 and 2016. In 2013–2014, they won the Armenian Premier League for the first time in their history.\n\nIn early 2016, the Russia-based Armenian businessman Dzhevan Cheloyants became a co-owner of the club after purchasing the major part of the club shares. The club was known as FC Banants until 1 August 2019, when it was officially renamed FC Urartu.\n\nHistory\n\nKotayk\nUrartu FC were founded as FC Banants by Sarkis Israelyan on 21 January 1992 in the village of Kotayk, representing the Kotayk Province. He named the club after his native village of Banants (currently known as Bayan). Between 1992 and 1995, the club was commonly referred to as Banants Kotayk. During the 1992 season, the club won the first Armenian Cup. At the end of the 1995 transitional season, Banants suffered a financial crisis. The club owners decided that it was better to merge the club with FC Kotayk of Abovyan, rather than disband it. In 2001, Banants demerged from FC Kotayk, and was moved from Abovyan to the capital Yerevan.\n\nYerevan\n\nFC Banants was relocated to Yerevan in 2001. At the beginning of 2003, Banants merged with FC Spartak Yerevan, but was able to limit the name of the new merger to FC Banants. Spartak became Banants's youth academy and later changed the name to Banants-2. Because of the merger, Banants acquired many players from Spartak Yerevan, including Samvel Melkonyan. After the merger, Banants took a more serious approach and have finished highly in the league table ever since. The club managed to lift the Armenian Cup in 2007.\nExperience is making way for youth for the 2008 and 2009 seasons. The departures of most of the experienced players have left the club's future to the youth. Along with two Ukrainian players, Ugandan international, Noah Kasule, has been signed.\n\nThe club headquarters are located on Jivani Street 2 of the Malatia-Sebastia District, Yerevan.\n\nDomestic\n\nEuropean\n\nStadium\n\nThe construction of the Banants Stadium was launched in 2006 in the Malatia-Sebastia District of Yerevan, with the assistance of the FIFA goal programme. It was officially opened in 2008 with a capacity of 3,600 seats. Further developments were implemented later in 2011, when the playing pitch was modernized and the capacity of the stadium was increased up to 4,860 seats (2,760 at the northern stand, 1,500 at the southern stand and 600 at the western stand).\n\nTraining centre/academy\nBanants Training Centre is the club's academy base located in the Malatia-Sebastia District of Yerevan. In addition to the main stadium, the centre houses 3 full-size training pitches, mini football pitches as well as an indoor facility. The current technical director of the academy is the former Russian footballer Ilshat Faizulin.\n\nFans\nThe most active group of fans is the South West Ultras fan club, mainly composed of residents from several neighbourhoods within the Malatia-Sebastia District of Yerevan, since the club is a de facto representer of the district. 
Members of the fan club benefit from events organized by the club and many facilities of the Banants training centre, such as the mini football pitch, the club store and other entertainments.\n\nAchievements\n Armenian Premier League\n Winner (1): 2013–14.\n Runner-up (5): 2003, 2006, 2007, 2010, 2018.\n\n Armenian Cup\n Winner (3): 1992, 2007, 2016.\n Runner-up (6): 2003, 2004, 2008, 2009, 2010, 2021–22\n\n Armenian Supercup\n Winner (1): 2014.\n Runner-up (5): 2004, 2007, 2009, 2010, 2016.\n\nCurrent squad\n\nOut on loan\n\nPersonnel\n\nTechnical staff\n\nManagement\n\nUrartu-2\n\nFC Banants' reserve squad play as FC Banants-2 in the Armenian First League. They play their home games at the training field with artificial turf of the Urartu Training Centre.\n\nManagerial history\n Varuzhan Sukiasyan (1992–94)\n Poghos Galstyan (July 1, 1996 – June 30, 1998)\n Oganes Zanazanyan (2001–05)\n Ashot Barseghyan (2005–06)\n Nikolay Kiselyov (2006–07)\n Jan Poštulka (2007)\n Nikolay Kostov (July 1, 2007 – April 8, 2008)\n Nedelcho Matushev (April 8, 2008 – June 30, 2008)\n Kim Splidsboel (2008)\n Armen Gyulbudaghyants (Jan 1, 2009 – Dec 1, 2009)\n Ashot Barseghyan (interim) (2009)\n Stevica Kuzmanovski (Jan 1, 2010 – Dec 31, 2010)\n Rafael Nazaryan (Jan 1, 2011 – Jan 15, 2012)\n Volodymyr Pyatenko (Jan 17, 2013 – June 30, 2013)\n Zsolt Hornyák (July 1, 2013 – May 30, 2015)\n Aram Voskanyan (July 1, 2015 – Oct 11, 2015)\n Tito Ramallo (Oct 12, 2015 – Oct 3, 2016)\n Artur Voskanyan (Oct 3, 2016 – Aug 11, 2018)\n Ilshat Faizulin (Aug 12, 2018 –Nov 24, 2019)\n Aleksandr Grigoryan (Nov 25, 2019 –Mar 10, 2021)\n Robert Arzumanyan (10 March 2021–24 June 2022)\n Dmitri Gunko (27 June 2022–)\n\nReferences\n\nExternal links\n Official website \n Banants at Weltfussball.de \n\n \nUrartu\nUrartu\nUrartu\nUrartuanswersFC Banants.lengthdatasetmultifieldqa_enlanguageenall_classes_idc4f2dfb06f56a185067d2f147c4d846aa0b895f1968eda12 |
|
|
inputWhat is the proposed approach in this research paper?context\\section{Introduction}\n\\label{sec:introduction}\n\nProbabilistic models have proven to be very useful in a lot of applications in signal processing where signal estimation is needed \\cite{rabiner1989tutorial,arulampalam2002tutorial,ji2008bayesian}. Some of their advantages are that 1) they force the designer to specify all the assumptions of the model, 2) they provide a clear separation between the model and the algorithm used to solve it, and 3) they usually provide some measure of uncertainty about the estimation.\n\nOn the other hand, adaptive filtering is a standard approach in estimation problems when the input is received as a stream of data that is potentially non-stationary. This approach is widely understood and applied to several problems such as echo cancellation \\cite{gilloire1992adaptive}, noise cancellation \\cite{nelson1991active}, and channel equalization \\cite{falconer2002frequency}.\n\nAlthough these two approaches share some underlying relations, there are very few connections in the literature. The first important attempt in the signal processing community to relate these two fields was the connection between a linear Gaussian state-space model (i.e. Kalman filter) and the RLS filter, by Sayed and Kailath \\cite{sayed1994state} and then by Haykin \\emph{et al.} \\cite{haykin1997adaptive}. The RLS adaptive filtering algorithm emerges naturally when one defines a particular state-space model (SSM) and then performs exact inference in that model. This approach was later exploited in \\cite{van2012kernel} to design a kernel RLS algorithm based on Gaussian processes.\n\nA first attempt to approximate the LMS filter from a probabilistic perspective was presented in \\cite{park2014probabilistic}, focusing on a kernel-based implementation. The algorithm of \\cite{park2014probabilistic} makes use of a Maximum a Posteriori (MAP) estimate as an approximation for the predictive step. However, this approximation does not preserve the estimate of the uncertainty in each step, therefore degrading the performance of the algorithm.\n\nIn this work, we provide a similar connection between state-space models and least-mean-squares (LMS). Our approach is based on approximating the posterior distribution with an isotropic Gaussian distribution. We show how the computation of this approximated posterior leads to a linear-complexity algorithm, comparable to the standard LMS. Similar approaches have already been developed for a variety of problems such as channel equalization using recurrent RBF neural networks \\cite{cid1994recurrent}, or Bayesian forecasting \\cite{harrison1999bayesian}. Here, we show the usefulness of this probabilistic approach for adaptive filtering.\n\nThe probabilistic perspective we adopt throughout this work presents two main advantages. Firstly, a novel LMS algorithm with adaptable step size emerges naturally with this approach, making it suitable for both stationary and non-stationary environments. The proposed algorithm has less free parameters than previous LMS algorithms with variable step size \\cite{kwong1992variable,aboulnasr1997robust,shin2004variable}, and its parameters are easier to be tuned w.r.t. these algorithms and standard LMS. Secondly, the use of a probabilistic model provides us with an estimate of the error variance, which is useful in many applications.\n\nExperiments with simulated and real data show the advantages of the presented approach with respect to previous works. 
However, we remark that the main contribution of this paper is that it opens the door to introducing more Bayesian machine learning techniques, such as variational inference and Monte Carlo sampling methods \\cite{barber2012bayesian}, to adaptive filtering.\\\\\n\n\n\\section{Probabilistic Model}\n\nThroughout this work, we assume the observation model to be linear-Gaussian with the following distribution,\n\n\\begin{equation}\np(y_k|{\\bf w}_k) = \\mathcal{N}(y_k;{\\bf x}_k^T {\\bf w}_k , \\sigma_n^2),\n\\label{eq:mess_eq}\n\\end{equation}\nwhere $\\sigma_n^2$ is the variance of the observation noise, ${\\bf x}_k$ is the regression vector and ${\\bf w}_k$ is the parameter vector to be sequentially estimated, both being $M$-dimensional column vectors.\n\nIn a non-stationary scenario, ${\\bf w}_k$ follows a dynamic process. In particular, we consider a diffusion process (random-walk model) with variance $\\sigma_d^2$ for this parameter vector:\n\n\\begin{equation}\np({\\bf w}_k|{\\bf w}_{k-1})= \\mathcal{N}({\\bf w}_k;{\\bf w}_{k-1}, \\sigma_d^2 {\\bf I}),\n\\label{eq:trans_eq}\n\\end{equation}\nwhere ${\\bf I}$ denotes the identity matrix. In order to initialize the recursion, we assume the following prior distribution on ${\\bf w}_0$,\n\n\\begin{equation}\np({\\bf w}_0)= \\mathcal{N}({\\bf w}_0;{\\bf 0}, \\sigma_d^2{\\bf I}).\\nonumber\n\\end{equation}\n\n\\section{Exact inference in this model: Revisiting the RLS filter}\n\nGiven the described probabilistic SSM, we would like to infer the posterior probability distribution $p({\\bf w}_k|y_{1:k})$. Since all the involved distributions are Gaussian, exact inference can be performed by a straightforward application of the rules of probability. The resulting posterior distribution is\n\\begin{equation}\np({\\bf w}_k|y_{1:k}) = \\mathcal{N}({\\bf w}_k;\\boldsymbol\\mu_{k}, \\boldsymbol\\Sigma_{k}), \\nonumber\n\\end{equation}\nin which the mean vector $\\boldsymbol\\mu_{k}$ is given by\n\\begin{equation}\n\\boldsymbol\\mu_k = \\boldsymbol\\mu_{k-1} + {\\bf K}_k (y_k - {\\bf x}_k^T \\boldsymbol\\mu_{k-1}){\\bf x}_k, \\nonumber\n\\end{equation}\nwhere we have introduced the auxiliary variable\n\\begin{equation}\n{\\bf K}_k = \\frac{ \\left(\\boldsymbol\\Sigma_{k-1} + \\sigma_d^2 {\\bf I}\\right)}{{\\bf x}_k^T \\left(\\boldsymbol\\Sigma_{k-1} + \\sigma_d^2 {\\bf I}\\right) {\\bf x}_k + \\sigma_n^2}, \\nonumber\n\\end{equation}\nand the covariance matrix $\\boldsymbol\\Sigma_k$ is obtained as\n\\begin{equation}\n\\boldsymbol\\Sigma_k = \\left( {\\bf I} - {\\bf K}_k{\\bf x}_k {\\bf x}_k^T \\right) \\left( \\boldsymbol\\Sigma_{k-1} +\\sigma_d^2 {\\bf I}\\right). \\nonumber\n\\end{equation}\nNote that the mode of $p({\\bf w}_k|y_{1:k})$, i.e. the maximum-a-posteriori (MAP) estimate, coincides with the RLS adaptive rule\n\\begin{equation}\n{{\\bf w}}_k^{(RLS)} = {{\\bf w}}_{k-1}^{(RLS)} + {\\bf K}_k (y_k - {\\bf x}_k^T {{\\bf w}}_{k-1}^{(RLS)}){\\bf x}_k .\n\\label{eq:prob_rls}\n\\end{equation}\nThis rule is similar to the one introduced in \\cite{haykin1997adaptive}.\n\nFinally, note that the covariance matrix $\\boldsymbol\\Sigma_k$ is a measure of the uncertainty of the estimate ${\\bf w}_k$ conditioned on the observed data $y_{1:k}$. Nevertheless, for many applications a single scalar summarizing the variance of the estimate may be sufficient. In the next section, we show how such a scalar is obtained naturally when $p({\\bf w}_k|y_{1:k})$ is approximated by an isotropic Gaussian distribution. 
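For concreteness, the exact-inference recursion above can be sketched in a few lines of NumPy (an illustrative sketch under the model of this section; the function name, the variable names, and the initialization suggested below are our own assumptions and are not taken from the paper):
\\begin{verbatim}
import numpy as np

def kalman_rls_step(mu, Sigma, x, y, sigma_n2, sigma_d2):
    # One exact-inference step for the linear-Gaussian SSM above.
    # mu, Sigma : posterior mean (M,) and covariance (M, M) at time k-1
    # x, y      : regressor (M,) and scalar observation at time k
    # Returns the posterior mean and covariance at time k.
    M = len(mu)
    # Predictive covariance under the random-walk transition model
    P = Sigma + sigma_d2 * np.eye(M)
    # Gain K_k: a matrix divided by a scalar normalization term
    K = P / (x @ P @ x + sigma_n2)
    # Posterior-mean (MAP) update, which coincides with the RLS rule
    mu_new = mu + (K @ x) * (y - x @ mu)
    # Posterior covariance update
    Sigma_new = (np.eye(M) - np.outer(K @ x, x)) @ P
    return mu_new, Sigma_new
\\end{verbatim}
In such a sketch the recursion would be started with mu = np.zeros(M) and Sigma = sigma_d2 * np.eye(M), matching the prior $p({\\bf w}_0)$ defined above.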
We also show that this isotropic approximation leads to an LMS-like estimation rule.\n\n\\section{Approximating the posterior distribution: LMS filter}\n\nThe proposed approach consists in approximating the posterior distribution $p({\\bf w}_k|y_{1:k})$, in general a multivariate Gaussian distribution with a full covariance matrix, by an isotropic (spherical) Gaussian distribution\n\n\\begin{equation}\n\\label{eq:aprox_post}\n\\hat{p}({\\bf w}_{k}|y_{1:k})=\\mathcal{N}({\\bf w}_{k};\\hat{\\boldsymbol\\mu}_{k}, \\hat{\\sigma}_{k}^2 {\\bf I} ).\n\\end{equation}\n\nIn order to estimate the mean and covariance of the approximate distribution $\\hat{p}({\\bf w}_{k}|y_{1:k})$, we propose to select the values that minimize the Kullback-Leibler divergence with respect to the original distribution, i.e.,\n\n\\begin{equation}\n\\{\\hat{\\boldsymbol\\mu}_k,\\hat{\\sigma}_k\\}=\\arg \\displaystyle{ \\min_{\\hat{\\boldsymbol\\mu}_k,\\hat{\\sigma}_k}} \\{ D_{KL}\\left(p({\\bf w}_{k}|y_{1:k})\\| \\hat{p}({\\bf w}_{k}|y_{1:k})\\right) \\}. \\nonumber\n\\end{equation}\n\nThe solution of the corresponding minimization problem is derived in Appendix A. In particular, the optimal mean and variance are found to be\n\\begin{equation}\n{\\hat{\\boldsymbol\\mu}}_{k} = {\\boldsymbol\\mu}_{k};~~~~~~ \\hat{\\sigma}_{k}^2 = \\frac{{\\sf Tr}\\{ \\boldsymbol\\Sigma_k\\} }{M}.\n\\label{eq:sigma_hat}\n\\end{equation}\n\nWe now show that by using \\eqref{eq:aprox_post} in the recursive predictive and filtering expressions we obtain an LMS-like adaptive rule. First, let us assume that we have an approximate posterior distribution at time $k-1$, $\\hat{p}({\\bf w}_{k-1}|y_{1:k-1}) = \\mathcal{N}({\\bf w}_{k-1};\\hat{\\boldsymbol\\mu}_{k-1}, \\hat{\\sigma}_{k-1}^2 {\\bf I} )$. Since all the involved distributions are Gaussian, the predictive distribution is obtained as\n\\begin{eqnarray}\n\\hat{p}({\\bf w}_k|y_{1:k-1}) &=& \\int p({\\bf w}_k|{\\bf w}_{k-1}) \\hat{p}({\\bf w}_{k-1}|y_{1:k-1}) d{\\bf w}_{k-1} \\nonumber\\\\\n&=& \\mathcal{N}({\\bf w}_k;\\hat{\\boldsymbol\\mu}_{k|k-1}, \\hat{\\boldsymbol\\Sigma}_{k|k-1}),\n\\label{eq:approx_pred}\n\\end{eqnarray}\nwhere the mean vector and covariance matrix are given by\n\\begin{eqnarray}\n\\hat{\\boldsymbol\\mu}_{k|k-1} &=& \\hat{\\boldsymbol\\mu}_{k-1} \\nonumber \\\\\n\\hat{\\boldsymbol\\Sigma}_{k|k-1} &=& (\\hat{\\sigma}_{k-1}^2 + \\sigma_d^2 ){\\bf I}. \\nonumber\n\\end{eqnarray}\n\nFrom \\eqref{eq:approx_pred}, the posterior distribution at time $k$ can be computed using Bayes' theorem and standard Gaussian manipulations (see for instance \\cite[Ch. 4]{murphy2012machine}). Then, we approximate the posterior $p({\\bf w}_k|y_{1:k})$ with an isotropic Gaussian,\n\\begin{equation}\n\\hat{p}({\\bf w}_k|y_{1:k}) = \\mathcal{N}({\\bf w}_k ; {\\hat{\\boldsymbol\\mu}}_{k}, \\hat{\\sigma}_k^2 {\\bf I} ),\\nonumber\n\\end{equation}\nwhere\n\\begin{eqnarray}\n{\\hat{\\boldsymbol\\mu}}_{k} &= & {\\hat{\\boldsymbol\\mu}}_{k-1}+ \\frac{ (\\hat{\\sigma}_{k-1}^2+ \\sigma_d^2) }{(\\hat{\\sigma}_{k-1}^2+ \\sigma_d^2) \\|{\\bf x}_k\\|^2 + \\sigma_n^2} (y_k - {\\bf x}_k^T {\\hat{\\boldsymbol\\mu}}_{k-1}){\\bf x}_k \\nonumber \\\\\n&=& {\\hat{\\boldsymbol\\mu}}_{k-1}+ \\eta_k (y_k - {\\bf x}_k^T {\\hat{\\boldsymbol\\mu}}_{k-1}){\\bf x}_k . 
\n\\label{eq:prob_lms}\n\\end{eqnarray}\nNote that, instead of a gain matrix ${\\bf K}_k$ as in Eq.~\\eqref{eq:prob_rls}, we now have a scalar gain $\\eta_k$ that operates as a variable step size.\n\nFinally, to obtain the posterior variance, which is our measure of uncertainty, we apply \\eqref{eq:sigma_hat} and the identity ${\\sf Tr}\\{{\\bf x}_k{\\bf x}_k^T\\}= {\\bf x}_k^T{\\bf x}_k= \\|{\\bf x}_k \\|^2$,\n\n\\begin{eqnarray}\n\\hat{\\sigma}_k^2 &=& \\frac{{\\sf Tr}(\\boldsymbol\\Sigma_k)}{M} \\nonumber\\\\\n&=& \\frac{1}{M}{\\sf Tr}\\left\\{ \\left( {\\bf I} - \\eta_k {\\bf x}_k {\\bf x}_k^T \\right) (\\hat{\\sigma}_{k-1}^2 +\\sigma_d^2)\\right\\} \\nonumber\\\\\n&=& \\left(1 - \\frac{\\eta_k \\|{\\bf x}_k\\|^2}{M}\\right)(\\hat{\\sigma}_{k-1}^2 +\\sigma_d^2).\n\\label{eq:sig_k}\n\\end{eqnarray}\nIf MAP estimation is performed, we obtain an LMS-like update with an adaptable step size,\n\n\\begin{equation}\n{\\bf w}_{k}^{(LMS)} = {\\bf w}_{k-1}^{(LMS)} + \\eta_k (y_k - {\\bf x}_k^T {\\bf w}_{k-1}^{(LMS)}){\\bf x}_k,\n\\label{eq:lms}\n\\end{equation}\nwith\n\\begin{equation}\n\\eta_k = \\frac{ (\\hat{\\sigma}_{k-1}^2+ \\sigma_d^2) }{(\\hat{\\sigma}_{k-1}^2+ \\sigma_d^2) \\|{\\bf x}_k\\|^2 + \\sigma_n^2}.\\nonumber\n\\end{equation}\nAt this point, several interesting remarks can be made:\n\n\\begin{itemize}\n\n\\item The adaptive rule \\eqref{eq:lms} has linear complexity, since it does not require computing the full matrix $\\boldsymbol\\Sigma_k$.\n\n\\item For a stationary model, we have $\\sigma_d^2=0$ in \\eqref{eq:prob_lms} and \\eqref{eq:sig_k}. In this case, the algorithm remains valid and both the step size $\\eta_k$ and the error variance $\\hat{\\sigma}_{k}^2$ vanish over time $k$.\n\n\\item Finally, the proposed adaptable step-size LMS has only two parameters, $\\sigma_d^2$ and $\\sigma_n^2$ (and only one, $\\sigma_n^2$, in stationary scenarios), in contrast to other variable step-size algorithms \\cite{kwong1992variable,aboulnasr1997robust,shin2004variable}. More interestingly, both $\\sigma_d^2$ and $\\sigma_n^2$ have a clear underlying physical meaning, and they can be estimated in many cases. We comment further on this in the next section.\n\\end{itemize}\n\n\\section{Experiments}\n\\label{sec:experiments}\n\nWe evaluate the performance of the proposed algorithm in both stationary and tracking experiments. In the first experiment, we estimate a fixed vector ${\\bf w}^{o}$ of dimension $M=50$. The entries of the vector are independently and uniformly drawn from the range $[-1,1]$. Then, the vector is normalized so that $\\|{\\bf w}^o\\|=1$. Regressors $\\boldsymbol{x}_{k}$ are zero-mean Gaussian vectors with identity covariance matrix. The additive noise variance is such that the SNR is $20$ dB. We compare our algorithm with standard RLS and three other LMS-based algorithms: LMS, NLMS \\cite{sayed2008adaptive}, and VSS-LMS \\cite{shin2004variable}.\\footnote{The parameters used for each algorithm are: for RLS, $\\lambda=1$, $\\epsilon^{-1}=0.01$; for LMS, $\\mu=0.01$; for NLMS, $\\mu=0.5$; and for VSS-LMS, $\\mu_{max}=1$, $\\alpha=0.95$, $C=10^{-4}$.} The probabilistic LMS algorithm of \\cite{park2014probabilistic} is not simulated because it is not suitable for stationary environments.\n\nIn stationary environments, the proposed algorithm has only one parameter, $\\sigma^2_n$. We simulate both the scenario where we have perfect knowledge of the noise variance (probLMS1) and the case where the assumed value of $\\sigma^2_n$ is $100$ times smaller than the actual value (probLMS2). 
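For reference, a minimal sketch of the proposed recursion, Eqs. \\eqref{eq:prob_lms} and \\eqref{eq:sig_k}, applied to synthetic data of this kind is given below (a NumPy illustration; the initial variance used to start the stationary recursion, the random seed, the number of samples, and all variable names are our own assumptions rather than details reported in the paper):
\\begin{verbatim}
import numpy as np

def prob_lms(X, y, sigma_n2, sigma_d2=0.0, s2_init=1.0):
    # Probabilistic LMS: returns the weight track and the scalar
    # posterior variance sigma_hat_k^2 at every step.
    # X: (N, M) regressors, y: (N,) observations.
    N, M = X.shape
    mu = np.zeros(M)          # mean of the prior on w_0
    s2 = s2_init              # assumed initial variance (see note above)
    mus, s2s = [], []
    for x, yk in zip(X, y):
        p = s2 + sigma_d2                    # predictive variance
        eta = p / (p * (x @ x) + sigma_n2)   # adaptive step size eta_k
        mu = mu + eta * (yk - x @ mu) * x    # LMS-like mean update
        s2 = (1.0 - eta * (x @ x) / M) * p   # isotropic posterior variance
        mus.append(mu.copy())
        s2s.append(s2)
    return np.array(mus), np.array(s2s)

# Toy stationary run with hypothetical settings (M = 50, SNR approx. 20 dB)
rng = np.random.default_rng(0)
M, N = 50, 2000
w_true = rng.uniform(-1, 1, M)
w_true /= np.linalg.norm(w_true)
X = rng.standard_normal((N, M))
sigma_n2 = 0.01
y = X @ w_true + np.sqrt(sigma_n2) * rng.standard_normal(N)
W, S2 = prob_lms(X, y, sigma_n2)
msd_db = 10 * np.log10(np.sum((W - w_true) ** 2, axis=1))
\\end{verbatim}
Each iteration only involves inner products with ${\\bf x}_k$, in agreement with the linear-complexity remark made above.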
The Mean-Square Deviation (${\\sf MSD} = {\\mathbb E} \\| {\\bf w}^o - {\\bf w}_k \\|^2$), averaged over $50$ independent simulations, is presented in Fig. \\ref{fig:msd_stationary}.\n\n\\begin{figure}[htb]\n\\centering\n\\begin{minipage}[b]{\\linewidth}\n \\centering\n \\centerline{\\includegraphics[width=\\textwidth]{results_stationary_MSD}}\n\\end{minipage}\n\\caption{Performance in terms of MSD of the probabilistic LMS with both optimal (probLMS1) and suboptimal (probLMS2) noise-variance settings, compared to LMS, NLMS, VSS-LMS, and RLS.}\n\\label{fig:msd_stationary}\n\\end{figure}\n\nThe performance of the probabilistic LMS is close to that of RLS (at a much lower computational cost) and largely outperforms previous variable step-size LMS algorithms proposed in the literature. Note that, when the model is stationary, i.e. $\\sigma^2_d=0$ in \\eqref{eq:trans_eq}, both the uncertainty $\\hat{\\sigma}^2_k$ and the adaptive step size $\\eta_k$ vanish over time. This implies that the error tends to zero as $k$ goes to infinity. Fig. \\ref{fig:msd_stationary} also shows that the proposed approach is not very sensitive to a bad choice of its only parameter, as demonstrated by the good results of probLMS2, which uses a $\\sigma^2_n$ that is $100$ times smaller than the optimal value.\n\n\\begin{figure}[htb]\n\\centering\n\\begin{minipage}[b]{\\linewidth}\n \\centering\n \\centerline{\\includegraphics[width=\\textwidth]{fig2_final}}\n\\end{minipage}\n\\caption{Real part of one coefficient of the measured and estimated channel in the second experiment. The shaded area represents two standard deviations from the prediction (the mean of the posterior distribution).}\n\\label{fig_2}\n\\end{figure}\n\n\\begin{table}[ht]\n\\begin{footnotesize}\n\\setlength{\\tabcolsep}{2pt}\n\\begin{center}\n\\begin{tabular}{|l@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|}\n\\hline\nMethod & LMS & NLMS & LMS-2013 & VSSNLMS & probLMS & RLS \\\\\n\\hline\n\\hline\nMSD (dB) &-28.45 &-21.07 &-14.36 &-26.90 &-28.36 &-25.97\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{Steady-state MSD of the different algorithms for the tracking of a real MISO channel.}\n\\label{tab:table_MSD}\n\\end{footnotesize}\n\\end{table}\n\\newpage\nIn a second experiment, we test the tracking capabilities of the proposed algorithm with real data from a wireless MISO channel acquired in a realistic indoor scenario. More details on the setup can be found in \\cite{gutierrez2011frequency}. Fig. \\ref{fig_2} shows the real part of one of the channel coefficients, together with the estimate of the proposed algorithm. The shaded area represents the estimated uncertainty for each prediction, i.e. $\\hat{\\mu}_k\\pm2\\hat{\\sigma}_k$. Since the experimental setup does not allow us to obtain the optimal values for the parameters, we fix them to the values that optimize the steady-state MSD. \\hbox{Table \\ref{tab:table_MSD}} shows the steady-state MSD of the estimate of the MISO channel for the different methods. As can be seen, the best tracking performance is obtained by standard LMS and the proposed method.\n\n\\section{Conclusions and Open Extensions}\n\\label{sec:conclusions}\n\nWe have presented a probabilistic interpretation of the least-mean-square filter. The resulting algorithm is an adaptable step-size LMS that performs well in both stationary and tracking scenarios. 
Moreover, it has fewer free parameters than previous approaches, and these parameters have a clear physical meaning. Finally, as stated in the introduction, one of the advantages of having a probabilistic model is that it is easily extensible:\n\n\\begin{itemize}\n\\item If, instead of using an isotropic Gaussian distribution in the approximation, we used a Gaussian with a diagonal covariance matrix, we would obtain a similar algorithm with a different step size and measure of uncertainty for each component of ${\\bf w}_k$. Although this model can be more descriptive, it requires tuning more parameters, and the parallelism with LMS vanishes.\n\\item Similarly, if we substitute the transition model of \\eqref{eq:trans_eq} by an Ornstein-Uhlenbeck process,\n\\begin{equation}\np({\\bf w}_k|{\\bf w}_{k-1})= \\mathcal{N}({\\bf w}_k;\\lambda {\\bf w}_{k-1}, \\sigma_d^2 {\\bf I}), \\nonumber\n\\label{eq:trans_eq_lambda}\n\\end{equation}\na similar algorithm is obtained, but with a forgetting factor $\\lambda$ multiplying ${\\bf w}_{k-1}^{(LMS)}$ in \\eqref{eq:lms}. This algorithm may have improved performance under this kind of autoregressive dynamics of ${\\bf w}_{k}$, although, again, the connection with standard LMS becomes weaker.\n\n\\item As in \\cite{park2014probabilistic}, the measurement model \\eqref{eq:mess_eq} can be changed to obtain similar adaptive algorithms for classification, ordinal regression, and Dirichlet regression for compositional data.\n\n\\item A similar approximation technique could be applied to more complex dynamical models, e.g. switching dynamical models \\cite{barber2010graphical}. The derivation of efficient adaptive algorithms that explicitly take into account a switch in the dynamics of the parameters of interest is a non-trivial and open problem, though the proposed approach could be useful here.\n\n\\item Finally, like standard LMS, this algorithm can be kernelized for application to non-linear estimation scenarios.\n\n\\end{itemize}\n\n\\begin{appendices}\n\n\\section{KL divergence between a general Gaussian distribution and an isotropic Gaussian}\n\\label{sec:kl}\n\nWe want to approximate $p_{{\\bf x}_1}({\\bf x}) = \\mathcal{N}({\\bf x}; \\boldsymbol\\mu_1,\\boldsymbol\\Sigma_1)$ by $p_{{\\bf x}_2}({\\bf x}) = \\mathcal{N}({\\bf x}; \\boldsymbol\\mu_2,\\sigma_2^2 {\\bf I})$. In order to do so, we have to compute the parameters of $p_{{\\bf x}_2}({\\bf x})$, namely $\\boldsymbol\\mu_2$ and $\\sigma_2^2$, that minimize the following Kullback-Leibler divergence,\n\n\\begin{eqnarray}\nD_{KL}(p_{{\\bf x}_1}\\| p_{{\\bf x}_2}) &=&\\int_{-\\infty}^{\\infty} p_{{\\bf x}_1}({\\bf x}) \\ln{\\frac{p_{{\\bf x}_1}({\\bf x})}{p_{{\\bf x}_2}({\\bf x})}}d{\\bf x} \\nonumber \\\\\n&= & \\frac{1}{2} \\{ -M + {\\sf Tr}(\\sigma_2^{-2}\\boldsymbol\\Sigma_1) \\nonumber \\\\\n & & + (\\boldsymbol\\mu_2 - \\boldsymbol\\mu_1 )^T \\sigma^{-2}_2{\\bf I} (\\boldsymbol\\mu_2 - \\boldsymbol\\mu_1 ) \\nonumber \\\\\n & & + \\ln \\frac{\\sigma_2^{2M}}{\\det\\boldsymbol\\Sigma_1} \\}. 
\n\\label{eq:divergence}\n\\end{eqnarray}\nUsing symmetry arguments, we obtain\n\\begin{equation}\n\\boldsymbol\\mu_2^{*} =\\arg \\displaystyle{ \\min_{\\boldsymbol\\mu_2}} \\{ D_{KL}(p_{{\\bf x}_1}\\| p_{{\\bf x}_2}) \\} = \\boldsymbol\\mu_1.\n\\end{equation}\nThen, \\eqref{eq:divergence} simplifies to\n\n\\begin{eqnarray}\nD_{KL}(p_{{\\bf x}_1}\\| p_{{\\bf x}_2}) = \\frac{1}{2}\\left\\lbrace -M + \\frac{{\\sf Tr}(\\boldsymbol\\Sigma_1)}{\\sigma_2^{2}} + \\ln \\frac{\\sigma_2^{2M}}{\\det\\boldsymbol\\Sigma_1}\\right\\rbrace.\n\\end{eqnarray}\nThe variance $\\sigma_2^2$ is then chosen to minimize this Kullback-Leibler divergence,\n\n\\begin{eqnarray}\n\\sigma_2^{2*} &=& \\arg\\min_{\\sigma_2^2} D_{KL}(p_{{\\bf x}_1}\\| p_{{\\bf x}_2}) \\nonumber \\\\\n &=& \\arg\\min_{\\sigma_2^2}\\{ \\sigma_2^{-2}{\\sf Tr}\\{\\boldsymbol\\Sigma_1\\} + M\\ln \\sigma_2^{2} \\} .\n\\end{eqnarray}\nDifferentiating and setting the derivative equal to zero leads to\n\n\\begin{equation}\n\\frac{\\partial}{\\partial \\sigma_2^2} \\left[ \\frac{{\\sf Tr}\\{\\boldsymbol\\Sigma_1\\}}{\\sigma_2^{2}} + M \\ln \\sigma_2^{2} \\right] = \\left.\\left( \\frac{M}{\\sigma_2^{2}}-\\frac{{\\sf Tr}\\{\\boldsymbol\\Sigma_1\\}}{(\\sigma_2^{2})^2}\\right)\\right|_{\\sigma_2^{2}=\\sigma_2^{2*}} = 0.\n\\nonumber\n\\end{equation}\nFinally, since the divergence has a single extremum in $\\mathbb{R}_+$,\n\\begin{equation}\n\\sigma_2^{2*} = \\frac{{\\sf Tr}\\{\\boldsymbol\\Sigma_1\\}}{M}.\n\\end{equation}\n\n\\end{appendices}\n\n\\vfill\n\\clearpage\n\n\\bibliographystyle{IEEEbib}\n", "answers": ["This research paper proposed an approach based on approximating the posterior distribution with an isotropic Gaussian distribution."], "length": 2556, "dataset": "multifieldqa_en", "language": "en", "all_classes": null, "_id": "394cb48c037481d97cdf1dbd7adef475061b9e77235842e2"}
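A brief numerical sanity check of the result derived in Appendix A may also be helpful (a sketch with a randomly generated covariance matrix; the helper function and all names below are our own, not part of the paper):
\\begin{verbatim}
import numpy as np

def kl_gauss_vs_isotropic(mu1, Sigma1, mu2, s2):
    # KL( N(mu1, Sigma1) || N(mu2, s2*I) ) for an M-dimensional Gaussian
    M = len(mu1)
    diff = mu2 - mu1
    return 0.5 * (-M + np.trace(Sigma1) / s2 + diff @ diff / s2
                  + M * np.log(s2) - np.log(np.linalg.det(Sigma1)))

rng = np.random.default_rng(1)
M = 5
A = rng.standard_normal((M, M))
Sigma1 = A @ A.T + M * np.eye(M)   # a positive-definite covariance
mu1 = rng.standard_normal(M)

s2_star = np.trace(Sigma1) / M     # claimed minimizer Tr(Sigma1)/M
grid = np.linspace(0.5 * s2_star, 2.0 * s2_star, 101)
kls = [kl_gauss_vs_isotropic(mu1, Sigma1, mu1, s2) for s2 in grid]
# The grid minimizer should land next to Tr(Sigma1)/M
assert abs(grid[int(np.argmin(kls))] - s2_star) < grid[1] - grid[0]
\\end{verbatim}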
|
|
|
|