Energy engineering
https://en.wikipedia.org/wiki/Energy_engineering
Energy engineering is a multidisciplinary field of engineering that focuses on optimizing energy systems, developing renewable energy technologies, and improving energy efficiency to meet the world's growing demand for energy in a sustainable manner. It encompasses areas such as energy harvesting and storage, energy conversion, energy materials, energy systems, energy efficiency, energy services, facility management, plant engineering, energy modelling, and environmental compliance. As one of the most recent engineering disciplines to emerge, energy engineering plays a critical role in addressing global challenges such as climate change, carbon reduction, and the transition from fossil fuels to renewable and sustainable energy sources. Energy engineering combines knowledge from physics, mathematics, and chemistry with economic and environmental engineering practices. Energy engineers apply their skills to increase efficiency and further develop renewable sources of energy. Their main job is to find the most efficient and sustainable ways to operate buildings and manufacturing processes. Energy engineers audit the use of energy in those processes and suggest ways to improve the systems, for example by recommending advanced lighting, better insulation, and more efficient heating and cooling of buildings. Although an energy engineer is concerned with obtaining and using energy in the most environmentally friendly ways, the field is not limited strictly to renewable energy such as hydro, solar, biomass, or geothermal; energy engineers are also employed in oil and natural gas extraction. Purpose The primary purpose of energy engineering is to optimize the production and use of energy resources while minimizing energy waste and reducing environmental impact. The discipline is vital for designing systems that consume less energy, meet carbon reduction targets, and improve the energy efficiency of processes in the industrial, commercial, and residential sectors. When applied to building design, heavy consideration is given to HVAC, lighting, and refrigeration, both to reduce energy loads and to increase the efficiency of existing systems. Energy engineering is increasingly seen as a major step toward meeting carbon reduction targets. Since buildings and houses consume over 40% of the United States' energy, the services an energy engineer performs are in demand.
Electrochemical reduction of carbon dioxide
https://en.wikipedia.org/wiki/Electrochemical_reduction_of_carbon_dioxide
The electrochemical reduction of carbon dioxide, also known as CO2RR, is the conversion of carbon dioxide (CO2) to more reduced chemical species using electrical energy. It represents one potential step in the broad scheme of carbon capture and utilization. CO2RR can produce diverse compounds including formate (HCOO−), carbon monoxide (CO), methane (CH4), ethylene (C2H4), and ethanol (C2H5OH). The main challenges are the relatively high cost of electricity (versus petroleum) and the fact that CO2 is often contaminated with O2 and must be purified before reduction. The first examples of CO2RR date from the 19th century, when carbon dioxide was reduced to carbon monoxide using a zinc cathode. Research in this field intensified in the 1980s following the oil embargoes of the 1970s. As of 2021, pilot- and demonstration-scale carbon dioxide electrochemical reduction was being developed by several companies, including Siemens, Dioxide Materials, Twelve, GIGKarasek, and OCOchem. Techno-economic analyses have been conducted to assess the key technical gaps and commercial potential of carbon dioxide electrolysis technology at near-ambient conditions. CO2RR electrolyzers have been developed to reduce other forms of CO2, including (bi)carbonates sourced from CO2 captured directly from the air using strong alkalis such as KOH, and carbamates sourced from flue gas effluents using alkali- or amine-based absorbents such as MEA or DEA. While the techno-economics of these systems are not yet favorable, they provide a near net-carbon-neutral pathway to produce commodity chemicals like ethylene at industrially relevant scales. Chemicals from carbon dioxide In carbon fixation, plants convert carbon dioxide into sugars, from which many biosynthetic pathways originate. The catalyst responsible for this conversion, RuBisCO, is the most common protein on Earth. Some anaerobic organisms employ enzymes to convert CO2 to carbon monoxide, from which fatty acids can be made. In industry, a few products are made from CO2, including urea, salicylic acid, methanol, and certain inorganic and organic carbonates. In the laboratory, carbon dioxide is sometimes used to prepare carboxylic acids in a process known as carboxylation. An electrochemical CO2 electrolyzer that operates at room temperature at an industrial-scale cell size (15,000 cm2) was announced by OCOchem in April 2024 as part of an R&D contract issued by the US Army. The CO2 electrolyzer was reported to be the largest in the world, with a cathode surface
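For orientation, a few representative CO2RR half-reactions (balanced here in acidic conditions; these are standard textbook reactions, not taken from the record above) illustrate why product selectivity is such a challenge: the more valuable products require many more electrons per molecule, and the competing hydrogen evolution reaction is always available.

$$\begin{aligned}
\mathrm{CO_2 + 2\,H^+ + 2\,e^-} &\rightarrow \mathrm{CO + H_2O} \\
\mathrm{CO_2 + 2\,H^+ + 2\,e^-} &\rightarrow \mathrm{HCOOH} \\
\mathrm{CO_2 + 8\,H^+ + 8\,e^-} &\rightarrow \mathrm{CH_4 + 2\,H_2O} \\
\mathrm{2\,CO_2 + 12\,H^+ + 12\,e^-} &\rightarrow \mathrm{C_2H_4 + 4\,H_2O} \\
\mathrm{2\,H^+ + 2\,e^-} &\rightarrow \mathrm{H_2} \quad \text{(competing HER)}
\end{aligned}$$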
Electrochemical energy conversion
https://en.wikipedia.org/wiki/Electrochemical_energy_conversion
Electrochemical energy conversion is a field of energy technology concerned with electrochemical methods of energy conversion, including fuel cells and photoelectrochemical cells. The field also includes electrical storage devices such as batteries and supercapacitors. It is increasingly important in the context of automotive propulsion systems, where more powerful, longer-running batteries have enabled longer run times for electric vehicles. Such systems include the fuel cells and photoelectrochemical cells mentioned above. See also Bioelectrochemical reactor, Chemotronics, Electrochemical cell, Electrochemical engineering, Electrochemical reduction of carbon dioxide, Electrofuels, Electrohydrogenesis, Electromethanogenesis, Enzymatic biofuel cell, Photoelectrochemical cell, Photoelectrochemical reduction of CO2
Hager Group
https://en.wikipedia.org/wiki/Hager_Group
Hager Group is a manufacturer of electrical installations for residential, commercial and industrial buildings, based in Blieskastel, Germany. The company has been family-run and owned ever since its foundation in 1955. Hager Group provides products and services ranging from energy distribution and cable management to intelligent building automation and security systems under the brand Hager. Hager Group also owns the brands Berker, Bocchiotti, Daitem, Diagral, Elcom and E3/DC. In 2018, Hager Group was the world market leader in electrical installation systems. In August 2019, the group was ranked number 128 in the top 500 family-owned businesses in Germany by the magazine Die Deutsche Wirtschaft. History In 1955, Hager oHG, elektrotechnische Fabrik was founded by brothers Oswald and Hermann Hager, together with their father Peter Hager, in Ensheim in the Saarland region of Germany. Since 1945, Saarland had been under the economic control of France and had no access to the German market; however, Hager wanted to gain a foothold in both markets. In 1959, the Hager brothers founded their first foreign subsidiary, Hager Electro S.A., in Obernai, Alsace, in north-eastern France. In 1966, Hager began systematic training of its electricians, whose expertise created a culture of customer loyalty that continues to this day. Hager's modular rotary fuse carrier was patented in Germany in 1968 and in France in 1970. At the same time, the first mass-produced distribution board, the Hager-Rapid-System, was launched on the French market. In 1973, Hager achieved sales of 43 million Deutsche Marks in Germany, and in 1974 the company reached a turnover of 22 million francs in France. In 1976, Hager launched the mini Gamma enclosure, and in 1982 the company started producing the first residual-current circuit breakers (RCCBs) in Germany. A new production facility with a high-bay warehouse was opened in Blieskastel. Hager Group began to market itself as a complete service provider for electrical installations in buildings in the 1980s, setting up sales companies in Europe (Switzerland and Great Britain). In the mid-1990s, Hager set up distribution channels in the United Arab Emirates (Dubai), Singapore, Malaysia, Hong Kong, China, Australia and New Zealand. In 2007, Hager Group became a European Company: Hager SE. Locations Hager Group has 22 manufacturing sites in 10 countries across the world. Components for the respective markets are manufactured
Hydrogen evolution reaction
https://en.wikipedia.org/wiki/Hydrogen_evolution_reaction
Hydrogen evolution reaction (HER) is a chemical reaction that yields H2. The conversion of protons to H2 requires reducing equivalents and usually a catalyst. In nature, HER is catalyzed by hydrogenase enzymes, which rely on iron- and nickel-based catalysts. Commercial electrolyzers typically employ supported nickel-based catalysts. HER in electrolysis HER is a key reaction in the electrolysis of water for the production of hydrogen, both for industrial energy applications and for small-scale laboratory research. Due to the abundance of water on Earth, hydrogen production by electrolysis is a potentially scalable process for fuel generation. It is an alternative to steam methane reforming, which has significant greenhouse gas emissions, and scientists are therefore looking to improve and scale up electrolysis processes that have fewer emissions. Electrolysis mechanism In acidic conditions, the hydrogen evolution reaction follows $2\,\mathrm{H^+} + 2\,\mathrm{e^-} \rightarrow \mathrm{H_2}$. In neutral or alkaline conditions, the reaction follows $2\,\mathrm{H_2O} + 2\,\mathrm{e^-} \rightarrow \mathrm{H_2} + 2\,\mathrm{OH^-}$. Both of these mechanisms occur in industrial practice at the cathode side of the electrolyzer, where hydrogen evolution takes place. In acidic conditions the process is referred to as proton exchange membrane (PEM) electrolysis, while in alkaline conditions it is referred to simply as alkaline electrolysis. Historically, alkaline electrolysis has been the dominant method of the two, though PEM has recently begun to grow due to the higher current density that can be achieved in PEM electrolysis. Catalysts for HER The HER process is more efficient in the presence of catalysts. Commercial alkaline electrolyzers use nickel-based catalysts at the cathode and steel at the anode; the alkalinity of the electrolyte in these processes enables the use of such less expensive catalysts. Proton exchange membrane technology is an alternative to conventional high-pressure electrolyzers. In PEM electrolyzers, the standard catalyst for HER is platinum supported on carbon (Pt/C), used at the cathode. The performance of a catalyst can be characterized by the level of adsorption of hydrogen onto binding sites of the metal surface, as well as by the overpotential of the reaction as current density increases. Anion exchange membrane (AEM) water electrolyzers are newly developed electrolyzers. In AEM electrolyzers, the standard catalysts for HER are still non-precious metal-based catalysts, such as nickel or iron
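Since the record characterizes catalysts by overpotential versus current density, a minimal sketch of that relationship using the Tafel equation may help; the exchange current densities and Tafel slopes below are illustrative assumptions, not values from the record.

```python
import math

def her_overpotential(j, j0, tafel_slope_mv):
    """Tafel approximation: overpotential (V) needed to drive current
    density j (A/cm^2) on a catalyst with exchange current density
    j0 (A/cm^2) and a Tafel slope given in mV/decade."""
    return (tafel_slope_mv / 1000.0) * math.log10(j / j0)

# Illustrative (assumed) parameters for a Pt-like and a Ni-like catalyst.
for name, j0, slope in [("Pt/C (assumed)", 1e-3, 30), ("Ni (assumed)", 1e-6, 120)]:
    eta = her_overpotential(1.0, j0, slope)  # at 1 A/cm^2
    print(f"{name}: ~{eta * 1000:.0f} mV overpotential at 1 A/cm^2")
```

With these assumed numbers the Pt-like catalyst needs roughly 90 mV and the Ni-like one roughly 720 mV at the same current density, which is the sense in which a "better" HER catalyst has a lower overpotential.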
Industrial Assessment Center
https://en.wikipedia.org/wiki/Industrial_Assessment_Center
There are 31 Industrial Assessment Centers in the United States as of June 2021. These centers are located at universities across the US and are funded by the United States Department of Energy (DOE) to spread ideas relating to industrial energy conservation. The centers conduct research into energy conservation techniques for industrial applications, accomplished by performing energy audits or assessments at manufacturers near each center. The IAC program has achieved over $890 million of implemented and $2.6 billion of recommended energy cost savings since its inception. History Industrial Assessment Centers (formerly called the Energy Analysis and Diagnostic Center (EADC) program) were created by the Department of Commerce in 1976 and later moved to the DOE. The IAC program is administered through the Advanced Manufacturing Office under the Office of Energy Efficiency and Renewable Energy. The centers were created to help small and medium-sized manufacturing facilities cut back on unnecessary costs from inefficient energy use, ineffective production procedures, excess waste production, and other production-related problems. According to instructions from DOE, the centers are currently required to focus only on reducing wasted energy and increasing energy efficiency. While this remains the primary focus of the assessments, waste reduction and productivity improvements are still commonly recommended. Other Benefits In addition to providing technical support to small and mid-sized manufacturers through energy assessments, the IAC program offers several other important benefits. Apart from the routine energy audits, which cover a broad scope of industrial settings and subsystems, the IACs provide technical material and workshops promoting energy efficiency. IAC Database Rutgers University maintains a large database of energy efficiency projects in the industrial sector. The database contains recommendations from every audit completed by an IAC dating back to 1980. As of June 2021, the IAC program had finished 19,470 assessments and made over 146,500 recommendations. This database is free and open to the public. IAC Alumni The IAC program helps train the next generation of energy efficiency engineers. Hundreds of students participate in the program each year, and over 56% of those students pursue careers in energy or energy efficiency. Participating Universities Arizona State University Boise State University Case Western Reserve U
NextGenPower
https://en.wikipedia.org/wiki/NextGenPower
NextGenPower is an integrated project which aims to demonstrate new alloys and coatings in boilers, turbines and interconnecting pipework. The concept of NextGenPower is to perform innovative demonstrations that will significantly contribute to the EU target of increasing the efficiency of existing and new-build pulverized coal power plants. Background Carbon capture and storage (CCS) is envisaged to be the main transition technology for complying with the CO2 reduction targets set by the European Commission. However, CCS has the drawback that the electrical efficiency of a coal-fired power plant drops significantly. The efficiency loss caused by CCS in coal-fired power plants ranges from 4 to 12 percentage points, depending on the CCS technology chosen. To overcome this drawback, one has to increase the plant efficiency or the share of biomass co-firing. Both options are limited by the quality of currently available coatings and materials. Live steam temperatures well in excess of 700 °C are necessary to compensate for the efficiency loss caused by CCS and to achieve a net efficiency of 45%. NextGenPower aims to develop and demonstrate coatings and materials that can be applied in ultra-supercritical (in excess of 700 °C) conditions. Summary The NextGenPower project was due to start on 1 May 2010 with a duration of 48 months. The budget is €10.3 million, with an EU contribution making up €6 million of the budget. Objectives The following scientific and technological objectives have been defined for NextGenPower, leading to the following project activities: demonstrating the application of precipitation-hardened nickel alloys for pulverized coal-fired boilers, with allowable levels of creep and fatigue at the high temperatures envisaged with USC; demonstrating the application of cost-effective fireside coatings, compatible with affordable and available tube alloys, for coal-fired boilers capable of withstanding the corrosive conditions envisaged with USC and the environment of biomass co-firing under different conditions; demonstrating the application of cost-effective steam-side coatings/protective layers to extend the life of boiler tubes and interconnecting pipework, and to facilitate the use of cheaper alternative materials without compromising component life or reliability; demonstrating the application of Ni-alloys for interconnecting pipework between boiler and steam turbine withstanding the high temperatures envisaged with USC and to explore alt
Wind engineering
https://en.wikipedia.org/wiki/Wind_engineering
Wind engineering is a subset of mechanical engineering, structural engineering, meteorology, and applied physics that analyzes the effects of wind in the natural and the built environment and studies the possible damage, inconvenience or benefits which may result from wind. In the field of engineering it includes strong winds, which may cause discomfort, as well as extreme winds, such as in a tornado, hurricane or heavy storm, which may cause widespread destruction. In the fields of wind energy and air pollution it also includes low and moderate winds, as these are relevant to electricity production and the dispersion of contaminants. Wind engineering draws upon meteorology, fluid dynamics, mechanics, geographic information systems, and a number of specialist engineering disciplines, including aerodynamics and structural dynamics. The tools used include atmospheric models, atmospheric boundary layer wind tunnels, and computational fluid dynamics models. Wind engineering involves, among other topics: wind impact on structures (buildings, bridges, towers); wind comfort near buildings; effects of wind on the ventilation system in a building; wind climate for wind energy; and air pollution near buildings. Wind engineering may be considered by structural engineers to be closely related to earthquake engineering and explosion protection. Some sports stadiums, such as Candlestick Park and Arthur Ashe Stadium, are known for their strong, sometimes swirling winds, which affect the playing conditions. History Wind engineering as a separate discipline can be traced to the UK in the 1960s, when informal meetings were held at the National Physical Laboratory, the Building Research Establishment, and elsewhere. The term "wind engineering" was first coined in 1970. Alan Garnett Davenport was one of the most prominent contributors to the development of wind engineering. He is well known for developing the Alan Davenport wind-loading chain, or in short "wind-loading chain", which describes how different components contribute to the final load calculated on the structure. Wind loads on buildings The design of buildings must account for wind loads, and these are affected by wind shear. For engineering purposes, a power-law wind-speed profile may be defined as $v_z = v_g \left(\frac{z}{z_g}\right)^{1/\alpha}$, where $v_z$ is the wind speed at height $z$, $v_g$ is the gradient wind speed at gradient height $z_g$, and $\alpha$ is an exponent that depends on terrain roughness.
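A minimal sketch of this power-law profile follows; the gradient wind speed, gradient height, and exponent are illustrative assumptions for open terrain, not values from the record.

```python
def wind_speed(z, v_g=40.0, z_g=270.0, alpha=7.0):
    """Power-law wind profile: speed (m/s) at height z (m), given a
    gradient wind speed v_g (m/s) at gradient height z_g (m) and an
    exponent alpha that depends on terrain roughness."""
    return v_g * (z / z_g) ** (1.0 / alpha)

# Speeds at typical building heights (all parameter values assumed).
for z in (10, 50, 100, 270):
    print(f"z = {z:3d} m: v = {wind_speed(z):.1f} m/s")
```

The profile captures wind shear: speed grows with height up to the gradient height, which is why taller structures see substantially higher design wind loads.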
Desander
https://en.wikipedia.org/wiki/Desander
Desanders and desilters are solids-control equipment with a set of hydrocyclones that separate sand and silt from the drilling fluids in drilling rigs. Desanders are installed on top of the mud tank following the shale shaker and the degasser, but before the desilter. The desander removes abrasive solids from the drilling fluids which cannot be removed by the shakers. Normally, the solid diameters that desanders separate are 45–74 μm, and 15–44 μm for desilters. A centrifugal pump is used to pump the drilling fluids from the mud tank into the set of hydrocyclones. Solids control Desanders have no moving parts. The larger the internal diameter of the desander, the greater the amount of drilling fluid it is able to process, and the larger the size of the solids removed. A desander with a 10 inch (250 mm) cone is able to remove 50% of solids in the 40–50 μm range at a flow rate of 500 US gallons per minute (32 L/s), while a desilter with a 4 inch (100 mm) cone is able to remove 50% of solids in the 15–20 μm range at a flow rate of 60 US gallons per minute (3.8 L/s). Micro-fine separators are able to remove 50% of solids in the 10–15 μm range at a flow rate of 15 US gallons per minute (0.95 L/s). A desander is typically positioned next-to-last in the arrangement of solids-control equipment, with a centrifuge as the subsequent processing unit. Desanders are preceded by gas busters, gumbo removal equipment (if utilized), shale shakers, mud cleaners (if utilized), and a vacuum degasser. Desanders are widely used in oilfield drilling; practice has proved that hydrocyclone desanders are economical and effective equipment. See also Drilling rig (petroleum) for a diagram of a drilling rig; Silt fence; Silt
Miller and Lents
https://en.wikipedia.org/wiki/Miller_and_Lents
Miller and Lents, Ltd. is a petroleum consulting company based in Houston, Texas. The firm provides services including reserves certifications, audits, and independent evaluations. They prepare evaluations according to the standards of the United States Securities and Exchange Commission (SEC) Regulation S-X and the Petroleum Resources Management System (PRMS) published by the Society of Petroleum Engineers (SPE). Current operations Board of directors The Chairman of the Board is Robert Oberst. The Senior Vice Presidents are Leslie Fallon and Gary Knapp. Consulting activities Reserves evaluations Miller and Lents, Ltd. prepares reserves estimates by applying both SEC and SPE-PRMS standards. These estimates include the assessment of developed and undeveloped reserves and classification according to Proved, Probable, Possible, Contingent, and Prospective Resources definitions. Economics They also evaluate relevant economic parameters and create financial reports for the United States Securities and Exchange Commission (SEC), the London Stock Exchange (LSE), and the Alternative Investment Market (AIM); cash flow projections; forecasts of future prices; and estimates of Fair Market Value. Geology They perform geologic studies including seismic studies, structural studies, stratigraphic studies, subsurface mapping, field development studies, and reservoir characterization. Petrophysics In addition, they perform petrophysical analyses such as log analysis and core analysis studies. Areas of operation Miller and Lents, Ltd. provides services to domestic and international clients, with a significant portion of their business coming from clients operating in Russia. In addition to evaluations for clients operating in Russia, Miller and Lents, Ltd. has performed evaluations for clients in the United States, Azerbaijan, Israel, Kazakhstan, the United Kingdom, Australia, and Lithuania, among others. History In 1948, J. R. Butler and Martin Miller formed an oil and gas consulting partnership known as J. R. Butler and Company. Max Lents, who was not a partner at the beginning of J. R. Butler and Company, was considered an original founding partner when he joined the firm a year later. The company name then changed to Butler, Miller and Lents. In 1970 its name was changed to Butler, Miller and Lents, Ltd., at which time it became a Subchapter S Corporation. In 1976, the name of the firm was changed to its current name, Miller and Lents, Ltd., after J. R
Apparent viscosity
https://en.wikipedia.org/wiki/Apparent_viscosity
In fluid mechanics, apparent viscosity (sometimes denoted $\eta$) is the shear stress applied to a fluid divided by the shear rate: $\eta = \frac{\tau}{\dot{\gamma}}$. For a Newtonian fluid, the apparent viscosity is constant and equal to the Newtonian viscosity of the fluid, but for non-Newtonian fluids, the apparent viscosity depends on the shear rate. Apparent viscosity has the SI derived unit Pa·s (pascal-second), but the centipoise is frequently used in practice (1 mPa·s = 1 cP). Application A single viscosity measurement at a constant speed in a typical viscometer is a measurement of the instrument viscosity of a fluid (not the apparent viscosity). In the case of non-Newtonian fluids, measurement of apparent viscosity without knowledge of the shear rate is of limited value: the measurement cannot be compared to other measurements if the speed and geometry of the two instruments are not identical. An apparent viscosity that is reported without the shear rate or information about the instrument and settings (e.g. speed and spindle type for a rotational viscometer) is meaningless. Multiple measurements of apparent viscosity at different, well-defined shear rates can give useful information about the non-Newtonian behaviour of a fluid, and allow it to be modeled. Power-law fluids In many non-Newtonian fluids, the shear stress due to viscosity, $\tau_{xy}$, can be modeled by $\tau_{xy} = k \left(\frac{du}{dy}\right)^{n}$, where $k$ is the consistency index, $n$ is the flow behavior index, and $du/dy$ is the shear rate, with velocity $u$ and position $y$. These fluids are called power-law fluids. To ensure that $\tau_{xy}$
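A minimal sketch of apparent viscosity for a power-law fluid follows directly from the two formulas above, since $\eta = \tau/\dot{\gamma} = k\,\dot{\gamma}^{\,n-1}$; the consistency and flow-behavior indices used here are illustrative assumptions.

```python
def apparent_viscosity(shear_rate, k, n):
    """Apparent viscosity of a power-law fluid:
    eta = tau / gamma_dot = k * gamma_dot**(n - 1).
    Result is in Pa*s when k is in Pa*s**n and shear_rate in 1/s."""
    return k * shear_rate ** (n - 1)

# Shear-thinning example (n < 1): viscosity falls as shear rate rises.
for rate in (1.0, 10.0, 100.0):
    eta = apparent_viscosity(rate, k=0.5, n=0.6)
    print(f"shear rate {rate:6.1f} 1/s -> eta = {eta * 1000:.1f} mPa*s")
```

This also illustrates the point made above: a single reading at one shear rate (say 10 1/s) tells you little about the fluid's behavior at another.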
Squeeze job
https://en.wikipedia.org/wiki/Squeeze_job
Squeeze job, or squeeze cementing, is a term often used in the oilfield to describe the process of injecting cement slurry into a zone, generally for pressure-isolation purposes. Background The term probably originated from the concept that enough water is "squeezed" out of the slurry to render it unflowable, so the portion that has actually entered the zone will stay in place when the squeeze pressure is released. After surface indications (e.g., pressure reaching a predetermined maximum) that a squeeze has been attained, any still-pumpable cement slurry remaining in the drill pipe or tubing ideally can be reverse-circulated out before it sets. Usually the zone to be squeezed is isolated from above with a packer (and possibly from below with a bridge plug), but sometimes the squeezing pressure is applied to the entire casing string in what is known as a bradenhead squeeze (named for an old manufacturer of casing heads). Even if a drilling rig is on location, pumping operations usually are done by a service company's cementing unit, which can easily mix small batches of cement slurry, measure displacement volume accurately to spot the slurry on bottom, pump at very low rates and high pressures during the squeeze itself, and finally measure volumes accurately again when reversing out any excess slurry. A squeeze manifold is a compact arrangement of valves and pressure gauges that allows monitoring of the drill pipe and casing pressures throughout the job, and facilitates quick switching of the pumping pressure to either side while the fluid returning from the other side of the well is directed to the mud pit or a disposal pit or tank. The generic term "squeeze" can also apply to the injection of generally small volumes of other liquids (e.g., treating fluids) into a zone under pressure. Bullhead squeeze (or just plain bullheading) refers to pumping kill-weight mud down the casing beneath closed blowout preventers in a kick-control situation when it isn't feasible to circulate the kick out from the bottom. See also Drilling rig (petroleum) for a diagram of a drilling rig.
Dipmeter Advisor
https://en.wikipedia.org/wiki/Dipmeter_Advisor
The Dipmeter Advisor was an early expert system developed in the 1980s by Schlumberger, with the help of artificial-intelligence workers at MIT, to aid in the analysis of data gathered during oil exploration. The Advisor was not merely an inference engine and a knowledge base of ~90 rules, but a full-fledged workstation, running on one of Xerox's 1100 Dolphin Lisp machines (or in general on Xerox's "1100 Series Scientific Information Processors" line) and written in INTERLISP-D, with a pattern-recognition layer which in turn fed a GUI menu-driven interface. It was developed by a number of people, including Reid G. Smith, James D. Baker, and Robert L. Young. It was influential not because of any great technical leaps, but rather because it was so successful for Schlumberger's oil divisions and because it was one of the few success stories of the AI bubble to receive wide publicity before the AI winter. The AI rules of the Dipmeter Advisor were primarily derived from Al Gilreath, a Schlumberger interpretation engineer who developed the "red, green, blue" pattern method of dipmeter interpretation. Unfortunately, this method had limited application in more complex geological environments outside the Gulf Coast, and the Dipmeter Advisor was primarily used within Schlumberger as a graphical display tool to assist interpretation by trained geoscientists, rather than as an AI tool for use by novice interpreters. However, the tool pioneered a new approach to workstation-assisted graphical interpretation of geological information.
Integrated operations
https://en.wikipedia.org/wiki/Integrated_operations
In the petroleum industry, integrated operations (IO) refers to the integration of people, disciplines, organizations, work processes and information and communication technology to make smarter decisions. In short, IO is collaboration with a focus on production. Contents of the term The most striking part of IO has been the use of always-on videoconference rooms between offshore platforms and land-based offices. This includes broadband connections for sharing of data and video surveillance of the platform. This has made it possible to move some personnel onshore and use the existing human resources more efficiently. Instead of having, for example, an expert in geology on duty at every platform, the expert may be stationed on land and be available for consultation to several offshore platforms. It is also possible for a team at an office in a different time zone to consult for the night shift of the platform, so that no land-based workers need work at night. Splitting the team between land and sea demands new work processes, which together with ICT are the two main focus points for IO. Tools like videoconferencing and 3D visualization also create opportunities for new, more cross-disciplinary cooperation. For instance, a shared 3D visualization may be tailored to each member of the group, so that the geologist gets a visualization of the geological structures while the drilling engineer focuses on visualizing the well. Here, real-time measurements from the well are important, but the downhole bandwidth has previously been very restricted. Improvements in bandwidth, better measurement devices, better aggregation and visualization of this information, and improved models that simulate the rock formations and wellbore currently all feed on each other. An important task where all these improvements play together is real-time production optimization. In the process industry in general, the term is used to describe the increased cooperation, independent of location, between operators, maintenance personnel, electricians, production management as well as business management and suppliers to provide a more streamlined plant operation. By deploying IO, the petroleum industry draws on lessons from the process industry. This can be seen in a larger focus on the whole production chain and management ideas imported from the production and process industries. A prominent idea in this regard is real-time optimization of the whole value chain, from long-term management of the oil r
Petroleum production engineering
https://en.wikipedia.org/wiki/Petroleum_production_engineering
Petroleum production engineering is a subset of petroleum engineering. Petroleum production engineers design and select subsurface equipment to produce oil and gas well fluids. They often hold degrees in petroleum engineering, although they may come from other technical disciplines (e.g., mechanical engineering, chemical engineering, physics) and subsequently be trained by an oil and gas company. Overview Petroleum production engineers' responsibilities include: evaluating inflow and outflow performance between the reservoir and the wellbore; designing completion systems, including tubing selection, perforating, sand control, matrix stimulation, and hydraulic fracturing; selecting artificial lift equipment, including sucker-rod lift (typically beam pumping), gas lift, electrical submersible pumps, subsurface hydraulic pumps, progressing-cavity pumps, and plunger lift; and selecting (but not designing) equipment for surface facilities that separate and measure the produced fluids (oil, natural gas, water, and impurities), prepare the oil and gas for transportation to market, and handle disposal of any water and impurities. Note: surface equipment is designed by chemical engineers and mechanical engineers according to data provided by the production engineers. Here, outflow is defined as flow from the casing perforations to the surface facilities. Suggested reading Journal of Petroleum Technology, Society of Petroleum Engineers
Pipe rack
https://en.wikipedia.org/wiki/Pipe_rack
Structural steel pipe racks typically support pipes, power cables and instrument cable trays in petrochemical, chemical and power plants. Occasionally, pipe racks may also support mechanical equipment, vessels and valve access platforms. Main pipe racks generally transfer material between equipment and storage or utility areas. Storage racks found in warehouses are not pipe racks, even if they store lengths of pipe. A pipe rack is the main artery of a process unit. Pipe racks carry process and utility piping and may also include instrument and cable trays as well as equipment mounted over all of these. Pipe racks consist of a series of transverse frames spaced at uniform intervals along the length of the pipe system, typically around 20 ft apart. To allow maintenance access under the pipe rack, the transverse frames are typically moment frames, connected to one another by longitudinal struts. There are different types of pipes on the pipe rack. Utility pipes include steam, cooling water, extinguishing water, fuel oil, and so on; these pipes are mostly located in the middle of a one-level pipe rack, or on the top level when there are two levels. Then there are the process pipes, which carry product that is part of the chemical reaction itself; these are placed outside the utility pipes (especially if they are heavy) or on the bottom level when there are multiple levels. Lastly, relief and flare pipes fulfill a safety goal: they protect the installation against excessive pressure and are always located on the outside of the rack.
Marsh funnel
https://en.wikipedia.org/wiki/Marsh_funnel
The Marsh funnel is a simple device for measuring viscosity by observing the time it takes a known volume of liquid to flow from a cone through a short tube. It is standardized for use by mud engineers to check the quality of drilling mud. Other cones with different geometries and orifice arrangements are called flow cones, but have the same operating principle. In use, the funnel is held vertically with the end of the tube closed by a finger. The liquid to be measured is poured through the mesh to remove any particles which might block the tube. When the fluid level reaches the mesh, the amount inside is equal to the rated volume. To take the measurement, the finger is released as a stopclock is started, and the liquid is allowed to run into a measuring container. The time in seconds is recorded as a measure of the viscosity. The Marsh Funnel Based on a method published in 1931 by H. N. Marsh, a Marsh cone is a flow cone with an aspect ratio of 2:1 and a working volume of at least a litre. A Marsh funnel is a Marsh cone with a particular orifice and a working volume of 1.5 litres. It consists of a cone 6 inches (152 mm) across and 12 inches (305 mm) in height, to the apex of which is fixed a tube 2 inches (50.8 mm) long and 3/16 inch (4.76 mm) in internal diameter. A 10-mesh screen is fixed near the top across half the cone. In American practice (and most of the oil industry) the volume collected is a quart. If water is used, the time should be 26 ± 0.5 seconds. If the time is less than this, the tube has probably been enlarged by erosion; if more, it may be blocked or damaged, and the funnel should be replaced. In some companies, and in Europe in particular, the volume collected is a litre, for which the water funnel time should be 28 seconds. Marsh himself collected 0.50 litre, for which the time was 18.5 seconds. The Marsh funnel time is often referred to as the Marsh funnel viscosity, and represented by the abbreviation FV. The unit (seconds) is often omitted. Formally, the volume should also be stated. The (quart) Marsh funnel time for typical drilling muds is 34 to 50 seconds, though mud mixtures to cope with some geological conditions may have a time of 100 or more seconds. While the most common use is for drilling muds, which are non-Newtonian fluids, the Marsh funnel is not a rheometer, because it only provides one measurement under one flow condition. However, the effective viscosity can be determined from the following simple formula: $\mu = \rho\,(t - 25)$, where $\mu$ is the effective viscosity in centipoise, $\rho$ is the fluid density in g/cm³, and $t$ is the quart funnel time in seconds.
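A small sketch of that conversion, assuming the formula above (effective viscosity in centipoise from density in g/cm³ and quart funnel time in seconds); the mud values are invented for illustration.

```python
def marsh_effective_viscosity(funnel_time_s, density_g_cm3):
    """Effective viscosity (cP) from the Marsh funnel (quart) time,
    using the approximation mu = rho * (t - 25)."""
    return density_g_cm3 * (funnel_time_s - 25.0)

# Sanity check with water: ~26 s at 1.0 g/cm^3 should give ~1 cP.
print(marsh_effective_viscosity(26.0, 1.0))   # ~1 cP
# A typical drilling mud (time and density assumed for illustration).
print(marsh_effective_viscosity(40.0, 1.2))   # ~18 cP
```

The water sanity check is a nice property of this approximation: it reproduces water's ~1 cP viscosity at the calibrated 26-second funnel time.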
Leverett J-function
https://en.wikipedia.org/wiki/Leverett_J-function
In petroleum engineering, the Leverett J-function is a dimensionless function of water saturation describing the capillary pressure, $$J(S_w) = \frac{p_c(S_w)\,\sqrt{k/\phi}}{\gamma \cos \theta}$$ where $S_w$ is the water saturation measured as a fraction, $p_c$ is the capillary pressure (in pascals), $k$ is the permeability (measured in m²), $\phi$ is the porosity (0–1), $\gamma$ is the surface tension (in N/m) and $\theta$ is the contact angle. The function is important in that it is constant for a given saturation within a reservoir, thus relating reservoir properties for neighboring beds. The Leverett J-function is an attempt to extrapolate capillary pressure data for a given rock to rocks that are similar but with differing permeability, porosity and wetting properties. It assumes that the porous rock can be modelled as a bundle of non-connecting capillary tubes, where the factor $\sqrt{k/\phi}$ is a characteristic length of the capillaries' radii. This function is also widely used in modeling two-phase flow in proton-exchange membrane fuel cells. A large degree of hydration is needed for good proton conductivity
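A minimal sketch of evaluating the J-function is below; the rock and fluid values are illustrative assumptions (a sandstone-like case), not data from the record.

```python
import math

def leverett_j(p_c, k, phi, gamma, theta_deg):
    """Dimensionless Leverett J-function:
    J = p_c * sqrt(k / phi) / (gamma * cos(theta)).
    p_c in Pa, k in m^2, phi as a fraction, gamma in N/m, theta in degrees."""
    return p_c * math.sqrt(k / phi) / (gamma * math.cos(math.radians(theta_deg)))

# Illustrative (assumed) values: 100 mD permeability, 20% porosity,
# water-oil interfacial tension 0.03 N/m, contact angle 20 degrees.
k = 100 * 9.869e-16  # 100 mD converted to m^2 (1 mD = 9.869e-16 m^2)
print(leverett_j(p_c=5000.0, k=k, phi=0.20, gamma=0.03, theta_deg=20.0))
```

Because J is dimensionless, a capillary pressure curve measured on one core can be rescaled to a neighboring bed by inverting the same expression with that bed's k and φ.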
Integrated asset modelling
https://en.wikipedia.org/wiki/Integrated_asset_modelling
Integrated asset modelling (IAM) is the generic term used in the oil industry for computer modelling of both the subsurface and the surface elements of a field development. Historically the reservoir has always been modelled separately from the surface network and the facilities. In order to capture the interaction between those two or more standalone models, several time-consuming iterations were required. For example, a change in the water breakthrough leads to a change in the deliverability of the surface network, which in turn leads to a production acceleration or deceleration in the reservoir. In order to go through this lengthy process more quickly, the industry has slowly been adopting a more integrated approach which immediately captures the constraints imposed by the infrastructure on the network. Basis As the aim of an IAM is to provide a production forecast which honours both the physical realities of the reservoir and the infrastructure, it needs to contain the following elements: a pressure network; a subsurface saturation model; an availability model; a constraint manager; and a production optimisation algorithm. Some but not all models also contain an economics and risk model component so that the IAM can be used for economic evaluation. IAM vs. IPM The term Integrated Asset Modeling was first used by British Petroleum (BP), and the term remains in use to this day. Integrated asset modeling links individual simulators across technical disciplines, assets, computing environments, and locations. This collaborative methodology represents a shift in oil and gas field management, moving it toward a holistic management approach and away from disconnected teams working in isolation. The open framework of SLB's Integrated Asset Modeling (IAM) software enables the coupling of a wide number of simulation software applications, including reservoir simulation models (Eclipse, Intersect, MBX, IMEX, MBAL), multiphase flow simulation models (Pipesim, Olga, GAP), process and facilities simulation models (Symmetry, HYSYS, Petro-sim, UniSim) and economic domain models (Merak Peep). Historically the terms Integrated Production Modeling and Integrated Asset Modeling have been used interchangeably. The modern use of Integrated Production Modeling was coined when Petroleum Experts Ltd. joined their MBAL modeling software with their GAP and Prosper modeling software to form an Integrated Production Model. Benefits of Integrated Asset Modelling Having an IAM built of a
Slickline
https://en.wikipedia.org/wiki/Slickline
Slickline refers to a single-strand wire used to run a variety of tools down into the wellbore for several purposes. It is used during well drilling operations in the oil and gas industry. In general, it can also describe a niche of the industry that involves using a slickline truck or doing a slickline job. Slickline looks like a long, smooth, unbraided wire, often shiny and silver/chrome in appearance. It comes in varying lengths, according to the depth of wells in the area where it is used (it can be ordered to specification), up to 35,000 feet in length. It is used to lower and raise downhole tools used in oil and gas well maintenance to the appropriate depth of the drilled well. In use, it is spooled off a drum at the back of the slickline truck and over the wireline sheave, a round grooved wheel sized to accept a specified line and positioned to redirect the line to another sheave that allows the slickline to enter the wellbore. Slickline is used to lower downhole tools into an oil or gas well to perform a specified maintenance job downhole. Downhole refers to the area in the pipe below surface, the pipe being either the casing cemented in the hole by the drilling rig (which keeps the drilled hole from caving in and keeps pressure from the various oil or gas zones downhole from feeding into one another) or the tubing, a smaller-diameter pipe hung inside the casing. Uses Slickline is more commonly used in production tubing. The wireline operator monitors the slickline tension at surface via a weight-indicator gauge, and the depth via a depth counter 'zeroed' from surface, lowers the downhole tool to the proper depth, completes the job by manipulating the downhole tool mechanically, checks to make sure it worked if possible, and pulls the tool back out by winding the slickline back onto the drum it was spooled from. The slickline drum is controlled by a hydraulic pump, which in turn is controlled by the slickline operator. Slickline comes in different sizes and grades. A larger size and higher grade generally mean that a higher line tension can be pulled before the line snaps at its weakest spot and causes a costly 'fishing' job. Downhole tools can get stuck because of malfunctions or downhole conditions, including sand, scale, salt, asphaltenes, and other well byproducts settling on or loosening off the pipe walls because of agitation either by the downhole tools or by a change in downhole inflow; sometimes i
Estimated pore pressure
https://en.wikipedia.org/wiki/Estimated_pore_pressure
Estimated pore pressure, as used in the oil industry and mud logging, is an approximation of the amount of force being exerted into the borehole by fluids or gases within the formation that has been penetrated. In the oil industry, estimated pore pressure is measured in pounds per square inch (psi), but is converted to an equivalent mud weight, measured in pounds per gallon (lb/gal), to more easily determine the mud weight required to prevent the fluid or gas from escaping and causing a blowout or wellbore failure.
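A minimal sketch of that psi-to-mud-weight conversion, using the standard oilfield relation P = 0.052 × MW × TVD (derived in the pore pressure gradient entry below); the pressure and depth values are illustrative assumptions.

```python
def equivalent_mud_weight(pore_pressure_psi, tvd_ft):
    """Convert a pore pressure (psi) at true vertical depth (ft)
    to equivalent mud weight (lb/gal) via P = 0.052 * MW * TVD."""
    return pore_pressure_psi / (0.052 * tvd_ft)

# Illustrative values: 5,200 psi estimated at 10,000 ft TVD.
print(equivalent_mud_weight(5200.0, 10000.0))  # -> 10.0 lb/gal
```

A mud weight at or slightly above this equivalent value is what balances the formation pressure and keeps the well from flowing.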
Flow line
https://en.wikipedia.org/wiki/Flow_line
A flow line, used on a drilling rig, is a large-diameter pipe (typically a section of casing) that is connected to the bell nipple (under the drill floor) and extends to the possum belly (on the mud tanks), acting as a return line that carries the drilling fluid back to the mud tanks as it comes out of the hole. Possum Belly The possum belly is used to slow the flow of returning drilling fluid before it hits the shale shakers. This enables the shale shakers to clean the cuttings out of the drilling fluid before it is returned to the pits for circulation. Sample Box Another common add-on is the sample box. A heavy-duty rubber hose is inserted at the end of the flow line, with its other end emplaced into the sample box itself. The sample box is used to capture samples of drill cuttings for geological logging. The box is typically equipped with a raising door that allows the water and cuttings to escape after a sample is collected. Stinger Line A stinger line is similar to a flow line, but unlike a flow line it is not used to maintain circulation. The stinger line is attached to the blowout preventer to allow the pressure from a blowout to be released. The stinger line usually runs parallel to the flow line. See also Drilling rig (petroleum); Flow show
Pore pressure gradient
https://en.wikipedia.org/wiki/Pore_pressure_gradient
Pore pressure gradient is a dimensional petrophysical term used by drilling engineers and mud engineers during the design of drilling programs for drilling (constructing) oil and gas wells into the earth. It is the pressure gradient inside the pore space of the rock column from the surface of the ground down to total depth (TD), as compared to the pressure gradient of seawater in deep water. In drilling engineering, the pore pressure gradient is usually expressed in API-type International Association of Drilling Contractors (IADC) physical units of measurement, namely "psi per foot", whereas in pure mathematics the gradient of a scalar function, written grad(f), may not have physical units associated with it. In the well-known formula $P = 0.052 \times \mathrm{MW} \times \mathrm{TVD}$, taught in almost all petroleum engineering courses worldwide, the mud weight (MW) is expressed in pounds per US gallon, the true vertical depth (TVD) is expressed in feet, and 0.052 is a commonly used conversion constant that can be derived by dimensional analysis: $$\frac{\text{lb}}{\text{US gal}} \times \frac{1\ \text{US gal}}{231\ \text{in}^3} \times \frac{12\ \text{in}}{1\ \text{ft}} = \frac{12}{231}\ \frac{\text{lb/in}^2}{\text{ft}} \approx 0.052\ \frac{\text{psi}}{\text{ft}}$$ since 1 psi = 1 lb/in².
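A one-line numeric check of the formula, with illustrative inputs (this is the inverse of the mud-weight conversion sketched in the estimated pore pressure entry above):

```python
def hydrostatic_pressure_psi(mud_weight_ppg, tvd_ft):
    """P (psi) = 0.052 * mud weight (lb/gal) * TVD (ft)."""
    return 0.052 * mud_weight_ppg * tvd_ft

print(hydrostatic_pressure_psi(10.0, 10000.0))  # -> 5200.0 psi
```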
Top drive
https://en.wikipedia.org/wiki/Top_drive
A top drive is a mechanical device on a drilling rig that provides clockwise torque to the drill string to drill a borehole. It is an alternative to the rotary table and kelly drive. It is located at the swivel's place below the traveling block and moves vertically up and down the derrick. Benefits The top drive allows the drilling rig to drill the longer section of a stand of drill pipe in one operation. A rotary-table rig can only drill 30-foot (9.1 m) sections of drill pipe (singles), whereas a top drive can drill 60- to 90-foot (18 to 27 m) stands (doubles and triples respectively, a triple being three joints of drill pipe screwed together), depending on the drilling rig size. Handling longer sections of drill pipe enables a drilling rig to make greater daily progress, because up to 90 feet (27 m) can be drilled at a time, requiring fewer "connections" to add another 30 feet (9.1 m) of drill pipe. Another advantage of top drive systems is time efficiency. When the bit progresses under a kelly drive, the entire string must be withdrawn from the well bore for the length of the kelly in order to add one more length of drill pipe. With a top drive, the drawworks only has to pick up a new stand from the rack and make up two joints. Making fewer and quicker connections reduces the risk of a stuck string from annulus clogging while drilling fluid is not being pumped. Variations Several different kinds of top drives exist, usually classified by the "safe working load" (SWL) of the equipment and the size and type of motor used to rotate the drill pipe. For offshore and heavy-duty use, a 1,000-short-ton unit would be used, whereas a smaller land rig may only require a 500-short-ton device. Various sizes of hydraulic motors, or AC or DC electric motors, are available. Standards The American Petroleum Institute has set standards for top drives in a number of its publications, including: API 8A: Specification for Drilling and Production Hoisting Equipment (effective May 1998, withdrawn February 2013); API 8B: Recommended Practice for Procedures for Inspections, Maintenance, Repair, and Remanufacture of Hoisting Equipment; API 8C: Specification for Drilling and Production Hoisting Equipment. The International Organization for Standardization publishes a standard relating to top drives: ISO 13535: Recommended Practice for Procedures for Inspections, Maintenance, Repair, and Remanufacture of Hoisting Equipment. See also Notabl
Demand factor
https://en.wikipedia.org/wiki/Demand_factor
In telecommunications, electronics and the electrical power industry, the term demand factor refers to the fractional amount of some quantity being used relative to the maximum amount that could be used by the same system. The demand factor is always less than or equal to one. As the amount of demand is a time-dependent quantity, so is the demand factor: $$f_{\text{Demand}}(t) = \frac{\text{Demand}}{\text{Maximum possible demand}}$$ The demand factor is often implicitly averaged over time when the time period of demand is understood from the context. Electrical engineering In electrical engineering the demand factor is taken as a time-independent quantity, where the numerator is the maximum demand in the specified time period instead of the averaged or instantaneous demand: $$f_{\text{Demand}} = \frac{\text{Maximum load in given time period}}{\text{Maximum possible load}}$$ This is the peak in the load profile divided by the full load of the device. Example: if a residence has equipment which could draw 6,000 W when all equipment was drawing a full load, but drew a maximum of 3,000 W in a specified time, then the demand factor = 3,000 W / 6,000 W = 0.5. This quantity is relevant when trying to establish the amount of load for which a system should be rated. In the above example, it would be unlikely that the system would be rated to 6,000 W, even though there may be a slight possibility that this amount of power could be drawn. This is closely related to the load factor, which is the average load divided by the peak load in a specified time period: $$f_{\text{Load}} = \frac{\text{Average load}}{\text{Maximum load in given time period}}$$ See also Capacity factor; List of energy storage projects; Diversity factor; Utilization
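A small sketch of these two ratios computed from a load profile; the sample hourly readings are invented for illustration and chosen to reproduce the 0.5 demand factor from the residence example above.

```python
def demand_factor(load_profile_w, max_possible_load_w):
    """Peak of the load profile divided by the connected (full) load."""
    return max(load_profile_w) / max_possible_load_w

def load_factor(load_profile_w):
    """Average load divided by peak load over the same period."""
    return (sum(load_profile_w) / len(load_profile_w)) / max(load_profile_w)

# Hourly readings in watts for the example residence (invented data).
profile = [500, 800, 1200, 3000, 2500, 900]
print(demand_factor(profile, 6000))  # 3000 / 6000 = 0.5
print(load_factor(profile))          # ~1483 W average / 3000 W peak ~ 0.49
```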
Power-to-X
https://en.wikipedia.org/wiki/Power-to-X
Power-to-X (also P2X and P2Y) refers to electricity conversion, energy storage, and reconversion pathways that use surplus renewable energy. Power-to-X conversion technologies allow for the decoupling of power from the electricity sector for use in other sectors (such as transport or chemicals), possibly using power that has been provided by additional investments in generation. The term is widely used in Germany and may have originated there. The X in the terminology can refer to one of the following: power-to-ammonia, power-to-chemicals, power-to-fuel, power-to-gas (power-to-hydrogen, power-to-methane), power-to-liquid (synthetic fuel), power-to-food, and power-to-heat. Electric vehicle charging, space heating and cooling, and water heating can be shifted in time to match generation, forms of demand response that can be called power-to-mobility and power-to-heat. Collectively, power-to-X schemes which use surplus power fall under the heading of flexibility measures and are particularly useful in energy systems with high shares of renewable generation and/or with strong decarbonization targets. A large number of pathways and technologies are encompassed by the term. In 2016 the German government funded a €30 million first-phase research project into power-to-X options. Power-to-fuel Surplus electric power can be converted to gas fuel energy for storage and reconversion. Direct-current electrolysis of water (efficiency 80–85% at best) can be used to produce hydrogen, which can in turn be converted to methane (CH4) via methanation. Another possibility is converting the hydrogen, along with CO2, to methanol. Both these fuels can be stored and used to produce electricity again, hours to months later. Storage and reconversion of power-to-fuel Hydrogen and methane can be used as downstream fuels, fed into the natural gas grid, or used to make synthetic fuel. Alternatively they can be used as a chemical feedstock, as can ammonia (NH3). Reconversion technologies include gas turbines, combined-cycle plants, reciprocating engines and fuel cells. Power-to-power refers to the round-trip reconversion efficiency. For hydrogen storage, the round-trip efficiency remains limited at 35–50%. Electrolysis is expensive and power-to-gas processes need substantial full-load hours to be economic. However, while the round-trip conversion efficiency of power-to-power is lower than with batteries and electrolysis can be expensive, storage of the fuels themselves is quite inexpensive. This means
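A quick numeric sketch of why hydrogen "power-to-power" round-trip efficiency lands in the 35–50% band quoted above; the individual stage efficiencies are illustrative assumptions, not values from the record.

```python
def round_trip_efficiency(*stage_efficiencies):
    """Overall efficiency is the product of the stage efficiencies."""
    result = 1.0
    for eff in stage_efficiencies:
        result *= eff
    return result

# Illustrative chain: 80% electrolysis, 90% storage/compression,
# 55% fuel-cell reconversion.
print(round_trip_efficiency(0.80, 0.90, 0.55))  # ~0.40, within 35-50%
```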
Stodola's cone law
https://en.wikipedia.org/wiki/Stodola%27s_cone_law
The Law of the Ellipse, or Stodola's cone law, is a method for calculating the highly nonlinear dependence of extraction pressures on flow for a multistage turbine with high backpressure, when the turbine nozzles are not choked. It is important in turbine off-design calculations. Description For Stodola's cone law, consider a multistage turbine, as in the picture. The design calculation is done for the design flow rate ($\dot{m}_0$, the flow expected for the most uptime). The other parameters for design are the temperature and pressure at the stage group intake, $T_0$ and $p_0$ respectively, and the extraction pressure at the stage group outlet, $p_2$ (the symbol $p_1$ is used for the pressure after a stage nozzle; that pressure does not enter the relations here). For off-design calculations, the off-design flow rate is $\dot{m}_{01}$, and the temperature and pressure at the stage group intake are $T_{01}$ and $p_{01}$
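The record is cut off before stating the law itself. For orientation, the relation commonly given for an unchoked stage group, in the notation above (with $p_{21}$ the off-design extraction pressure), is:

$$\frac{\dot{m}_{01}}{\dot{m}_0} = \sqrt{\frac{T_0}{T_{01}}}\;\sqrt{\frac{p_{01}^2 - p_{21}^2}{p_0^2 - p_2^2}}$$

This is the "ellipse" of the name: for fixed intake conditions, plotting flow against extraction pressure traces a quarter-ellipse, and the family of such curves over varying intake pressure forms a cone.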
Turbine
https://en.wikipedia.org/wiki/Turbine
A turbine (from the Greek τύρβη, tyrbē, or Latin turbo, meaning vortex) is a rotary mechanical device that extracts energy from a fluid flow and converts it into useful work. The work produced can be used for generating electrical power when combined with a generator. A turbine is a turbomachine with at least one moving part called a rotor assembly, which is a shaft or drum with blades attached. Moving fluid acts on the blades so that they move and impart rotational energy to the rotor. Gas, steam, and water turbines have a casing around the blades that contains and controls the working fluid. Modern steam turbines frequently employ both reaction and impulse in the same unit, typically varying the degree of reaction and impulse from the blade root to its periphery. History Hero of Alexandria demonstrated the turbine principle in an aeolipile in the first century AD, and Vitruvius mentioned them around 70 BC. Early turbine examples are windmills and waterwheels. The word "turbine" was first applied to this kind of device in 1822 by the French mining engineer Claude Burdin in a memo, "Des turbines hydrauliques ou machines rotatoires à grande vitesse", which he submitted to the Académie royale des sciences in Paris. The word derives from the Latin turbo, meaning "vortex" or "top", and was in use in French to describe certain seashells. However, it was not until 1824 that a committee of the Académie (composed of Prony, Dupin, and Girard) reported favorably on Burdin's memo. Benoît Fourneyron, a former student of Claude Burdin, built the first practical water turbine. Credit for invention of the steam turbine is given both to Anglo-Irish engineer Sir Charles Parsons (1854–1931) for invention of the reaction turbine, and to Swedish engineer Gustaf de Laval (1845–1913) for invention of the impulse turbine. Theory of operation A working fluid contains potential energy (pressure head) and kinetic energy (velocity head). The fluid may be compressible or incompressible. Several physical principles are employed by turbines to collect this energy: impulse turbines change the direction of flow of a high-velocity fluid or gas jet. The resulting impulse spins the turbine and leaves the fluid flow with diminished kinetic energy. There is no pressure change of the fluid or gas in the turbine blades (the moving blades); as in the case of a steam or gas turbine, all the pressure drop takes place in the stationary blades (the nozzles). Before reaching the turbine, the
Shading coil
https://en.wikipedia.org/wiki/Shading_coil
A shading coil or shading ring(also called Frager spire or Frager coil)is one or more turns of electrical conductor(usually copper or aluminum)located in the face of the magnet assembly or armature of an alternating current solenoid. The alternating current in the energized primary coil induces an alternating current in the shading coil. This induced current creates an auxiliary magnetic flux which is 90 degrees out of phase from the magnetic flux created by the primary coil. Because of the 90 degree phase difference between the current in the shading coil and the current in the primary coil, the shading coil maintains a magnetic flux, and hence a force between the armature and the assembly, while the current in the primary coil crosses zero. Without this shading ring, the armature would tend to open each time the main flux goes through zero, creating noise, heat and mechanical damage on the magnet faces; the ring thus reduces bouncing or chatter of relay or power contacts. Shaded-pole AC motors A shaded-pole motor is an AC single phase induction motor. It includes an auxiliary winding composed of a copper ring called a shading ring(or a shading coil if it has more than one turn). The auxiliary winding produces a secondary magnetic flux which, along with the flux from the primary coil, forms a rotating magnetic field suitable for applying torque to and rotating the rotor. These devices are typically used as low-cost motors for microwave oven fans. References External links The short film AC MOTORS AND GENERATORS(1961)is available for free viewing and download at the Internet Archive. The short film AC MOTORS(1969)is available for free viewing and download at the Internet Archive. Engineer's Relay Handbook, 5th edition, published by the Relay and Switch Industry Association(RSIA)formerly NARM Information about relays and the Latching Relay circuit "Harry Porter's Relay Computer", a computer made out of relays "Relay Computer Two", by Jon Stanley
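A minimal numeric sketch of the idea (not from the extract above; in a real solenoid the shaded flux is smaller in amplitude and shifted by somewhat less than 90 degrees):

```python
import numpy as np

# With two equal-amplitude fluxes 90 degrees apart, the total magnetic pull
# (roughly proportional to flux squared) never drops to zero, which is why
# the armature does not chatter. Idealized illustration only.
f = 50.0                                              # assumed mains frequency, Hz
t = np.linspace(0.0, 0.04, 1000)                      # two cycles, in seconds
main_flux = np.sin(2 * np.pi * f * t)                 # flux from the primary coil
shaded_flux = np.sin(2 * np.pi * f * t - np.pi / 2)   # flux delayed by 90 degrees

force = main_flux**2 + shaded_flux**2                 # = 1 everywhere (sin^2 + cos^2)
print(force.min(), force.max())                       # both ~1.0: pull never reaches zero
```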
Power Engineering (magazine)
https://en.wikipedia.org/wiki/Power_Engineering_(magazine)
Power Engineering is a monthly magazine dedicated to professionals in the field of power engineering and power generation. Articles are focused on new developments in power plant design, construction and operation in North America. Power Engineering was published by PennWell Corporation, the largest U.S. publisher of electric power industry books, directories, maps and conferences. In 2018, PennWell was acquired by Clarion Events, a British company owned by The Blackstone Group. Power Engineering International, also published by PennWell, covers Europe, Asia-Pacific, the Middle East and the rest of the world. References External links Official website
Power system protection
https://en.wikipedia.org/wiki/Power_system_protection
Power system protection is a branch of electrical power engineering that deals with the protection of electrical power systems from faults through the disconnection of faulted parts from the rest of the electrical network. The objective of a protection scheme is to keep the power system stable by isolating only the components that are under fault, whilst leaving as much of the network as possible in operation. The devices that are used to protect the power systems from faults are called protection devices. Components Protection systems usually comprise five components: current and voltage transformers to step down the high voltages and currents of the electrical power system to convenient levels for the relays to deal with; protective relays to sense the fault and initiate a trip, or disconnection, order; circuit breakers or RCDs to open/close the system based on relay and autorecloser commands; batteries to provide power in case of power disconnection in the system; and communication channels to allow analysis of current and voltage at remote terminals of a line and to allow remote tripping of equipment. For parts of a distribution system, fuses are capable of both sensing and disconnecting faults. Failures may occur in each part, such as insulation failure, fallen or broken transmission lines, incorrect operation of circuit breakers, short circuits and open circuits. Protection devices are installed with the aims of protecting assets and ensuring continued supply of energy. Switchgear is a combination of electrical disconnect switches, fuses or circuit breakers used to control, protect and isolate electrical equipment. Switches are safe to open under normal load current(some switches are not safe to operate under normal or abnormal conditions), while protective devices are safe to open under fault current. Very important equipment may have completely redundant and independent protective systems, while a minor branch distribution line may have very simple low-cost protection. Types of protection High-voltage transmission network Protection of the transmission and distribution system serves two functions: protection of the plant and protection of the public(including employees). At a basic level, protection disconnects equipment that experiences an overload or a short to earth. Some items in substations such as transformers might require additional protection based on temperature or gas pressure, among others. Generator sets In a power plant, the protectiv
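An illustrative sketch of how a protective relay grades its response (not from the extract above): the widely used IEC 60255 "standard inverse" time-overcurrent characteristic trips faster the larger the fault current is relative to the relay's pickup setting.

```python
def iec_standard_inverse_trip_time(current_a, pickup_a, tms=0.1):
    """IEC 60255 'standard inverse' overcurrent curve (illustration only).
    Returns the trip time in seconds, or None below pickup."""
    ratio = current_a / pickup_a
    if ratio <= 1.0:
        return None                          # below pickup: relay does not trip
    return tms * 0.14 / (ratio**0.02 - 1.0)

# Example settings (assumed): 200 A pickup, time multiplier 0.1.
print(iec_standard_inverse_trip_time(2000.0, 200.0))  # fault at 10x pickup: ~0.3 s
print(iec_standard_inverse_trip_time(400.0, 200.0))   # mild overload at 2x: ~1.0 s
```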
Cheng cycle
https://en.wikipedia.org/wiki/Cheng_cycle
The Cheng cycle is a thermodynamic cycle which uses a combination of two working fluids, one gas and one steam. It can therefore be considered a combination of the Brayton cycle and the Rankine cycle. It was named for Dr. Dah Yu Cheng. The company founded by Dr. Cheng has developed systems in partnership with both GM and GE turbine manufacturers to take advantage of the Cheng cycle by modification of existing turbine designs before construction. The Cheng cycle involves the heated exhaust gas from the turbine being used to make steam in a heat recovery steam generator(HRSG). The steam so produced is injected into the gas turbine's combustion chamber to increase power output. The process can be thought of as a parallel combination of the gas turbine Brayton cycle and a steam turbine Rankine cycle. The cycle was invented by Prof. Dah Yu Cheng of the University of Santa Clara, who patented it in 1976. A turbine fully designed around the Cheng cycle's gas/steam two-fluid flow can achieve a theoretical thermal efficiency of 60%, matching or even exceeding many traditional combined cycle gas turbines that keep the steam Rankine cycle and gas Brayton cycle as separate loops. See also Combined cycle Brayton cycle Rankine cycle Cogeneration References
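For comparison with the separate-loop combined cycle mentioned above, a standard idealized result (an illustration, not from the extract) for a Rankine bottoming cycle fed entirely by Brayton exhaust heat is:

\[ \eta_{CC} = \eta_{B} + (1 - \eta_{B})\,\eta_{R} \]

so, for example, \( \eta_B = 0.38 \) and \( \eta_R = 0.35 \) give \( \eta_{CC} \approx 0.60 \), the same order as the Cheng cycle figure quoted above.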
Generation expansion planning
https://en.wikipedia.org/wiki/Generation_expansion_planning
Generation expansion planning(also known as GEP)is the problem of finding an optimal plan for installing new generation units that satisfies both technical and financial limits. GEP is a challenging problem because of the large scale, long-term and nonlinear nature of generation unit sizing. Due to lack of information, companies have to solve this problem in a risky environment, because the competition between generation companies, each maximizing its own benefit, makes them conceal their strategies. Under such ambiguous conditions, various nonlinear solutions have been proposed to solve this sophisticated problem. These solutions are based on different strategies including: game theory, two-level game models, multi-agent systems, genetic algorithms, particle swarm optimization and so forth. See also Demand response Power system References
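A toy formulation of the planning problem as a linear program (an illustration, not from the extract; the technologies, costs, capacity credits and build limits below are all invented):

```python
from scipy.optimize import linprog

# Two hypothetical candidate technologies must cover a 500 MW peak demand
# at minimum annualized cost. Decision variables: MW of new gas, MW of new wind.
annual_cost = [90.0, 60.0]        # $k per MW-year (assumed)
peak_constraint = [[-1.0, -0.3]]  # -(gas + 0.3*wind) <= -500; wind gets a
peak_rhs = [-500.0]               # 0.3 capacity credit toward peak (assumed)

res = linprog(c=annual_cost, A_ub=peak_constraint, b_ub=peak_rhs,
              bounds=[(0, 400), (0, 1500)])  # build limits in MW (assumed)
print(res.x)  # optimal MW of each technology: [400, ~333]
```

Real GEP models add many more constraints (reliability, fuel, emissions, multi-year timing), which is what drives practitioners toward the nonlinear and game-theoretic methods listed above.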
Flexible AC transmission system
https://en.wikipedia.org/wiki/Flexible_AC_transmission_system
In electrical engineering, a flexible alternating current transmission system(FACTS)is a family of power-electronic based devices designed for use on an alternating current(AC)transmission system to improve and control power flow and support voltage. FACTS devices are alternatives to traditional electric grid solutions and improvements, where building additional transmission lines or substations is not economically or logistically viable. In general, FACTS devices improve power and voltage in three different ways: shunt compensation of voltage(replacing the function of capacitors or inductors), series compensation of impedance(replacing series capacitors)or phase-angle compensation(replacing generator droop-control or phase-shifting transformers). While other traditional equipment can accomplish all of this, FACTS devices utilize power electronics that are fast enough to switch sub-cycle, as opposed to in seconds or minutes. Most FACTS devices are also dynamic and can support voltage across a range rather than just on and off, and are multi-quadrant, i.e. they can both supply and consume reactive power, and sometimes even real power. All of this gives them their "flexible" nature and makes them well-suited for applications with unknown or changing requirements. The FACTS family initially grew out of the development of high-voltage direct current(HVDC)conversion and transmission, which used power electronics to convert AC to direct current(DC)to enable large, controllable power transfers. While HVDC focused on conversion to DC, FACTS devices used the developed technology to control power and voltage on the AC system. The most common type of FACTS device is the static VAR compensator(SVC), which uses thyristors to switch and control shunt capacitors and reactors. History When AC won the war of the currents in the late 19th century, and electric grids began expanding and connecting cities and states, the need for reactive compensation became apparent. While AC offered benefits with transformation and reduced current, the alternating nature of voltage and current led to additional challenges with the natural capacitance and inductance of transmission lines. Heavily loaded lines consumed reactive power due to the line's inductance, and as transmission voltage increased throughout the 20th century, the higher voltage supplied capacitive reactive power. As operating a transmission line only at its surge impedance loading(SIL)was not feasible,
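The leverage of these devices can be read off the standard lossless-line power-transfer relation (an illustration, not from the extract):

\[ P = \frac{V_1 V_2}{X} \sin\delta, \qquad Q_{shunt} = \frac{V^2}{X_C} \]

Series compensation lowers the effective reactance \( X \), raising the transferable power \( P \); a shunt device injecting reactive power \( Q_{shunt} \) supports the terminal voltages \( V_1, V_2 \); and phase-angle devices act on \( \delta \).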
Power engineering software
https://en.wikipedia.org/wiki/Power_engineering_software
Power engineering software is software used to create models, analyze or calculate the design of power stations, overhead power lines, transmission towers, electrical grids, grounding and lightning systems, and others. It is a type of application software used for power engineering problems which are transformed into mathematical expressions. History The first software program for power engineering was created by the end of the 1960s for the purpose of monitoring power plants. In the following decades, power engineering and computer technologies developed very fast. Software programs were created to collect data for power plants. One of the first computer languages used in nuclear plants and thermal plants was C(programming language). IPSA, one of the first power systems analysis programs to feature a graphical user interface, was designed in the mid-1970s. Other platforms for electrical power modelling were created by the end of the 1980s. Currently, the programming language Python, commonly used in French nuclear plants, is used to write energy-efficient algorithms and software programs. Classification Power Plants Analysis Software The early 2000s saw the rapid development of analytical programming and 3D modeling. Software products were being created for designing power plants and their elements and connections. The programs were based on mathematical algorithms and computations. Power software such as IPSA, SKM, CYME, DINIS, PSS/E, DIgSILENT and ETAP are pioneers in the category of power engineering software. Most of these products used MARKAL, ESME and other modelling methods. The transmission lines were designed according to minimum requirements set out in the SQSS(security and quality of supply standard). This also applies to other elements of the power systems. In the software world, many CAD software products for 2D and 3D electrical designs were developed. Renewable Energy Controller Software The controllers of renewable energy systems use different software. The digital controllers are of different types: ADC, DAC, 4-bit, 8-bit, 16-bit, and many others. To date, the controllers are mostly programmed with computer languages like C, C++, Java and others. Power Engineering Protection Software Another kind of software is one for simulating security systems for power systems and power plants. Such software simulates the activation of the various types of protections, which protect the transformers, power lines and other components. It graphs the differ
Grid oscillation
https://en.wikipedia.org/wiki/Grid_oscillation
Grid oscillations are oscillations in an electric grid manifesting themselves in low-frequency(mostly below 1 Hz)periodic changes of the power flow. These oscillations are a natural effect of the negative feedback used in the power system control algorithms. During the normal operation of the power grid, these oscillations, triggered by some change in the system, decay with time(are "damped" within a few tens of seconds), and are mostly not noticeable. If the damping in the system is not sufficient, the amplitude of oscillations can grow, eventually leading to a blackout. For example, shortly before the 1996 Western North America blackouts the grid after each disturbance was oscillating with a frequency of 0.26 Hz for about 30 seconds. At some point a sequence of faults and operations of automatic protection relays caused loss of damping, eventually breaking the system into disconnected "islands" with many customers losing power. Other notable events involving oscillations were the Northeast blackout of 2003 and the 2009 subsynchronous oscillations in Texas. While the theory and calculation tools for analyzing oscillations are available, pinpointing the source of instability in a real grid is frequently difficult as of the early 2020s. The oscillations are a normal occurrence, yet a difference in flow as small as 10 MW is known to occasionally push the system from the stable mode with decaying oscillations into a situation where their amplitudes grow with time. The system operator frequently gets no warning that the grid is close to its damping limit. Underdamping The primary cause of the oscillations is damping that is too low. The following conditions typically lead to weak damping: high power transmission over long distances; high-power networks interconnected by weak tie lines; fast-feedback automatic voltage control. High penetration of inverter-based resources has exacerbated grid stability issues, including the oscillations(in addition to subcycle overvoltage and AC overcurrent). In some cases, high frequency oscillations(hundreds of Hz)were also observed. The oscillations can also occur due to the design of control loops of high-voltage direct current links(HVDC)and static var compensators(SVC). Terminology The North American Electric Reliability Corporation suggested the following classification for the grid oscillations: System(Natural): low-frequency changes in the rotor angle triggered by power imbalance: Local: oscillations of o
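An illustrative sketch of how damping is quantified (not from the extract; the 0.26 Hz mode matches the 1996 event, but the decay rate here is an invented example value):

```python
import numpy as np

# Estimate the damping ratio of a decaying power swing from two successive
# peaks, using the logarithmic decrement.
f = 0.26                                  # oscillation frequency, Hz
sigma = 0.05                              # assumed decay rate, 1/s
t = np.arange(0.0, 30.0, 0.001)
swing = np.exp(-sigma * t) * np.cos(2 * np.pi * f * t)   # power swing, p.u.

period = 1.0 / f
a1 = swing[0]                             # first peak (at t = 0)
a2 = np.interp(period, t, swing)          # next peak, one period later
delta = np.log(a1 / a2)                   # logarithmic decrement
zeta = delta / np.sqrt((2 * np.pi) ** 2 + delta ** 2)
print(f"damping ratio ~ {zeta:.3f}")      # oscillations grow if this reaches zero
```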
Power-flow study
https://en.wikipedia.org/wiki/Power-flow_study
In power engineering, a power-flow study(also known as power-flow analysis or load-flow study)is a numerical analysis of the flow of electric power in an interconnected system. A power-flow study usually uses simplified notations such as a one-line diagram and per-unit system, and focuses on various aspects of AC power parameters, such as voltage, voltage angles, real power and reactive power. It analyzes the power systems in normal steady-state operation. Power-flow or load-flow studies are important for planning future expansion of power systems as well as in determining the best operation of existing systems. The principal information obtained from the power-flow study is the magnitude and phase angle of the voltage at each bus, and the real and reactive power flowing in each line. Commercial power systems are usually too complex to allow for hand solution of the power flow. Special-purpose network analyzers were built between 1929 and the early 1960s to provide laboratory-scale physical models of power systems. Large-scale digital computers replaced the analog methods with numerical solutions. In addition to a power-flow study, computer programs perform related calculations such as short-circuit fault analysis, stability studies(transient and steady-state), unit commitment and economic dispatch. In particular, some programs use linear programming to find the optimal power flow, the conditions which give the lowest cost per kilowatt hour delivered. A load flow study is especially valuable for a system with multiple load centers, such as a refinery complex. The power-flow study is an analysis of the system's capability to adequately supply the connected load. The total system losses, as well as individual line losses, also are tabulated. Transformer tap positions are selected to ensure the correct voltage at critical locations such as motor control centers. Performing a load-flow study on an existing system provides insight and recommendations as to the system operation and optimization of control settings to obtain maximum capacity while minimizing the operating costs. The results of such an analysis are in terms of active power, reactive power, voltage magnitude and phase angle. Furthermore, power-flow computations are crucial for optimal operations of groups of generating units. In terms of its approach to uncertainties, load-flow study can be divided into deterministic load flow and uncertainty-concerned load flow. Deterministic load-flow study does
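A minimal numeric sketch of one classic solution method (not from the extract; the load and line impedance are invented per-unit values):

```python
# Gauss-Seidel power flow for a two-bus system in per-unit. Bus 1 is the
# slack bus; bus 2 serves an assumed load over an assumed line impedance.
z_line = 0.05 + 0.20j
y = 1.0 / z_line            # line admittance
v1 = 1.0 + 0.0j             # slack bus voltage: 1.0 p.u. at 0 degrees
s_load = 0.5 + 0.2j         # load drawn at bus 2 (P + jQ), p.u.

v2 = 1.0 + 0.0j             # flat-start initial guess
for _ in range(50):
    # Standard Gauss-Seidel update for a load (PQ) bus:
    v2 = (1.0 / y) * ((-s_load).conjugate() / v2.conjugate() + y * v1)

print(abs(v2))              # voltage magnitude at the load bus (~0.93 p.u.)
```

Production power-flow programs use the same bus-injection equations but typically solve them with Newton-Raphson on networks of thousands of buses.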
Voltage sag
https://en.wikipedia.org/wiki/Voltage_sag
A voltage sag(U.S. English)or voltage dip(British English)is a short-duration reduction in the voltage of an electric power distribution system. It can be caused by high current demand such as inrush current(starting of electric motors, transformers, heaters, power supplies)or fault current(overload or short circuit)elsewhere on the system. Voltage sags are defined by their magnitude or depth, and duration. A voltage sag happens when the RMS voltage decreases to between 10 and 90 percent of nominal voltage for one-half cycle to one minute. Some references define the duration of a sag as a period of 0.5 cycle to a few seconds, and a longer duration of low voltage would be called a sustained sag. The definition of voltage sag can be found in IEEE 1159, 3.1.73 as "A variation of the RMS value of the voltage from nominal voltage for a time greater than 0.5 cycles of the power frequency but less than or equal to 1 minute. Usually further described using a modifier indicating the magnitude of a voltage variation(e.g. sag, swell, or interruption)and possibly a modifier indicating the duration of the variation(e.g., instantaneous, momentary, or temporary)." Voltage sag in large power system The main goal of the power system is to provide reliable and high-quality electricity for its customers. One of the main measures of power quality is the voltage magnitude. Therefore, monitoring the power system to ensure its performance is one of the highest priorities. However, since power systems are usually grids including hundreds of buses, installing measuring instruments at every single busbar of the system is not cost-efficient. In this regard, various approaches have been suggested to estimate the voltage of different buses merely based on the measured voltage on a few buses. Related concepts The term sag should not be confused with a brownout, which is the reduction of voltage for minutes or hours. The term transient, as used in power quality, is an umbrella term and can refer to sags, swells, dropouts, etc. Swell Voltage swell is the opposite of voltage sag. Voltage swell, which is a momentary increase in voltage, happens when a heavy load turns off in a power system. Causes Several factors can cause a voltage sag: Some electric motors draw much more current when they are starting than when they are running at their rated speed. A line-to-ground fault will cause a voltage sag until the protective switchgear(fuse or circuit breaker)operates. Some acc
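An illustrative sketch turning those magnitude/duration definitions into code (not from the extract; the thresholds are an interpretation of the IEEE 1159 wording quoted above, not the standard's full table):

```python
def classify_rms_event(magnitude_pu, duration_s, cycle_s=1/60):
    """Rough classification of an RMS voltage event by magnitude (per-unit
    of nominal) and duration. Illustration only."""
    if duration_s < 0.5 * cycle_s:
        return "transient (too short to be a sag)"
    if magnitude_pu < 0.1:
        return "interruption"
    if magnitude_pu <= 0.9:
        return "sag" if duration_s <= 60 else "sustained sag / undervoltage"
    if magnitude_pu >= 1.1:
        return "swell"
    return "normal"

print(classify_rms_event(0.7, 0.2))   # sag
print(classify_rms_event(1.15, 0.1))  # swell
```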
Active power filter
https://en.wikipedia.org/wiki/Active_power_filter
Active power filters(APF)are filters which can perform the job of harmonic elimination. Active power filters can be used to filter out harmonics in the power system which are significantly below the switching frequency of the filter. Active power filters are used to filter out both higher and lower order harmonics in the power system. The main difference between active power filters and passive power filters is that APFs mitigate harmonics by injecting a compensating current at the same frequency but with reverse phase to cancel that harmonic, whereas passive power filters use combinations of resistors(R), inductors(L)and capacitors(C)and do not require an external power source or active components such as transistors. This difference makes it possible for APFs to mitigate a wide range of harmonics. See also Static synchronous series compensator Power conditioner Active filter Line filter References External links TYPQC-APF Active Power Filter
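A minimal numeric sketch of the cancellation principle (not from the extract; the amplitudes are invented for illustration):

```python
import numpy as np

# Cancel a 5th harmonic by injecting an equal-amplitude, opposite-phase
# component -- the basic idea behind an active power filter.
f = 50.0                                   # assumed fundamental, Hz
t = np.linspace(0.0, 0.02, 2000)           # one fundamental cycle
load_current = np.sin(2*np.pi*f*t) + 0.2*np.sin(2*np.pi*5*f*t)

injected = -0.2 * np.sin(2*np.pi*5*f*t)    # opposite phase at 250 Hz
source_current = load_current + injected   # what the grid actually supplies

residual = source_current - np.sin(2*np.pi*f*t)
print(np.max(np.abs(residual)))            # ~0: only the fundamental remains
```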
Grounding transformer
https://en.wikipedia.org/wiki/Grounding_transformer
A grounding transformer or earthing transformer is a type of auxiliary transformer used in three-phase electric power systems to provide a ground path to either an ungrounded wye or a delta-connected system. Grounding transformers are part of an earthing system of the network. They let three-phase(delta connected)systems accommodate phase-to-neutral loads by providing a return path for current to a neutral. Grounding transformers are typically used to: Provide a relatively low-impedance path to ground, thereby maintaining the system neutral at or near ground potential. Limit the magnitude of transient overvoltages when restriking ground faults occur. Provide a source of ground fault current during line-to-ground faults. Permit the connection of phase-to-neutral loads when desired. Grounding transformers most commonly incorporate a single winding transformer with a zigzag winding configuration, but may also(in rarer cases)be created with a delta-wye transformer. Neutral grounding transformers are very common on generators in power plants and wind farms. Neutral grounding transformers are sometimes applied on high-voltage(sub-transmission)systems, such as at 33 kV, where the circuit would otherwise not have a ground; for example, if a system is fed by a delta-connected transformer. The grounding point of the transformer may be connected through a resistor or arc suppression coil to limit the fault current on the system in the event of a line-to-ground fault. References
Centrifugal pump
https://en.wikipedia.org/wiki/Centrifugal_pump
Centrifugal pumps are used to transport fluids by the conversion of rotational kinetic energy to the hydrodynamic energy of the fluid flow. The rotational energy typically comes from an engine or electric motor. They are a sub-class of dynamic axisymmetric work-absorbing turbomachinery. The fluid enters the pump impeller along or near to the rotating axis and is accelerated by the impeller, flowing radially outward into a diffuser or volute chamber(casing), from which it exits. Common uses include water, sewage, agriculture, petroleum, and petrochemical pumping. Centrifugal pumps are often chosen for their high flow rate capabilities, abrasive solution compatibility, mixing potential, as well as their relatively simple engineering. A centrifugal fan is commonly used to implement an air handling unit or vacuum cleaner. The reverse function of the centrifugal pump is a water turbine converting potential energy of water pressure into mechanical rotational energy. History According to Reti, the first machine that could be characterized as a centrifugal pump was a mud lifting machine which appeared as early as 1475 in a treatise by the Italian Renaissance engineer Francesco di Giorgio Martini. True centrifugal pumps were not developed until the late 17th century, when Denis Papin built one using straight vanes. The curved vane was introduced by British inventor John Appold in 1851. Working principle Like most pumps, a centrifugal pump converts rotational energy, often from a motor, to energy in a moving fluid. A portion of the energy goes into kinetic energy of the fluid. Fluid enters axially through the eye of the casing, is caught up in the impeller blades, and is whirled tangentially and radially outward until it leaves through all circumferential parts of the impeller into the diffuser part of the casing. The fluid gains both velocity and pressure while passing through the impeller. The doughnut-shaped diffuser, or scroll, section of the casing decelerates the flow and further increases the pressure. Description by Euler A consequence of Newton's second law of mechanics is the conservation of the angular momentum(or the moment of momentum), which is of fundamental significance to all turbomachines. Accordingly, the change of the angular momentum is equal to the sum of the external moments. Angular momenta \( \rho Q r c_u \) at inlet and outlet, an external torq
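Completing the truncated passage with the standard turbomachinery result it is heading toward (an illustration, not the article's own wording): applying the angular-momentum balance between impeller inlet (1) and outlet (2) gives Euler's pump equation:

\[ T = \rho Q\,(r_2 c_{u2} - r_1 c_{u1}), \qquad H_{th} = \frac{u_2 c_{u2} - u_1 c_{u1}}{g} \]

where \( \rho \) is the fluid density, \( Q \) the volumetric flow rate, \( r \) the radius, \( c_u \) the tangential component of the absolute velocity, \( u = \omega r \) the blade speed, and \( H_{th} \) the theoretical head.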
Stationary engineer
https://en.wikipedia.org/wiki/Stationary_engineer
A stationary engineer(also called an operating engineer, power engineer or process operator)is a technically trained professional who operates, troubleshoots and oversees industrial machinery and equipment that provide and utilize energy in various forms. The title "power engineer" has different meanings in the United States and in Canada. Stationary engineers are responsible for the safe operation and maintenance of a wide range of equipment including boilers, steam turbines, gas turbines, gas compressors, generators, motors, air conditioning systems, heat exchangers, heat recovery steam generators(HRSGs)that may be directly fired(duct burners)or indirectly fired(gas turbine exhaust heat collectors), hot water generators, and refrigeration machinery, in addition to their associated auxiliary equipment(air compressors, natural gas compressors, electrical switchgear, pumps, etc.). Stationary engineers are trained in many areas, including mechanical, thermal, chemical, electrical, metallurgy, instrumentation, and a wide range of safety skills. They typically work in factories, office buildings, hospitals, warehouses, power generation plants, industrial facilities, and residential and commercial buildings. The use of the title Stationary Engineer predates other engineering designations and is not to be confused with Professional Engineer, a title typically given to design engineers in their given field. The job of today's engineer has been greatly changed by computers and automation as well as the replacement of steam engines on ships and trains. Workers have adapted to the challenges of the changing job market. Today, stationary engineers are required to be significantly more involved with the technical aspect of the job, as many plants and buildings are updated with increasingly more automated systems of control valves and distributed control systems. History The profession of stationary engineering emerged during the Industrial Revolution with the development of steam-powered pumps by Thomas Savery and Thomas Newcomen, which were used to draw water from mines, and the industrial steam engines perfected by James Watt. Railroad engineers operated early steam locomotives and continue to operate trains today, as did marine engineers, who operated the boilers on steamships. The certification and classification of stationary engineers was developed in order to reduce incidents of boiler explosions in the late 19th century. Notable individuals who worked
Barra system
https://en.wikipedia.org/wiki/Barra_system
The Barra system is a passive solar building technology developed by Horazio Barra in Italy. It uses a collector wall to capture solar radiation in the form of heat. It also uses the thermosiphon effect to distribute the warmed air through channels incorporated into the reinforced concrete floors, warming the floors and hence the building. Alternatively, in hot weather, cool nighttime air can be drawn through the floors to chill them in a form of air conditioning. Many successful systems were built in Europe, but Barra seems fairly unknown elsewhere. Passive solar collector To convert the sun's light into heat indirectly, a separate insulated space is constructed on the sunny side of the house walls. Looking at the outside, and moving through a cross section, there is an outside clear layer. This was traditionally built using glass, but with the advent of cheap, robust polycarbonate glazing most designs use twin- or triple-wall polycarbonate greenhouse sheeting. Typically the glazing is designed to pass visible light, but block IR to reduce losses, and block UV to protect building materials. The next layer is an absorption space. This absorbs most of the light entering the collector. It usually consists of an air gap of around 10 cm thickness with one or more absorption meshes suspended vertically in the space. Often window fly screen mesh is used, or horticultural shade cloth. The mesh itself can hold very little heat and warms up rapidly in light. The heat is absorbed by air passing around and through the mesh, and so the mesh is suspended with an air gap on both the front and back sides. Finally, a layer of insulation sits between the absorption space and the house. Usually this is normal house insulation, using materials such as polyisocyanurate foam, rock wool, foil and polystyrene. This collector is very responsive: in the sun it heats up rapidly and the air inside starts to convect. If the collector were to be directly connected to the building using a hole near the floor and a hole near the ceiling, an indirect solar gain system would be created. One problem with this is that, like Trombe walls, the heat would radiate back out at night, and a convection current would chill the room during the night. Instead, the air movement can be stopped using automatic dampers, similar to those used for ventilating foundation spaces in cold climates, or plastic film dampers, which work by blocking air flow in one direction with a very lightweight flap
Sun path
https://en.wikipedia.org/wiki/Sun_path
Sun path, sometimes also called day arc, refers to the daily(sunrise to sunset)and seasonal arc-like path that the Sun appears to follow across the sky as the Earth rotates and orbits the Sun. The Sun's path affects the length of daytime experienced and amount of daylight received along a certain latitude during a given season. The relative position of the Sun is a major factor in the heat gain of buildings and in the performance of solar energy systems. Accurate location-specific knowledge of sun path and climatic conditions is essential for economic decisions about solar collector area, orientation, landscaping, summer shading, and the cost-effective use of solar trackers. Angles Effect of the Earth's axial tilt Sun paths at any latitude and any time of the year can be determined from basic geometry. The Earth's axis of rotation tilts about 23.5 degrees, relative to the plane of Earth's orbit around the Sun. As the Earth orbits the Sun, this creates the 47° declination difference between the solstice sun paths, as well as the hemisphere-specific difference between summer and winter. In the Northern Hemisphere, the winter sun(November, December, January)rises in the southeast, transits the celestial meridian at a low angle in the south(more than 43° above the southern horizon in the tropics), and then sets in the southwest. It is on the south(equator)side of the house all day long. A vertical window facing south(equator side)is effective for capturing solar thermal energy. For comparison, the winter sun in the Southern Hemisphere(May, June, July)rises in the northeast, peaks out at a low angle in the north(more than halfway up from the horizon in the tropics), and then sets in the northwest. There, the north-facing window would let in plenty of solar thermal energy to the house. In the Northern Hemisphere in summer(May, June, July), the Sun rises in the northeast, peaks out slightly south of the overhead point(lower in the south at higher latitude), and then sets in the northwest, whereas in the Southern Hemisphere in summer(November, December, January), the Sun rises in the southeast, peaks out slightly north of the overhead point(lower in the north at higher latitude), and then sets in the southwest. A simple latitude-dependent equator-side overhang can easily be designed to block 100% of the direct solar gain from entering vertical equator-facing windows on the hottest days of the year. Roll-down exterior shade screens, interior translucent-or
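A minimal sketch of the underlying geometry (not from the extract; Cooper's well-known approximation for solar declination, with latitude and days chosen as examples):

```python
import math

def declination_deg(day_of_year):
    """Approximate solar declination in degrees (Cooper's formula)."""
    return 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))

def noon_altitude_deg(latitude_deg, day_of_year):
    """Altitude of the sun above the horizon at solar noon."""
    return 90.0 - abs(latitude_deg - declination_deg(day_of_year))

# Example: noon sun height at 48 N latitude near the two solstices.
print(noon_altitude_deg(48.0, 172))  # ~65 degrees in late June
print(noon_altitude_deg(48.0, 355))  # ~19 degrees in late December
```

The two outputs differ by about 47°, the solstice-to-solstice declination swing mentioned above.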
Sonnenschiff
https://en.wikipedia.org/wiki/Sonnenschiff
Sonnenschiff(lit. 'sun ship')is a large integrated office and retail building in Freiburg im Breisgau, Germany. It was built in 2004 in the city's Vauban quarter as part of the Solar Settlement at Schlierberg. Sonnenschiff was designed by the architect Rolf Disch(who also built the Heliotrope building)and generates four times more energy than it uses. As a whole this building produces more energy than it consumes per year and utilizes the most up-to-date building technology. Some aspects that make this building particular are its vacuum insulated walls, ventilation with 95% heat recovery, triple paned windows and its energy façade. It is the first positive energy office building worldwide. The office spaces are flanked on both the East and West sides entirely with windows, which maximizes natural lighting and employee views while it minimizes the energy used for artificial lighting. Sonnenschiff includes a supermarket, convenience store and bakery-café on the first floor, offices and work spaces on floors 2–4, and 9 penthouses on its roof. In addition to the office and retail space, there are two conference rooms. Design This steel-framed construction is called the "Sun Ship" as its skyline resembles the silhouette of a ship. Its design allows it to naturally stay cool in the summer and to store heat in the winter. The ground floor is used for high-end retail and commercial space. The next three floors are used as office and commercial space, while nine penthouses on its rooftop offer residential space(112 to 300 sq. meters). This unique integration of retail, commercial and residential space, all with a carbon-free footprint and a positive energy balance, distinguishes this development. Residents Apart from the nine penthouses, Sonnenschiff houses several companies such as the high-end supermarket Alnatura, a DM drug store and such institutions as Ökostrom and the non-profit Öko-Institut. PlusEnergy PlusEnergy is a term coined by Rolf Disch that indicates a structure's extreme energy efficiency so that it holds a positive energy balance, actually producing more energy than it uses. With the completion of his private residence, the Heliotrope, in 1994, Disch had created the first PlusEnergy house. His next goal in its development was thus the mass application of the concept to residential, commercial and retail space. As the concept further developed and gained financial backing as well, Disch built several more projects with PlusEnergy certific
Curtis House, Rickmansworth
https://en.wikipedia.org/wiki/Curtis_House,_Rickmansworth
The Curtis House, Rickmansworth was the first solar house constructed in the United Kingdom. The house, in Rickmansworth, England, was built in 1956 by British architect Edward Curtis, for his own occupation. References External links "Dream House" - British Pathé film on YouTube showing the house
Villa Girasole
https://en.wikipedia.org/wiki/Villa_Girasole
The Villa Girasole(il girasole meaning the sunflower in Italian)is a house constructed in the 1930s in Marcellise, northern Italy, near Verona. The conception of architect Angelo Invernizzi, the Girasole rotates to follow the sun as it moves, just as a sunflower opens up and turns to follow the sun. This is how the unique house got its name. Architect Angelo Invernizzi, a wealthy Italian engineer of Genoa, Italy, dreamed of building a house that would maximize the health properties of the sun by rotating to follow it. He designed the house for himself with the help of Romolo Carapacchi, a mechanical engineer; Fausto Saccorotti, an interior decorator; and Ettore Fagiuoli, an architect. As Invernizzi's daughter, Lidia Invernizzi, described in the 17-minute film Il girasole: una casa vicino a Verona by Marcel Meili and Christoph Schaub, Invernizzi could have built the house himself, but he instead invited many people to participate in its creation: painters, sculptors, furniture makers, and more. People who believed in a new era: nothing should be built as before. Having a family connection to Marcellise, even though working and living in Genoa, he wanted to build the house there in its hilly splendor and with its memories of a simpler life. History and construction Invernizzi first began drawing designs for his rotating house in 1929, but construction started in 1931, only during summer months. Invernizzi and his team used the project as a means to experiment with new materials, like concrete and fibre cement. "In keeping with the project's experimental nature, a considerable amount of adaptation and refinement accompanied construction". They ended up using aluminium sheeting to replace the concrete on the outside walls because the concrete had left cracks. At first, Invernizzi only expected the house to make a 180 degree turn, but eventually after he saw it make the 180 degree turn, he "decided to make the complete turn" of 360 degrees. The project was completed in 1935, after four years. Interior/Exterior The Girasole has two storeys and is shaped like the letter "L". It sits on a circular base of over 44 metres, with a 42 metre tall tower at the centre. This is where the house rotates from, using motors. The "L" rotates "over three circular tracks where 15 trolleys can slide the 5,000-cubic-metre building at a speed of 4 mm per second and it takes 9 hours and 20 minutes to rotate fully". There is a manual control panel located in the
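A quick consistency check of those figures (added here as an illustration): 9 hours 20 minutes is 33,600 s, so at 4 mm per second the trolleys travel

\[ 33{,}600\,\mathrm{s} \times 4\,\mathrm{mm/s} \approx 134\,\mathrm{m} \approx \pi \times 42.8\,\mathrm{m}, \]

a track circumference consistent with the roughly 44-metre circular base quoted above.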
PlusEnergy
https://en.wikipedia.org/wiki/Energy-plus_building
An energy-plus building(also called: plus energy building, plus-energy house, efficiency-plus house)produces more energy from renewable energy sources, over the course of a year, than it imports from external sources. This is achieved using a combination of microgeneration technology and low-energy building techniques, such as: passive solar building design, insulation and careful site selection and placement. A reduction of modern conveniences can also contribute to energy savings; however, many energy-plus houses are almost indistinguishable from a traditional home, preferring instead to use highly energy-efficient appliances, fixtures, etc., throughout the house. "Plusenergihuset"(the plus energy house)was the Danish term used by Jean Fischer in his publication from 1982 about his own energy-plus house. PlusEnergy is a brand name, used by Rolf Disch, to describe a structure that produces more energy than it uses. The term was coined by Disch in 1994 when building his private residence, the Heliotrope, as the first PlusEnergy house in the world. Disch then went on to refine the concepts involved with several more projects built by his company, Rolf Disch Solar Architecture, in order to promote PlusEnergy for wider adoption in residential, commercial and retail spaces. Disch maintains that PlusEnergy is more than just a method of producing environmentally-friendly housing, but also an integrated ecological and architectural concept. As such, PlusEnergy is intended to be superior to low-energy or zero-energy designs such as those of Passivhaus. Technical approach The PlusEnergy approach uses a variety of techniques to produce a building that generates more energy than it consumes. A typical example is to capture heat during the day in order to reduce the need to generate heat over night. This is achieved using large north- and south-facing window areas to allow sunlight to penetrate the structure, reducing the need for energy use from light bulbs. Triple-pane or quadruple-pane windows(U-value = 0.4–0.7 W/(m²·K))trap this heat inside, and the addition of heavy insulation then means the structure is already warm in the evening and therefore needs less heating. In the Sun Ship, a 60,000 sq ft(5,600 m2)commercial, retail and residential PlusEnergy structure, techniques such as phase-changing materials in the walls and vacuum insulation are also used. This permits maximum availability of floor space without compromising efficient insulation. Social and
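Those U-values translate directly into heat loss through the standard conduction relation (a worked illustration, not from the extract; the window area and temperature difference are assumed):

\[ Q = U A \,\Delta T = 0.4\,\tfrac{\mathrm{W}}{\mathrm{m^2\,K}} \times 10\,\mathrm{m^2} \times 20\,\mathrm{K} = 80\,\mathrm{W} \]

so even a large glazed area at the quoted U-values leaks heat at only the scale of a single light bulb on a cold day.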
Passive cooling
https://en.wikipedia.org/wiki/Passive_cooling
Passive cooling is a building design approach that focuses on heat gain control and heat dissipation in a building in order to improve the indoor thermal comfort with low or no energy consumption. This approach works either by preventing heat from entering the interior(heat gain prevention)or by removing heat from the building(natural cooling). Natural cooling utilizes on-site energy, available from the natural environment, combined with the architectural design of building components(e.g. building envelope), rather than mechanical systems, to dissipate heat. Therefore, natural cooling depends not only on the architectural design of the building but on how the site's natural resources are used as heat sinks(i.e. everything that absorbs or dissipates heat). Examples of on-site heat sinks are the upper atmosphere(night sky), the outdoor air(wind), and the earth/soil. Passive cooling is an important tool for design of buildings for climate change adaptation, reducing dependency on energy-intensive air conditioning in warming environments. Overview Passive cooling covers all natural processes and techniques of heat dissipation and modulation without the use of energy. Some authors consider that minor and simple mechanical systems(e.g. pumps and economizers)can be integrated in passive cooling techniques, as long as they are used to enhance the effectiveness of the natural cooling process. Such applications are also called 'hybrid cooling systems'. The techniques for passive cooling can be grouped in two main categories: Preventive techniques that aim to provide protection and/or prevention of external and internal heat gains. Modulation and heat dissipation techniques that allow the building to store and dissipate heat gain through the transfer of heat from heat sinks to the climate. This technique can be the result of thermal mass or natural cooling. Preventive techniques Protection from or prevention of heat gains encompasses all the design techniques that minimize the impact of solar heat gains through the building's envelope and of internal heat gains generated inside the building due to occupancy and equipment. It includes the following design techniques: Microclimate and site design-By taking into account the local climate and the site context, specific cooling strategies can be selected to apply which are the most appropriate for preventing overheating through the envelope of the building. The microclimate can play a huge role in dete
Helsinki Central Library Oodi
https://en.wikipedia.org/wiki/Helsinki_Central_Library_Oodi
The Helsinki Central Library Oodi(Finnish: Helsingin keskustakirjasto Oodi; Swedish: Helsingfors centrumbibliotek Ode), commonly referred to as Oodi(lit. 'ode'), is a public library in Helsinki, Finland. The library is situated in the Kluuvi district, close to Helsinki Central Station and next to Helsinki Music Centre and Kiasma Museum of Contemporary Art. Despite its name, the library is not the main library in the Helsinki City Library system, which is located in Pasila instead; "central" refers to its location in the city centre. History A design competition in 2012 to build the library was won by the Finnish architectural firm ALA Architects, with structural design by Ramboll Finland. ALA Architects won the commission over 543 other competitors. The library was planned to be a three-story building and to include a sauna(which hasn't materialised as of 2021)and a ground-floor movie theatre. In January 2015, the Helsinki City Council voted 75–8 to launch the building project. The estimated cost of the new library was €98 million, of which the state agreed to pay €30 million in connection with the centenary of Finland's independence in 2017. The City of Helsinki budgeted €66 million for the building. On 31 December 2016, it was announced that the new library would be named Oodi in Finnish and Ode in Swedish. The name was selected from a pool of some 1,600 names proposed by the public. According to Helsinki Deputy City Director Ritva Viljanen, "Oodi" was chosen because it's easy to remember, easy to say, and easy to translate. The selection jury also did not want to name the new library after a person. The library was built in the Töölönlahti district next to Helsinki Music Centre and Kiasma Museum of Contemporary Art and inaugurated on 5 December 2018 on the eve of the Finnish Independence Day. Awards In 2019, the International Federation of Library Associations(IFLA)named Oodi the best Public Library of the Year. Services Specially designed robots transport books to the third floor, which has a 17,200-square-metre(185,000 sq ft)area designated for books. The rest of the space is designed for meetings and events. The National Audiovisual Institute(KAVI)organizes regular archival film screenings at the Kino Regina cinema, located since 2019 in the Helsinki Central Library Oodi. Energy use and environmental impact The building is regarded as very energy-efficient due to its use of local materials and its use of sunlight. The building uses pas
Solar Settlement at Schlierberg
https://en.wikipedia.org/wiki/Solar_Settlement_at_Schlierberg
The Solar Settlement at Schlierberg(German: Solarsiedlung am Schlierberg)is a 59-home PlusEnergy housing community in Freiburg, Germany. Solar architect Rolf Disch wanted to apply his PlusEnergy concept, created originally with his Heliotrope home, to mass residential production. The residential complex won awards, including House of the Year(2002), Residential PV solar integration award(2002), and "Germany's most beautiful housing community"(2006). It is one of the first housing communities in the world in which all the homes produce a positive energy balance and which is emissions-free and CO2 neutral. Location The Solar Settlement at Schlierberg is a 59-home PlusEnergy housing community at Elly-Heuss-Knapp-Strasse/Rosa-Luxemburg-Strasse adjacent to the Vauban quarter about 3 km from Freiburg city centre in South West Germany. Five rows of terraced houses with a Southern orientation are grouped to the left and right of a central access road, housing about 170 residents. Buildings Construction began in 1999 and the settlement was completed in 2006. The houses contain 2–3 floors and were built with ecological building materials via wooden post-and-beam construction from regional forests and prefabricated individual modules, PVC-free, and environmentally friendly insulation materials. Apartment sizes are from 81 to 210 m2 and are rental and owner-occupied. An underground parking lot keeps the street car-free. Energy The houses are oriented to the South for optimal passive and active use of solar energy. Thermal insulation is used according to the passive house standard, including glazing of the main facades with a U-value of 0.5, resulting in a heat requirement of only 11–14 kilowatt hours per m2 per year, which as of 2012 cost about €200(including maintenance costs)per year. Each house has a decentralized ventilation system with heat recovery. The settlement is connected to a local heating network. The south facing roofs are covered with photovoltaic modules with a generation potential of 445 kWp across the entire site. As of 2022, it is the largest residential roof-integrated photovoltaic system. PlusEnergy is a concept developed by Rolf Disch denoting a "structure's extreme energy efficiency so that it holds a positive energy balance", producing more energy than it uses. In 1994, Disch had created the first PlusEnergy house in the world with the completion of his private residence, the Heliotrope. According to Disch "PlusEnergy is a fundamental environme
Passive solar building design
https://en.wikipedia.org/wiki/Passive_solar_building_design
In passive solar building design, windows, walls, and floors are made to collect, store, reflect, and distribute solar energy, in the form of heat in the winter, and to reject solar heat in the summer. This is called passive solar design because, unlike active solar heating systems, it does not involve the use of mechanical and electrical devices. The key to designing a passive solar building is to best take advantage of the local climate by performing an accurate site analysis. Elements to be considered include window placement and size, glazing type, thermal insulation, thermal mass, and shading. Passive solar design techniques can be applied most easily to new buildings, but existing buildings can be adapted or "retrofitted". Passive energy gain Passive solar technologies use sunlight without active mechanical systems(as contrasted to active solar, which uses thermal collectors). Such technologies convert sunlight into usable heat(in water, air, and thermal mass), cause air-movement for ventilating, or store heat for future use, with little use of other energy sources. A common example is a solarium on the equator-side of a building. Passive cooling is the use of similar design principles to reduce summer cooling requirements. Some passive systems use a small amount of conventional energy to control dampers, shutters, night insulation, and other devices that enhance solar energy collection, storage, and use, and reduce undesirable heat transfer. Passive solar technologies include direct and indirect solar gain for space heating, solar water heating systems based on the thermosiphon, use of thermal mass and phase-change materials for slowing indoor air temperature swings, solar cookers, the solar chimney for enhancing natural ventilation, and earth sheltering. More widely, solar technologies include the solar furnace, but this typically requires some external energy for aligning its concentrating mirrors or receivers, and it has historically not proven to be practical or cost effective for widespread use. 'Low-grade' energy needs, such as space and water heating, have proven over time to be better applications for passive use of solar energy. As a science The scientific basis for passive solar building design has been developed from a combination of climatology, thermodynamics(particularly heat transfer: conduction(heat), convection, and electromagnetic radiation), fluid mechanics/natural convection(passive movement of air and water without the use of electricity
Mahoney tables
https://en.wikipedia.org/wiki/Mahoney_tables
The Mahoney tables are a set of reference tables used in architecture, used as a guide to climate-appropriate design. They are named after architect Carl Mahoney, who worked on them together with John Martin Evans and Otto Königsberger. They were first published in 1971 by the United Nations Department of Economic and Social Affairs. The concept developed by Mahoney(1968)in Nigeria provided the basis of the Mahoney Tables, later developed by Koenigsberger, Mahoney and Evans(1970), published by the United Nations in English, French and Spanish, with large sections included in the widely distributed publication by Koenigsberger et al.(1978). The Mahoney Tables(Evans, 1999; Evans, 2001)proposed a climate analysis sequence that starts with the basic and widely available monthly climatic data of temperature, humidity and rainfall, such as that found in HMSO(1958)and Pearce and Smith(1990), or data published by national meteorological services, for example SMN(1995). Today, the data for most major cities can be downloaded directly from the Internet(from sites such as http://www.wunderground.com/global/AG.html, 2006). The tables use readily available climate data and simple calculations to give design guidelines, in a manner similar to a spreadsheet, as opposed to detailed thermal analysis or simulation. There are six tables; four are used for entering climatic data, for comparison with the requirements for thermal comfort; and two for reading off appropriate design criteria. A rough outline of the table usage is: Air Temperatures. The max, min, and mean temperatures for each month are entered into this table. Humidity, Precipitation, and Wind. The max, min, and mean figures for each month are entered into this table, and the conditions for each month classified into a humidity group. Comparison of Comfort Conditions and Climate. The desired max/min temperatures are entered, and compared to the climatic values from table 1. A note is made if the conditions create heat stress or cold stress(i.e. the building will be too hot or cold). Indicators(of humid or arid conditions). Rules are provided for combining the stress(table 3)and humidity groups(table 2)to check a box classifying the humidity and aridity for each month. For each of six possible indicators, the number of months where that indicator was checked are added up, giving a yearly total. Schematic Design Recommendations. The yearly totals in table 4 correspond to rows in this table, lis
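A minimal sketch of the spreadsheet-like step in table 2 (not from the extract; the thresholds below are the commonly cited Mahoney humidity-group boundaries and should be treated as an assumption to verify against the published tables):

```python
def mahoney_humidity_group(mean_relative_humidity_pct):
    """Assign a month to a Mahoney humidity group from its mean relative
    humidity. Thresholds are an assumption; check the published tables."""
    rh = mean_relative_humidity_pct
    if rh < 30:
        return 1
    if rh < 50:
        return 2
    if rh < 70:
        return 3
    return 4

# Example: classify twelve months of mean RH data for a hypothetical site.
monthly_rh = [25, 28, 35, 48, 55, 68, 75, 80, 72, 60, 40, 30]
print([mahoney_humidity_group(rh) for rh in monthly_rh])
```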
Heliotrope (building)
https://en.wikipedia.org/wiki/Heliotrope_(building)
The Heliotrope is an environmentally friendly housing project by German architect Rolf Disch. There are three such buildings in Germany. The first experimental version was built in 1994 as the architect's home in Freiburg im Breisgau, while the other two were used as exhibition buildings for the Hansgrohe company in Offenburg and a dentist's lab in Hilpoltstein in Bavaria. Several different energy generation modules are used in the building, including a 603 sq ft(56.0 m2)dual-axis solar photovoltaic tracking panel, a geothermal heat exchanger, a combined heat and power unit(CHP)and solar-thermal balcony railings to provide heat and warm water. These innovations, along with the favorable insulation of the residence, allow the Heliotrope to capture anywhere between four and six times its energy usage depending on the time of year. The Heliotrope is also fitted with a grey-water cleansing system and built-in natural waste composting. At the same time that Freiburg's Heliotrope was built, Hansgrohe contracted Disch's architecture practice to design and build another Heliotrope to be used as a visitors center and showroom in Offenburg, Germany. A third one was then contracted and built in Hilpoltstein, Bavaria to be used as a technical dental laboratory. Disch's unique design accommodates different utilizations from private residences to laboratories, and nevertheless maintains the structure's positive energy balance. Disch also designed the Sonnenschiff office complex. PlusEnergy PlusEnergy is a concept coined by Rolf Disch that indicates a structure's energy efficiency. A PlusEnergy building holds a positive energy balance, generating more energy than it uses. The first Heliotrope Disch built in 1994 was the first house to be PlusEnergy certified. Disch built several more projects with PlusEnergy certifications with the goal of bringing the concept to the residential, commercial and retail space. PlusEnergy is a fundamental environmental imperative, Disch claims. Disch believes that passive building is not enough because passive homes still emit CO2 into the atmosphere. Environment and energy needs The house is designed to face the sun with its triple-pane windows(U=0.5)during the heating months of the year and turn its highly insulated back(U=0.12)to the sun during the warmer months when heating isn't necessary. This significantly reduces heating and cooling requirements for the building throughout the year, which are provided for by a heat
Smart glass
https://en.wikipedia.org/wiki/Smart_glass
Smart glass, also known as switchable glass, dynamic glass, and smart-tinting glass, is a type of glass that can change its optical properties, becoming opaque or tinted, in response to electrical or thermal signals. This can be used to prevent sunlight and heat from entering a building during hot days, improving energy efficiency. It can also be used to conveniently provide privacy or visibility to a room. There are two primary classifications of smart glass: active or passive. The most common active glass technologies used today are electrochromic, liquid crystal, and suspended particle devices(SPD). Thermochromic and photochromic are classified as passive technologies. When installed in the envelope of buildings, smart glass helps to create climate adaptive building shells, whose benefits include natural light adjustment, visual comfort, UV and infrared blocking, reduced energy use, thermal comfort, resistance to extreme weather conditions, and privacy. Some smart windows can self-adapt to heat or cool for energy conservation in buildings. Smart windows can eliminate the need for blinds, shades or window treatments. Some effects can be obtained by laminating smart film or switchable film onto flat surfaces using glass, acrylic or polycarbonate laminates. Some types of smart films can be applied to existing glass windows using either a self-adhesive smart film or special glue. Spray-on methods for applying clear coatings to block heat and conduct electricity are also under development. History The term "smart window" originated in the 1980s. It was introduced by Swedish materials physicist Claes-Göran Granqvist of Chalmers University of Technology, who was brainstorming ideas for making building materials more energy efficient with scientists from Lawrence Berkeley National Laboratory in California. Granqvist used the term to describe a responsive window capable of dynamically changing its tint. Electrically switchable smart glass The following table shows an overview of the different electrically switchable smart glass technologies: Electrochromic devices Electrochromic devices change light transmission properties in response to voltage and thus allow control over the amount of light and heat passing through. In electrochromic windows, the material changes its opacity. A burst of electricity is required for changing its opacity, but the material maintains its shade with little to no additional electrical signals. Old electrochromic
Sunroom
https://en.wikipedia.org/wiki/Sunroom
A sunroom, also frequently called a solarium (and sometimes a "Florida room", "garden conservatory", "garden room", "patio room", "sun parlor", "sun porch", "three season room" or "winter garden"), is a room that permits abundant daylight and views of the landscape while sheltering from adverse weather. Sunroom and solarium have the same denotation: solarium is Latin for "place of sun[light]". Solaria of various forms have been erected throughout European history. Currently, the sunroom or solarium is popular in Europe, Canada, the United States, Australia, and New Zealand. Sunrooms may feature passive solar building design to heat and illuminate them. In Great Britain, which has a long history of formal conservatories, a small conservatory is sometimes denominated a "sunroom". In gardening, a garden room is a secluded and partly enclosed outside space within a garden that creates a room-like effect. Design Attached sunrooms typically are constructed of transparent tempered glazing atop a brick or wood "knee wall", or framed entirely of wood, aluminum, or PVC, and glazed on all sides. Frosted glass or glass block may be used to add privacy. Screens are a fundamental aspect of a "Florida room", and jalousie windows are often featured. An integrated sunroom is specifically designed with many windows and climate controls. A solarium is typically distinguished from a sunroom by the former being specifically and primarily designed to collect sunlight for warmth and light, as opposed to being primarily designed to feature scenic views, and by being composed of walls (save one) and a roof that are entirely of framed glass. These typically are erected in higher latitude (low angle of sunlight) or cold (higher altitude) locations. In contrast, a sunroom sensu stricto has an opaque roof. Technologies During the 1960s, professional re-modelling companies developed affordable systems to enclose a patio or deck, offering design, installation, and full service warranties. Patio rooms featured lightweight, engineered roof panels, single pane glass, and aluminium construction. As technology advanced, insulated glass, vinyl, and vinyl-wood composite framework appeared. More recently, specialized blinds and curtains have been developed, many electrically operated by remote control. Specialized flooring, including radiant heat, may be adapted to both attached and integrated sunrooms. See also Arizona room Conservatory (greenhouse) Observation car Porch Smart glass Oranger
Light tube
https://en.wikipedia.org/wiki/Light_tube
Light tubes (also known as solar pipes, tubular skylights or sun tunnels) are structures that transmit or distribute natural or artificial light for the purpose of illumination and are examples of optical waveguides. In their application to daylighting, they are also often called tubular daylighting devices, sun pipes, sun scopes, or daylight pipes. They can be divided into two broad categories: hollow structures that contain the light with reflective surfaces, and transparent solids that contain the light by total internal reflection. Principles of nonimaging optics govern the flow of light through them. Types IR light tubes Manufacturing custom designed infrared light pipes, hollow waveguides and homogenizers is non-trivial. This is because these are tubes lined with a highly polished infrared reflective coating of gold, which can be applied thick enough to permit these tubes to be used in highly corrosive atmospheres. Carbon black can be applied to certain parts of light pipes to absorb IR light (see photonics). This is done to limit IR light to only certain areas of the pipe. While most light pipes are produced with a round cross-section, light pipes are not limited to this geometry. Square and hexagonal cross-sections are used in special applications. Hexagonal pipes tend to produce the most homogenized IR light. The pipes do not need to be straight: bends in the pipe have little effect on efficiency. Light tube with reflective material The first commercial reflector systems were patented and marketed in the 1850s by Paul Emile Chappuis in London, utilizing various forms of angled mirror designs. Chappuis Ltd's reflectors were in continuous production until the factory was destroyed in 1943. The concept was rediscovered and patented in 1986 by Solatube International of Australia. This system has been marketed for widespread residential and commercial use. Other daylighting products are on the market under various generic names, such as "SunScope", "solar pipe", "light pipe", "light tube", and "tubular skylight". A tube lined with highly reflective material leads the light rays through a building, starting from an entrance point located on its roof or one of its outer walls. A light tube is not intended for imaging (in contrast to a periscope, for example); thus image distortions pose no problem and are in many ways encouraged due to the reduction of "directional" light. The entrance point usually comprises a dome (cupola), which has the fu
Passive daytime radiative cooling
https://en.wikipedia.org/wiki/Passive_daytime_radiative_cooling
Passive daytime radiative cooling (PDRC) (also passive radiative cooling, daytime passive radiative cooling, radiative sky cooling, photonic radiative cooling, and terrestrial radiative cooling) is the use of unpowered, reflective/thermally-emissive surfaces to lower the temperature of a building or other object. It has been proposed as a method of reducing temperature increases caused by greenhouse gases by reducing the energy needed for air conditioning, lowering the urban heat island effect, and lowering human body temperatures. PDRCs can aid systems that are more efficient at lower temperatures, such as photovoltaic systems, dew collection devices, and thermoelectric generators. Some estimates propose that dedicating 1–2% of the Earth's surface area to PDRC would stabilize surface temperatures. Regional variations provide different cooling potentials, with desert and temperate climates benefiting more than tropical climates, attributed to the effects of humidity and cloud cover. PDRCs can be included in adaptive systems, switching from cooling to heating to mitigate any potential "overcooling" effects. PDRC applications for indoor space cooling are growing, with an estimated "market size of ~$27 billion in 2025." PDRC surfaces are designed to be high in solar reflectance, to minimize heat gain, and strong in longwave infrared (LWIR) thermal radiation heat transfer matching the atmosphere's infrared window (8–13 μm). This allows the heat to pass through the atmosphere into space. PDRCs leverage the natural process of radiative cooling, in which the Earth cools by releasing heat to space. PDRC operates during daytime. On a clear day, solar irradiance can reach 1000 W/m², with a diffuse component between 50 and 100 W/m². The average PDRC has an estimated cooling power of ~100–150 W/m², proportional to the exposed surface area. PDRC applications are deployed as sky-facing surfaces. Low-cost, scalable PDRC materials with potential for mass production include coatings, thin films, metafabrics, aerogels, and biodegradable surfaces. While typically white, other colors can also work, although generally offering less cooling potential. Research, development, and interest in PDRCs has grown rapidly since the 2010s, attributable to a breakthrough in the use of photonic metamaterials to increase daytime cooling in 2014, along with growing concerns over energy use and global warming. PDRC can be contrasted with traditional compression-based cooling systems (e.g., air conditio
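The quoted figures lend themselves to a quick back-of-envelope check. The Python sketch below (ours, not from the article; the 100 m² roof and the use of the low/high ends of the ~100–150 W/m² range are illustrative assumptions) converts the estimated cooling power into heat rejected per day:

```python
# Rough estimate of daily heat rejection from a sky-facing PDRC surface,
# using the ~100-150 W/m^2 cooling power range quoted above.
# Roof area and a full 24 h duty cycle are illustrative assumptions.

def pdrc_daily_cooling_kwh(area_m2: float, cooling_w_per_m2: float,
                           hours: float = 24.0) -> float:
    """Return heat rejected per day in kWh for a PDRC surface."""
    return area_m2 * cooling_w_per_m2 * hours / 1000.0

for power in (100.0, 150.0):
    print(f"{power:.0f} W/m2 over a 100 m2 roof: "
          f"{pdrc_daily_cooling_kwh(100.0, power):.0f} kWh/day")
# 100 W/m2 -> 240 kWh/day; 150 W/m2 -> 360 kWh/day
```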
Drake Landing Solar Community
https://en.wikipedia.org/wiki/Drake_Landing_Solar_Community
The Drake Landing Solar Community (DLSC) is a planned community in Okotoks, Alberta, Canada, equipped with a central solar heating system and other energy efficient technologies. This heating system is the first of its kind in North America, although much larger systems have been built in northern Europe. The 52 homes (a few variations of size and style, with an average above-grade floor area of 145 m²) in the community are heated with a solar district heating system that is charged with heat originating from solar collectors on the garage roofs and is enabled for year-round heating by underground seasonal thermal energy storage (STES). The system was designed to model a way of addressing global warming and the burning of fossil fuels. The solar energy is captured by 800 solar thermal collectors located on the roofs of all 52 houses' garages. It is billed as the first solar powered subdivision in North America, although its electricity and transportation needs are provided by conventional sources. In 2012 the installation achieved a world record solar fraction of 97%; that is, it provided that share of the community's heating requirements with solar energy over a one-year span. In the 2015–2016 season the installation achieved a solar fraction of 100%. This was achieved by the borehole thermal energy storage system (BTES) finally reaching high temperature after years of charging, as well as by improved control methods: operating pumps at lower speed most of the time, reducing extra energy needs, and using weather forecasts to optimize the transfer of heat between different storage tanks and loops. During some other years, auxiliary gas heaters are used for a small fraction of the year to provide heat to the district loop. The system operates at a coefficient of performance of 30. After nearly 17 years of continuous monitoring (far exceeding the initial 4-year test period), performance analysis and improvements, a significant body of knowledge and experience has been gained about this type of system for Canadian applications. In 2020, the system started showing signs of deterioration, resulting in significant maintenance issues. System components, knowledge, and technical expertise for repairs were becoming increasingly challenging to find. In response to system failures, the Drake Landing Solar Company added redundancies to the system to ensure that homes in the community were receiving heat. After a thorough investigation of available next steps, it was determined that the
Algebra
https://en.wikipedia.org/wiki/Algebra
Algebra is a branch of mathematics that deals with abstract systems, known as algebraic structures, and the manipulation of expressions within those systems. It is a generalization of arithmetic that introduces variables and algebraic operations other than the standard arithmetic operations, such as addition and multiplication. Elementary algebra is the main form of algebra taught in schools. It examines mathematical statements using variables for unspecified values and seeks to determine for which values the statements are true. To do so, it uses different methods of transforming equations to isolate variables. Linear algebra is a closely related field that investigates linear equations and combinations of them called systems of linear equations. It provides methods to find the values that solve all equations in the system at the same time, and to study the set of these solutions. Abstract algebra studies algebraic structures, which consist of a set of mathematical objects together with one or several operations defined on that set. It is a generalization of elementary and linear algebra since it allows mathematical objects other than numbers and non-arithmetic operations. It distinguishes between different types of algebraic structures, such as groups, rings, and fields, based on the number of operations they use and the laws they follow, called axioms. Universal algebra and category theory provide general frameworks to investigate abstract patterns that characterize different classes of algebraic structures. Algebraic methods were first studied in the ancient period to solve specific problems in fields like geometry. Subsequent mathematicians examined general techniques to solve equations independent of their specific applications. They described equations and their solutions using words and abbreviations until the 16th and 17th centuries when a rigorous symbolic formalism was developed. In the mid-19th century, the scope of algebra broadened beyond a theory of equations to cover diverse types of algebraic operations and structures. Algebra is relevant to many branches of mathematics, such as geometry, topology, number theory, and calculus, and other fields of inquiry, like logic and the empirical sciences. Definition and etymology Algebra is the branch of mathematics that studies algebraic structures and the operations they use. An algebraic structure is a non-empty set of mathematical objects, such as the integers, together with algebraic operations
Outline of algebra
https://en.wikipedia.org/wiki/Outline_of_algebra
Algebra is one of the main branches of mathematics, covering the study of structure, relation and quantity. Algebra studies the effects of adding and multiplying numbers, variables, and polynomials, along with their factorization and determining their roots. In addition to working directly with numbers, algebra also covers symbols, variables, and set elements. Addition and multiplication are general operations, but their precise definitions lead to structures such as groups, rings, and fields. Branches Pre-algebra Elementary algebra Boolean algebra Abstract algebra Linear algebra Universal algebra Algebraic equations An algebraic equation is an equation involving only algebraic expressions in the unknowns. These are further classified by degree. Linear equation – algebraic equation of degree one. Polynomial equation – equation in which a polynomial is set equal to another polynomial. Transcendental equation – equation involving a transcendental function of one of its variables. Functional equation – equation in which the unknowns are functions rather than simple quantities. Differential equation – equation involving derivatives. Integral equation – equation involving integrals. Diophantine equation – equation where the only solutions of interest of the unknowns are the integer ones. History History of algebra General algebra concepts Fundamental theorem of algebra – states that every non-constant single-variable polynomial with complex coefficients has at least one complex root. This includes polynomials with real coefficients, since every real number is a complex number with an imaginary part equal to zero. Equations – equality of two mathematical expressions. Linear equation – an algebraic equation with a degree of one. Quadratic equation – an algebraic equation with a degree of two. Cubic equation – an algebraic equation with a degree of three. Quartic equation – an algebraic equation with a degree of four. Quintic equation – an algebraic equation with a degree of five. Polynomial – an algebraic expression consisting of variables and coefficients. Inequalities – a comparison between values. Functions – mapping that associates a single output value with each input value. Sequences – ordered list of elements, either finite or infinite. Systems of equations – finite set of equations. Vectors – element of a vector space. Matrix – two-dimensional array of numbers. Vector space – basic algebraic structure of linear algebra. Field – algebraic structure with addition, mu
Timeline of geometry
https://en.wikipedia.org/wiki/Timeline_of_geometry
The following is a timeline of key developments of geometry: Before 1000 BC ca. 2000 BC – Scotland: carved stone balls exhibit a variety of symmetries including all of the symmetries of Platonic solids. 1800 BC – Moscow Mathematical Papyrus, containing the finding of the volume of a frustum. 1800 BC – Plimpton 322 contains the oldest reference to the Pythagorean triplets. 1650 BC – Rhind Mathematical Papyrus, copy of a lost scroll from around 1850 BC; the scribe Ahmes presents one of the first known approximate values of π at 3.16, the first attempt at squaring the circle, the earliest known use of a sort of cotangent, and knowledge of solving first order linear equations. 1st millennium BC 800 BC – Baudhayana, author of the Baudhayana Sulba Sutra, a Vedic Sanskrit geometric text, contains quadratic equations and calculates the square root of 2 correct to five decimal places. ca. 600 BC – the other Vedic "Sulba Sutras" ("rule of chords" in Sanskrit) use Pythagorean triples, contain a number of geometrical proofs, and approximate π at 3.16. 5th century BC – Hippocrates of Chios utilizes lunes in an attempt to square the circle. 5th century BC – Apastamba, author of the Apastamba Sulba Sutra, another Vedic Sanskrit geometric text, makes an attempt at squaring the circle and also calculates the square root of 2 correct to five decimal places. 530 BC – Pythagoras studies propositional geometry and vibrating lyre strings; his group also discovers the irrationality of the square root of two. 370 BC – Eudoxus states the method of exhaustion for area determination. 300 BC – Euclid in his Elements studies geometry as an axiomatic system, proves the infinitude of prime numbers and presents the Euclidean algorithm; he states the law of reflection in Catoptrics, and he proves the fundamental theorem of arithmetic. 260 BC – Archimedes proves that the value of π lies between 3 + 1/7 (approx. 3.1429) and 3 + 10/71 (approx. 3.1408), that the area of a circle is equal to π multiplied by the square of the radius of the circle, and that the area enclosed by a parabola and a straight line is 4/3 multiplied by the area of a triangle with equal base and height. He also gives a very accurate estimate of the value of the square root of 3. 225 BC – Apollonius of Perga writes On Conic Sections and names the ellipse, parabola, and hyperbola. 150 BC – Jain mathematicians in India write the "Sthananga Sutra", which contains work on the theory of numbers, arithmetical operations, geometry, operations with fractions
History of algebra
https://en.wikipedia.org/wiki/History_of_algebra
Algebra can essentially be considered as doing computations similar to those of arithmetic but with non-numerical mathematical objects. However, until the 19th century, algebra consisted essentially of the theory of equations. For example, the fundamental theorem of algebra belongs to the theory of equations and is not, nowadays, considered as belonging to algebra (in fact, every proof must use the completeness of the real numbers, which is not an algebraic property). This article describes the history of the theory of equations, referred to in this article as "algebra", from the origins to the emergence of algebra as a separate area of mathematics. Etymology The word "algebra" is derived from the Arabic word al-jabr, and this comes from the treatise written in the year 830 by the medieval Persian mathematician al-Khwārizmī, whose Arabic title, Kitāb al-mukhtaṣar fī ḥisāb al-jabr wa-l-muqābala, can be translated as The Compendious Book on Calculation by Completion and Balancing. The treatise provided for the systematic solution of linear and quadratic equations. According to one history, "[i]t is not certain just what the terms al-jabr and muqabalah mean, but the usual interpretation is similar to that implied in the previous translation. The word 'al-jabr' presumably meant something like 'restoration' or 'completion' and seems to refer to the transposition of subtracted terms to the other side of an equation; the word 'muqabalah' is said to refer to 'reduction' or 'balancing', that is, the cancellation of like terms on opposite sides of the equation. Arabic influence in Spain long after the time of al-Khwarizmi is found in Don Quixote, where the word 'algebrista' is used for a bone-setter, that is, a 'restorer'." The term is used by al-Khwārizmī to describe the operations that he introduced, "reduction" and "balancing", referring to the transposition of subtracted terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation. Stages of algebra Algebraic expression Algebra did not always make use of the symbolism that is now ubiquitous in mathematics; instead, it went through three distinct stages. The stages in the development of symbolic algebra are approximately as follows: Rhetorical algebra, in which equations are written in full sentences. For example, the rhetorical form of $x + 1 = 2$ is "The th
Rational difference equation
https://en.wikipedia.org/wiki/Rational_difference_equation
A rational difference equation is a nonlinear difference equation of the form $x_{n+1} = \frac{\alpha + \sum_{i=0}^{k} \beta_i x_{n-i}}{A + \sum_{i=0}^{k} B_i x_{n-i}}$, where the initial conditions $x_0, x_{-1}, \dots, x_{-k}$ are such that the denominator never vanishes for any n. First-order rational difference equation A first-order rational difference equation is a nonlinear difference equation of the form $w_{t+1} = \frac{a w_t + b}{c w_t + d}$.
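As a concrete illustration, the first-order form can be iterated directly. The short Python sketch below is ours, not from the article; the coefficient values are illustrative, chosen so that the recurrence becomes $w_{t+1} = 1 + 1/w_t$, whose orbit converges to the golden ratio:

```python
# Iterate a first-order rational difference equation
# w_{t+1} = (a*w_t + b) / (c*w_t + d).

def iterate_rational(w0: float, a: float, b: float, c: float, d: float,
                     steps: int) -> list[float]:
    """Return the orbit w_0, w_1, ..., stopping early if the denominator
    vanishes (such an initial condition is inadmissible)."""
    orbit, w = [w0], w0
    for _ in range(steps):
        denom = c * w + d
        if denom == 0:
            break  # denominator vanished: orbit undefined beyond this point
        w = (a * w + b) / denom
        orbit.append(w)
    return orbit

# a=b=c=1, d=0 gives w_{t+1} = 1 + 1/w_t -> golden ratio ~1.618
print(iterate_rational(1.0, a=1.0, b=1.0, c=1.0, d=0.0, steps=8))
```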
Algebraic signal processing
https://en.wikipedia.org/wiki/Algebraic_signal_processing
Algebraic signal processing (ASP) is an emerging area of theoretical signal processing (SP). In the algebraic theory of signal processing, a set of filters is treated as an (abstract) algebra, a set of signals is treated as a module or vector space, and convolution is treated as an algebra representation. The advantage of algebraic signal processing is its generality and portability. History In the original formulation of algebraic signal processing by Püschel and Moura, the signals are collected in an $\mathcal{A}$-module for some algebra $\mathcal{A}$ of filters, and filtering is given by the action of $\mathcal{A}$ on the $\mathcal{A}$-module. Definitions Let $K$ be a field, for instance the complex numbers, and $\mathcal{A}$ be a $K$-algebra (i.e. a vector space over $K$ with a binary operation $\ast : \mathcal{A} \otimes \mathcal{A} \to \mathcal{A}$ that is linear in both arguments) treated as a set of filters. Suppose $\mathcal{M}$ is a vector space representing a set of signals. A representation of $\mathcal{A}$ consists of an algebra homomorphism $\rho : \mathcal{A} \to \mathrm{End}(\mathcal{M})$
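In the standard finite, shift-invariant, time-domain instance of this framework, the filter algebra is $\mathcal{A} = \mathbb{C}[x]/(x^n - 1)$, length-n signals form an $\mathcal{A}$-module, and the module action is circular convolution. The Python sketch below is our minimal hand-rolled illustration of that action (it is not taken from the source article):

```python
# Filtering as an algebra action: multiply the filter polynomial h by the
# signal polynomial s modulo x^n - 1, i.e. circular convolution.
# Polynomials are coefficient lists, constant term first.

def cyclic_action(h: list[float], s: list[float]) -> list[float]:
    """Apply filter h (an element of C[x]/(x^n - 1)) to signal s."""
    n = len(s)
    out = [0.0] * n
    for i, hi in enumerate(h):
        for j, sj in enumerate(s):
            out[(i + j) % n] += hi * sj   # exponents wrap: x^n == 1
    return out

# h = 1 + x acting on s = 1 + 2x + 3x^2 + 4x^3
print(cyclic_action([1.0, 1.0], [1.0, 2.0, 3.0, 4.0]))  # [5.0, 3.0, 5.0, 7.0]
```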
AWM–Microsoft Research Prize in Algebra and Number Theory
https://en.wikipedia.org/wiki/AWM%E2%80%93Microsoft_Research_Prize_in_Algebra_and_Number_Theory
The AWM–Microsoft Research Prize in Algebra and Number Theory is a prize given every other year by the Association for Women in Mathematics to an outstanding young female researcher in algebra or number theory. It was funded in 2012 by Microsoft Research and first issued in 2014. Winners Sophie Morel (2014), for her research in number theory, particularly her contributions to the Langlands program, an application of her results on weighted cohomology, and a new proof of Brenti's combinatorial formula for Kazhdan–Lusztig polynomials. Lauren Williams (2016), for her research in algebraic combinatorics, particularly her contributions on the totally nonnegative Grassmannian, her work on cluster algebras, and her proof (with Musiker and Schiffler) of the famous Laurent positivity conjecture. Melanie Wood (2018), for her research in number theory and algebraic geometry, particularly her contributions in arithmetic statistics and tropical geometry, as well as her work with Ravi Vakil on the limiting behavior of natural families of varieties. Melody Chan (2020), in recognition of her advances at the interface between algebraic geometry and combinatorics. Jennifer Balakrishnan (2022), in recognition of her advances in computing rational points on algebraic curves over number fields. Yunqing Tang (2024), for "work in arithmetic geometry, including results on the Grothendieck–Katz $p$-curvature conjecture, a conjecture of Ogus on algebraicity of cycles, arithmetic intersection theory, and the unbounded denominators conjecture of Atkin and Swinnerton-Dyer". See also List of awards honoring women List of mathematics awards References External links AWM–Microsoft Research Prize, Association for Women in Mathematics
Berlekamp–Rabin algorithm
https://en.wikipedia.org/wiki/Berlekamp%E2%80%93Rabin_algorithm
In number theory, Berlekamp's root finding algorithm, also called the Berlekamp–Rabin algorithm, is a probabilistic method of finding roots of polynomials over the field $\mathbb{F}_p$ with $p$ elements. The method was discovered by Elwyn Berlekamp in 1970 as an auxiliary to the algorithm for polynomial factorization over finite fields. The algorithm was later modified by Rabin for arbitrary finite fields in 1979. The method was also independently discovered before Berlekamp by other researchers. History The method was proposed by Elwyn Berlekamp in his 1970 work on polynomial factorization over finite fields. His original work lacked a formal correctness proof and was later refined and modified for arbitrary finite fields by Michael Rabin. In 1986 René Peralta proposed a similar algorithm for finding square roots in $\mathbb{F}_p$. In 2000 Peralta's method was generalized for cubic equations. Statement of problem Let $p$ be an odd prime number. Consider the polynomial $f(x) = a_0 + a_1 x + \cdots + a_n x^n$ over the field $\mathbb{F}_p \simeq \mathbb{Z}/p\mathbb{Z}$ of remainders modulo $p$. The algorithm should find all $\lambda$ in $\mathbb{F}_p$ such that $f(\lambda) = 0$.
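The core trick is that for a random shift δ, the polynomial $(x+\delta)^{(p-1)/2} - 1$ separates the roots of f into two nontrivial factors with good probability. Below is our compact, unoptimized Python sketch of this splitting; the helper names and example polynomial are illustrative, and f is assumed to split into distinct linear factors over $\mathbb{F}_p$ (e.g. after replacing f by $\gcd(f, x^p - x)$):

```python
import random

# Illustrative Berlekamp-Rabin sketch: find all roots in F_p (p an odd
# prime) of a polynomial that splits into distinct linear factors.
# Polynomials are coefficient lists mod p, constant term first.

def trim(f, p):
    f = [c % p for c in f]
    while f and f[-1] == 0:
        f.pop()
    return f

def sub(a, b, p):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return trim([(x - y) % p for x, y in zip(a, b)], p)

def divmod_poly(f, g, p):
    """Quotient and remainder of f by a nonzero g over F_p."""
    f = trim(list(f), p)
    q = [0] * max(1, len(f) - len(g) + 1)
    inv = pow(g[-1], -1, p)
    while len(f) >= len(g):
        c, s = f[-1] * inv % p, len(f) - len(g)
        q[s] = c
        f = sub(f, [0] * s + [c * x % p for x in g], p)
    return trim(q, p), f

def gcd_poly(f, g, p):
    while g:
        f, g = g, divmod_poly(f, g, p)[1]
    inv = pow(f[-1], -1, p)
    return [c * inv % p for c in f]           # monic gcd

def mulmod(a, b, f, p):
    res = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            res[i + j] = (res[i + j] + x * y) % p
    return divmod_poly(res, f, p)[1]

def pow_shift(d, e, f, p):
    """Compute (x + d)^e mod f by square-and-multiply."""
    result, base = [1], trim([d, 1], p)
    while e:
        if e & 1:
            result = mulmod(result, base, f, p)
        base = mulmod(base, base, f, p)
        e >>= 1
    return result

def roots(f, p):
    f = trim(list(f), p)
    if len(f) <= 1:                            # constant: no roots
        return []
    if len(f) == 2:                            # linear: root of f1*x + f0
        return [(-f[0] * pow(f[1], -1, p)) % p]
    while True:
        d = random.randrange(p)                # random shift delta
        h = pow_shift(d, (p - 1) // 2, f, p)   # (x+d)^((p-1)/2) mod f
        g = gcd_poly(f, sub(h, [1], p), p)
        if 1 < len(g) < len(f):                # nontrivial split found
            q, _ = divmod_poly(f, g, p)
            return sorted(roots(g, p) + roots(q, p))

# f = (x - 1)(x - 2)(x - 3) mod 7 = x^3 + x^2 + 4x + 1
print(roots([1, 4, 1, 1], 7))                  # [1, 2, 3]
```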
Board puzzles with algebra of binary variables
https://en.wikipedia.org/wiki/Board_puzzles_with_algebra_of_binary_variables
Board puzzles with algebra of binary variables ask players to locate hidden objects based on a set of clue cells and their neighbors marked as variables (unknowns). A variable with a value of 1 corresponds to a cell with an object. Conversely, a variable with a value of 0 corresponds to an empty cell (no hidden object). Overview These puzzles are based on algebra with binary variables taking a pair of values, for example, (no, yes), (false, true), (not exists, exists), (0, 1). This invites the player to quickly establish equations and inequalities for the solution. Partitioning can be used to reduce the complexity of the problem. Moreover, if the puzzle is prepared in such a way that only a unique solution exists, this fact can be used to eliminate some variables without calculation. The problem can be modeled as binary integer linear programming, which is a special case of integer linear programming. History Minesweeper, along with its variants, is the most notable example of this type of puzzle. Algebra with binary variables Below, the letters in the mathematical statements are used as variables where each can take the value either 0 or 1 only. A simple example of an equation with binary variables is given below: a + b = 0. Here there are two variables a and b but one equation. The solution is constrained by the fact that a and b can take only the values 0 or 1. There is only one solution here: both a = 0 and b = 0. Another simple example is given below: a + b = 2. The solution is straightforward: a and b must both be 1 to make a + b equal to 2. Another interesting case is shown below: a + b + c = 2, a + b ≤ 1. Here, the first statement is an equation and the second statement is an inequality indicating three possible cases: a = 1 and b = 0; a = 0 and b = 1; and a = 0 and b = 0. The last case causes a contradiction on c by forcing c = 2, which is not possible. Therefore, either the first or the second case is correct. This leads to the fact that c must be 1 (a brute-force check appears below). The modification of a large equation into smaller form is not difficult. However, an equation set with binary variables cannot always be solved by applying linear algebra. The following is an example of applying the subtraction of two equations: a + b + c + d = 3, c + d = 1. The first statement has four variables whereas the second statement has only two variables. The latter means that the sum of c and d is 1. Using this fact in the first statement, the equations above can be reduced to a + b = 2, c +
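A quick way to see the forced value of c in the example above is to enumerate all eight 0/1 assignments. A minimal Python sketch (ours, not from the article):

```python
from itertools import product

# Enumerate all 0/1 assignments satisfying the constraints
# a + b + c = 2 and a + b <= 1 from the example above.

solutions = [
    (a, b, c)
    for a, b, c in product((0, 1), repeat=3)
    if a + b + c == 2 and a + b <= 1
]
print(solutions)                      # [(0, 1, 1), (1, 0, 1)]
print({c for _, _, c in solutions})   # {1}: c = 1 in every solution
```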
Bose–Mesner algebra
https://en.wikipedia.org/wiki/Bose%E2%80%93Mesner_algebra
In mathematics, a Bose–Mesner algebra is a special set of matrices which arise from a combinatorial structure known as an association scheme, together with the usual set of rules for combining (forming the products of) those matrices, such that they form an associative algebra, or, more precisely, a unitary commutative algebra. Among these rules are: the result of a product is also within the set of matrices, there is an identity matrix in the set, and taking products is commutative. Bose–Mesner algebras have applications in physics to spin models, and in statistics to the design of experiments. They are named for R. C. Bose and Dale Marsh Mesner. Definition Let X be a set of v elements. Consider a partition of the 2-element subsets of X into n non-empty subsets, R1, ..., Rn, such that: given an $x \in X$, the number of $y \in X$ such that $\{x,y\} \in R_i$ depends only on i (and not on x). This number will be denoted by $v_i$; and given $x, y \in X$ with $\{x,y\} \in R_k$, the number of $z \in X$ such that $\{x,z\} \in R_i$ and $\{z,y\} \in R_j$ depends only on i, j and k (and not on x and y). This number will be denoted by $p_{ij}^k$. This structure is enhanced by adding all pairs of repeated elements of X and collecting them in a subset R0. This enhancement permits th
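These intersection numbers are easy to observe numerically on a small example. The Python sketch below is our illustration, not from the article: it builds the class matrices of the association scheme of circular distances on a 7-cycle and reads off commutativity and the numbers $p_{12}^k$ from a product of class matrices:

```python
import numpy as np

# Association scheme of circular distances on Z_7: R_i pairs points at
# distance i (i = 1..3), and A_0 is the identity (the repeated-pairs class).

n, classes = 7, 3

def dist(x: int, y: int) -> int:
    d = abs(x - y) % n
    return min(d, n - d)

A = [np.array([[1 if dist(x, y) == i else 0 for y in range(n)]
               for x in range(n)]) for i in range(classes + 1)]

print(np.array_equal(A[1] @ A[2], A[2] @ A[1]))   # True: products commute
prod = A[1] @ A[2]
# entry (0, y) depends only on dist(0, y), so it reads off p^k_{12}
print([prod[0, y] for y in (0, 1, 2, 3)])         # [0, 1, 0, 1]: A1 A2 = A1 + A3
```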
Canonical form
https://en.wikipedia.org/wiki/Canonical_form
In mathematics and computer science, a canonical, normal, or standard form of a mathematical object is a standard way of presenting that object as a mathematical expression. Often, it is one which provides the simplest representation of an object and allows it to be identified in a unique way. The distinction between "canonical" and "normal" forms varies from subfield to subfield. In most fields, a canonical form specifies a unique representation for every object, while a normal form simply specifies its form, without the requirement of uniqueness. The canonical form of a positive integer in decimal representation is a finite sequence of digits that does not begin with zero. More generally, for a class of objects on which an equivalence relation is defined, a canonical form consists in the choice of a specific object in each class. For example: Jordan normal form is a canonical form for matrix similarity. The row echelon form is a canonical form, when one considers as equivalent a matrix and its left product by an invertible matrix. In computer science, and more specifically in computer algebra, when representing mathematical objects in a computer, there are usually many different ways to represent the same object. In this context, a canonical form is a representation such that every object has a unique representation (with canonicalization being the process through which a representation is put into its canonical form). Thus, the equality of two objects can easily be tested by testing the equality of their canonical forms. Despite this advantage, canonical forms frequently depend on arbitrary choices (like ordering the variables), which introduces difficulties for testing the equality of two objects resulting from independent computations. Therefore, in computer algebra, normal form is a weaker notion: a normal form is a representation such that zero is uniquely represented. This allows testing for equality by putting the difference of two objects in normal form. Canonical form can also mean a differential form that is defined in a natural (canonical) way. Definition Given a set S of objects with an equivalence relation R on S, a canonical form is given by designating some objects of S to be "in canonical form", such that every object under consideration is equivalent to exactly one object in canonical form. In other words, the canonical forms in S represent the equivalence classes, once and only once. To test whether two objects are equivalent, it then su
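A classic concrete case is the rationals represented as integer pairs: (a, b) ~ (c, d) exactly when ad = bc, and the canonical form is the reduced fraction with positive denominator, so equality of objects reduces to equality of canonical forms. A small Python sketch of this canonicalization (ours, for illustration):

```python
from math import gcd

# Canonicalize (a, b) representing a/b: reduce to lowest terms and
# normalize the sign so the denominator is positive.

def canonical(a: int, b: int) -> tuple[int, int]:
    if b < 0:
        a, b = -a, -b
    g = gcd(a, b)
    return (a // g, b // g)

# Equivalence test via canonical forms: 2/4 == (-3)/(-6)
print(canonical(2, 4) == canonical(-3, -6))   # True: both reduce to (1, 2)
```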
Casus irreducibilis
https://en.wikipedia.org/wiki/Casus_irreducibilis
Casus irreducibilis (from Latin 'the irreducible case') is the name given by mathematicians of the 16th century to cubic equations that cannot be solved in terms of real radicals, that is, to those equations such that the computation of the solutions cannot be reduced to the computation of square and cube roots. Cardano's formula for solution in radicals of a cubic equation was discovered at this time. It applies in the casus irreducibilis, but, in this case, requires the computation of the square root of a negative number, which involves knowledge of complex numbers, unknown at the time. The casus irreducibilis occurs when the three solutions are real and distinct, or, equivalently, when the discriminant is positive. It is only in 1843 that Pierre Wantzel proved that there cannot exist any solution in real radicals in the casus irreducibilis. The three cases of the discriminant Let $ax^3 + bx^2 + cx + d = 0$ be a cubic equation with $a \neq 0$. Then the discriminant is given by $D := a^4 ((x_1 - x_2)(x_1 - x_3)(x_2 - x_3))^2 = 18abcd - 4ac^3 - 27a^2d^2 + b^2c^2 - 4b^3d$
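Since the sign of D classifies the roots, a candidate cubic is easy to test numerically. A short Python sketch (ours; the example polynomial x³ − 3x + 1 is a standard instance of the casus irreducibilis, having three real roots but no rational ones):

```python
# Discriminant of a*x^3 + b*x^2 + c*x + d; D > 0 means three distinct
# real roots, the casus irreducibilis for an irreducible cubic.

def cubic_discriminant(a: float, b: float, c: float, d: float) -> float:
    return (18*a*b*c*d - 4*b**3*d + b**2*c**2
            - 4*a*c**3 - 27*a**2*d**2)

D = cubic_discriminant(1, 0, -3, 1)   # x^3 - 3x + 1
print(D, "-> casus irreducibilis" if D > 0 else "")   # 81 -> casus irreducibilis
```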
Closed-form expression
https://en.wikipedia.org/wiki/Closed-form_expression
In mathematics, an expression or equation is in closed form if it is formed with constants, variables, and a set of functions considered as basic and connected by arithmetic operations (+, −, ×, /, and integer powers) and function composition. Commonly, the basic functions that are allowed in closed forms are the nth root, exponential function, logarithm, and trigonometric functions. However, the set of basic functions depends on the context. For example, if one adds polynomial roots to the basic functions, the functions that have a closed form are called elementary functions. The closed-form problem arises when new ways are introduced for specifying mathematical objects, such as limits, series, and integrals: given an object specified with such tools, a natural problem is to find, if possible, a closed-form expression of this object; that is, an expression of this object in terms of previous ways of specifying it. Example: roots of polynomials The quadratic formula $x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$ is a closed form of the solutions to the general quadratic equation $ax^2 + bx + c = 0$. More generally, in the context of polynomial equations, a closed form of a solution is a solution in radicals; that is, a closed-form expression for which the allowed functions are only nth-roots and field operations (+, −, ×, /). In fact, field theory allows showing that if a solution of a polynomial equation has a closed form involving exponentials, logarithms or trigonometric functions, then it also has a closed form that does not involve these functions. There are expressions in radicals for all solu
Coefficient
https://en.wikipedia.org/wiki/Coefficient
In mathematics, a coefficient is a multiplicative factor involved in some term of a polynomial, a series, or any other type of expression. It may be a number without units, in which case it is known as a numerical factor. It may also be a constant with units of measurement, in which case it is known as a constant multiplier. In general, coefficients may be any expression (including variables such as a, b and c). When the combination of variables and constants is not necessarily involved in a product, it may be called a parameter. For example, the polynomial $2x^2 - x + 3$ has coefficients 2, −1, and 3, and the powers of the variable $x$ in the polynomial $ax^2 + bx + c$ have coefficient parameters $a$, $b$, and $c$. A constant coefficient, also known as constant term or simply constant, is a quantity either implicitly attached to the zeroth power of a variable or not attached to other variables in an expression; for example, the constant coefficients of the expressions above are the number 3 and the parameter c, each being the coefficient of x⁰. The coefficient attached to the highest degree of the variable in a polynomial of one variable is referred to as the leading coefficient; for example, in the example expressions above, the leading coefficients are 2 and a, respectively. In the context of differential equations, these equations can often be written in terms of polynomials in one or more unknown functions and their derivatives. In such cases, the coefficients of the differential equation are the coefficients of this polynomial, and these may be non-constant functions. A coefficient is a constant coefficient when it is a constant function. For avoiding confusion, in this context a coefficient that is not attached to unknown functions or their derivatives is generally called a constant term rather than a constant coefficient. In particular, in a linear differential equation with constant coefficient
Cole Prize
https://en.wikipedia.org/wiki/Cole_Prize
The Frank Nelson Cole Prize, or Cole Prize for short, refers to two of the twenty-two prizes awarded to mathematicians by the American Mathematical Society: one for an outstanding contribution to algebra, and the other for an outstanding contribution to number theory. The prize is named after Frank Nelson Cole, who served the Society for 25 years. The Cole Prize in algebra was funded by Cole himself, from funds given to him as a retirement gift; the prize fund was later augmented by his son, leading to the double award. The prizes recognize a notable research work in algebra (given every three years) or number theory (given every three years) that has appeared in the last six years. The work must be published in a recognized, peer-reviewed venue. The first award for algebra was made in 1928 to L. E. Dickson, while the first award for number theory was made in 1931 to H. S. Vandiver. Frank Nelson Cole Prize in Algebra Frank Nelson Cole Prize in Number Theory For full citations, see external links. See also List of mathematics awards References External links Frank Nelson Cole Prize in Algebra Frank Nelson Cole Prize in Number Theory
Conservation form
https://en.wikipedia.org/wiki/Conservation_form
Conservation form or Eulerian form refers to an arrangement of an equation or system of equations, usually representing a hyperbolic system, that emphasizes that a property represented is conserved, i.e. a type of continuity equation. The term is usually used in the context of continuum mechanics. General form Equations in conservation form take the form $\frac{\partial \xi}{\partial t} + \nabla \cdot \mathbf{f}(\xi) = 0$ for any conserved quantity $\xi$, with a suitable function $\mathbf{f}$. An equation of this form can be transformed into an integral equation $\frac{d}{dt} \int_V \xi \, dV = -\oint_{\partial V} \mathbf{f}(\xi) \cdot \boldsymbol{\nu} \, dS$ using the divergence theorem. The integral equation states that the rate of change of the integral of the quantity $\xi$ over an arbitrary control volume $V$ is given by the flux $\mathbf{f}(\xi)$ through the boundary of the control volume, with $\boldsymbol{\nu}$ being the outer surface normal through the boundary. $\xi$ is neither produced nor consumed inside of $V$ and is hence conserved. A
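Numerical schemes built on this form inherit exact conservation: updating each cell by interface fluxes makes the fluxes telescope, so the total amount of ξ over a periodic domain cannot change. A minimal Python sketch of this (our illustration: 1D linear advection with f(ξ) = cξ and first-order upwind fluxes):

```python
import numpy as np

# Finite-volume update for du/dt + d(c*u)/dx = 0 on a periodic grid.
# Each cell loses flux through its right face and gains it through its
# left face, so the grid total is conserved exactly (up to roundoff).

nx, dx, dt, c = 100, 1.0 / 100, 0.004, 1.0     # CFL = c*dt/dx = 0.4
u = np.exp(-((np.arange(nx) * dx - 0.5) ** 2) / 0.01)   # initial bump

total0 = u.sum() * dx
for _ in range(200):
    flux = c * u                                # upwind flux (c > 0)
    u = u - dt / dx * (flux - np.roll(flux, 1))

print(np.isclose(u.sum() * dx, total0))         # True: scheme is conservative
```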
Consistent and inconsistent equations
https://en.wikipedia.org/wiki/Consistent_and_inconsistent_equations
In mathematics, and particularly in algebra, a system of equations (either linear or nonlinear) is called consistent if there is at least one set of values for the unknowns that satisfies each equation in the system; that is, when substituted into each of the equations, they make each equation hold true as an identity. In contrast, a linear or nonlinear equation system is called inconsistent if there is no set of values for the unknowns that satisfies all of the equations. If a system of equations is inconsistent, then the equations cannot be true together, leading to contradictory information, such as the false statements 2 = 1, or $x^3 + y^3 = 5$ and $x^3 + y^3 = 6$ (which together imply 5 = 6). Both types of equation system, inconsistent and consistent, can be any of overdetermined (having more equations than unknowns), underdetermined (having fewer equations than unknowns), or exactly determined. Simple examples Underdetermined and consistent The system $x + y + z = 3$, $x + y + 2z = 4$ has an infinite number of solutions, all of them having z = 1 (as can be seen by subtracting the first equation from the second), and all of them therefore having x + y = 2; a numerical consistency check for this system appears below. The nonlinear system $x^2 + y$
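For linear systems, consistency can be checked mechanically via the Rouché–Capelli criterion: Ax = b is consistent if and only if rank(A) equals the rank of the augmented matrix [A | b]. A short sketch using the underdetermined example above (our code, using NumPy):

```python
import numpy as np

# Consistency check for the underdetermined system
# x + y + z = 3,  x + y + 2z = 4.

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
b = np.array([3.0, 4.0])

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
print(rank_A == rank_Ab)   # True: consistent (here, infinitely many solutions)
```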
Constant (mathematics)
https://en.wikipedia.org/wiki/Constant_(mathematics)
In mathematics, the word constant conveys multiple meanings. As an adjective, it refers to non-variance (i.e. unchanging with respect to some other value); as a noun, it has two different meanings: A fixed and well-defined number or other non-changing mathematical object, or the symbol denoting it. The terms mathematical constant or physical constant are sometimes used to distinguish this meaning. A function whose value remains unchanged (i.e., a constant function). Such a constant is commonly represented by a variable which does not depend on the main variable(s) in question. For example, a general quadratic function is commonly written as $ax^2 + bx + c$, where a, b and c are constants (coefficients or parameters), and x a variable, a placeholder for the argument of the function being studied. A more explicit way to denote this function is $x \mapsto ax^2 + bx + c$, which makes the function-argument status of x (and by extension the constancy of a, b and c) clear. In this example a, b and c are coefficients of the polynomial. Since c occurs in a term that does not involve x, it is called the constant term of the polynomial and can be thought of as the coefficient of x⁰. More generally, any polynomial term or expression of degree zero (no variable) is a constant. Constant function A constant may be used to define a constant function that ignores its arguments and always gives the same value. A constant function of a single variable, such as $f(x) = 5$, has a graph of a horizontal line parallel to the x-axis. Such a function always takes the same value (in this case 5), because the variable does not appear in the expression defining the function. Context-dependence The context-dependent nature of the concept of "constant" can be seen in this example from elementary calculus:
Cyclotomic polynomial
https://en.wikipedia.org/wiki/Cyclotomic_polynomial
In mathematics, the nth cyclotomic polynomial, for any positive integer n, is the unique irreducible polynomial with integer coefficients that is a divisor of $x^n - 1$ and is not a divisor of $x^k - 1$ for any k < n. Its roots are all nth primitive roots of unity $e^{2i\pi k/n}$, where k runs over the positive integers less than n and coprime to n (and i is the imaginary unit). In other words, the nth cyclotomic polynomial is equal to $\Phi_n(x) = \prod_{\substack{1 \le k \le n \\ \gcd(k,n)=1}} \left( x - e^{2i\pi k/n} \right)$. It may also be defined as the monic polynomial with integer coefficients that is the minimal polynomial over the field of the rational numbers of any primitive nth root of unity ($e^{2i\pi/n}$ is an example of such a root). An important relation linking cyclotomic polynomials and primitive roots o
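The identity $x^n - 1 = \prod_{d \mid n} \Phi_d(x)$ gives a direct way to compute $\Phi_n$ with exact integer arithmetic: divide $x^n - 1$ by the cyclotomic polynomials of all proper divisors of n. A short Python sketch (ours; the polynomial representation and helper names are illustrative):

```python
from functools import lru_cache

# Compute Phi_n from x^n - 1 = prod_{d | n} Phi_d(x) by exact integer
# polynomial division. Coefficient lists, constant term first.

def poly_div(num, den):
    """Exact division num / den for integer polynomials (den monic here)."""
    num = list(num)
    q = [0] * (len(num) - len(den) + 1)
    for i in range(len(q) - 1, -1, -1):
        q[i] = num[i + len(den) - 1] // den[-1]
        for j, d in enumerate(den):
            num[i + j] -= q[i] * d
    return q

@lru_cache(maxsize=None)
def cyclotomic(n: int) -> tuple:
    poly = [-1] + [0] * (n - 1) + [1]           # x^n - 1
    for d in range(1, n):
        if n % d == 0:
            poly = poly_div(poly, cyclotomic(d))
    return tuple(poly)

print(cyclotomic(1))    # (-1, 1)          -> x - 1
print(cyclotomic(6))    # (1, -1, 1)       -> x^2 - x + 1
print(cyclotomic(12))   # (1, 0, -1, 0, 1) -> x^4 - x^2 + 1
```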
Digital root
https://en.wikipedia.org/wiki/Digital_root
The digital root (also repeated digital sum) of a natural number in a given radix is the (single digit) value obtained by an iterative process of summing digits, on each iteration using the result from the previous iteration to compute a digit sum. The process continues until a single-digit number is reached. For example, in base 10, the digital root of the number 12345 is 6, because the sum of the digits in the number is 1 + 2 + 3 + 4 + 5 = 15; the addition process is then repeated for the resulting number 15, so that the sum 1 + 5 equals 6, which is the digital root of that number. In base 10, this is equivalent to taking the remainder upon division by 9 (except when the digital root is 9, where the remainder upon division by 9 will be 0), which allows it to be used as a divisibility rule. Formal definition Let $n$ be a natural number. For base $b > 1$, we define the digit sum $F_b : \mathbb{N} \rightarrow \mathbb{N}$ to be the following: $F_b(n) = \sum_{i=0}^{k-1} d_i$, where $k = \lfloor \log_b n \rfloor + 1$ is the number of digits in the number in base $b$, and $d_i = \frac{n \bmod b^{i+1} - n \bmod b^i}{b^i}$
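A minimal Python sketch of both routes to the digital root: repeated digit summing per the definition, and the base-10 congruence shortcut noted above (1 + (n − 1) mod 9 for positive n):

```python
# Digital root by repeated digit sums in base b, plus the mod-9 shortcut.

def digit_sum(n: int, b: int = 10) -> int:
    s = 0
    while n:
        s += n % b      # lowest digit
        n //= b
    return s

def digital_root(n: int, b: int = 10) -> int:
    while n >= b:       # repeat until a single digit remains
        n = digit_sum(n, b)
    return n

print(digital_root(12345))     # 6, as in the example above
print(1 + (12345 - 1) % 9)     # 6, same result via the congruence shortcut
```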
Elementary algebra
https://en.wikipedia.org/wiki/Elementary_algebra
Elementary algebra, also known as high school algebra or college algebra, encompasses the basic concepts of algebra. It is often contrasted with arithmetic: arithmetic deals with specified numbers, whilst algebra introduces variables (quantities without fixed values). This use of variables entails use of algebraic notation and an understanding of the general rules of the operations introduced in arithmetic: addition, subtraction, multiplication, division, etc. Unlike abstract algebra, elementary algebra is not concerned with algebraic structures outside the realm of real and complex numbers. It is typically taught to secondary school students and at introductory college level in the United States, and builds on their understanding of arithmetic. The use of variables to denote quantities allows general relationships between quantities to be formally and concisely expressed, and thus enables solving a broader scope of problems. Many quantitative relationships in science and mathematics are expressed as algebraic equations. Algebraic operations Algebraic notation Algebraic notation describes the rules and conventions for writing mathematical expressions, as well as the terminology used for talking about parts of expressions. For example, the expression $3x^2 - 2xy + c$ has the following components: A coefficient is a numerical value, or letter representing a numerical constant, that multiplies a variable (the operator is omitted). A term is an addend or a summand, a group of coefficients, variables, constants and exponents that may be separated from the other terms by the plus and minus operators. Letters represent variables and constants. By convention, letters at the beginning of the alphabet (e.g. $a, b, c$) are typically used to represent constants, and those toward the end of the alphabet (e.g. $x, y$ and $z$) are used to represent variables. They are usually printed in italics. Algebraic operations work in the same way as arithmetic operations, such as addition, subtraction, multiplication, division and exponentiation, and are applied to algebraic variables and terms. Multiplication symbols are usu
Equivalence class
https://en.wikipedia.org/wiki/Equivalence_class
In mathematics, when the elements of some set $S$ have a notion of equivalence (formalized as an equivalence relation), then one may naturally split the set $S$ into equivalence classes. These equivalence classes are constructed so that elements $a$ and $b$ belong to the same equivalence class if, and only if, they are equivalent. Formally, given a set $S$ and an equivalence relation $\sim$ on $S$, the equivalence class of an element $a$ in $S$ is denoted $[a]$ or, equivalently, $[a]_\sim$ to emphasize its equivalence relation $\sim$, and is defined as the set of all elements in $S$ with which $a$ is $\sim$-related. The definition of equivalence relations implies that the equivalence classes form a partition of $S$, meaning that every element of the set belongs to exactly one equivalence class. The set of the equivalence classes is sometimes called the quotient set or the quotient space of $S$ by $\sim$, and is denoted by $S/\sim$. When the set $S$ has some structure (such as a group operation or a topology) and the equivalence relation $\sim$,
Euler's totient function
https://en.wikipedia.org/wiki/Euler%27s_totient_function
In number theory, Euler's totient function counts the positive integers up to a given integer n that are relatively prime to n. It is written using the Greek letter phi as $\varphi(n)$ or $\phi(n)$, and may also be called Euler's phi function. In other words, it is the number of integers k in the range 1 ≤ k ≤ n for which the greatest common divisor gcd(n, k) is equal to 1. The integers k of this form are sometimes referred to as totatives of n. For example, the totatives of n = 9 are the six numbers 1, 2, 4, 5, 7 and 8. They are all relatively prime to 9, but the other three numbers in this range, 3, 6, and 9, are not, since gcd(9, 3) = gcd(9, 6) = 3 and gcd(9, 9) = 9. Therefore, φ(9) = 6. As another example, φ(1) = 1, since for n = 1 the only integer in the range from 1 to n is 1 itself, and gcd(1, 1) = 1. Euler's totient function is a multiplicative function, meaning that if two numbers m and n are relatively prime, then φ(mn) = φ(m)φ(n). This function gives the order of the multiplicative group of integers modulo n (the group of units of the ring $\mathbb{Z}/n\mathbb{Z}$). It is also used for defining the RSA encryption system. History, terminology, and notation Leonhard Euler introduced the function in 1763. However, he did not at that time choose any specific symbol to denote it. In a 1784 publication, Euler studied the function further, choosing the Greek letter π to denote it: he wrote πD for "the multitude of numbers less than D, and which have no common divisor with it". This definition varies from the current definition for the totient function at D = 1 but is otherwise the same. The now-standard notation φ(A) comes from Gauss's 1801 treatise Disquisitiones Arithmeticae, although Gauss did not use parentheses around the argument and wrote φA. Thus, it is often called Euler's phi function or simply the phi function. In 1879, J. J. Sylvester coined the term totient for this function, so it is also referred to as Euler's totient function, the Euler totient, or Euler's totient. Jordan's totient is a generalization of Euler's. The cototient of n is defined as n − φ(n). It counts the number of positive integers less than o
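Multiplicativity, together with the Euler product $\varphi(n) = n \prod_{p \mid n} (1 - 1/p)$, yields a simple trial-division implementation. A short Python sketch (ours, unoptimized):

```python
# Euler's totient via trial-division factorization: for each prime p
# dividing n, multiply the running result by (1 - 1/p), in integers.

def totient(n: int) -> int:
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p      # result *= (1 - 1/p)
        p += 1
    if m > 1:                          # one prime factor left over
        result -= result // m
    return result

print(totient(9))    # 6: totatives 1, 2, 4, 5, 7, 8
print(totient(1))    # 1
print(totient(12) == totient(3) * totient(4))   # True: multiplicativity
```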
Factorization of polynomials over finite fields
https://en.wikipedia.org/wiki/Factorization_of_polynomials_over_finite_fields
In mathematics and computer algebra the factorization of a polynomial consists of decomposing it into a product of irreducible factors. This decomposition is theoretically possible and is unique for polynomials with coefficients in any field, but rather strong restrictions on the field of the coefficients are needed to allow the computation of the factorization by means of an algorithm. In practice, algorithms have been designed only for polynomials with coefficients in a finite field, in the field of rationals or in a finitely generated field extension of one of them. All factorization algorithms, including the case of multivariate polynomials over the rational numbers, reduce the problem to this case; see polynomial factorization. It is also used for various applications of finite fields, such as coding theory (cyclic redundancy codes and BCH codes), cryptography (public key cryptography by the means of elliptic curves), and computational number theory. As the reduction of the factorization of multivariate polynomials to that of univariate polynomials does not have any specificity in the case of coefficients in a finite field, only polynomials with one variable are considered in this article. Background Finite field The theory of finite fields, whose origins can be traced back to the works of Gauss and Galois, has played a part in various branches of mathematics. Due to the applicability of the concept in other topics of mathematics and sciences like computer science there has been a resurgence of interest in finite fields, and this is partly due to important applications in coding theory and cryptography. Applications of finite fields introduce some of these developments in cryptography, computer algebra and coding theory. A finite field or Galois field is a field with a finite order (number of elements). The order of a finite field is always a prime or a power of a prime. For each prime power q = p^r, there exists exactly one finite field with q elements, up to isomorphism. This field is denoted GF(q) or Fq. If p is prime, GF(p) is the prime field of order p; it is the field of residue classes modulo p, and its p elements are denoted 0, 1, ..., p − 1. Thus a = b in GF(p) means the same as a ≡ b (mod p). Irreducible polynomials Let F be a finite field. As for general fields, a non-constant polynomial f in F[x] is said to be irreducible over F if it is not the product of two polynomials of positive degree. A polynomial of positive degree that is not irreduci
Generalized arithmetic progression
https://en.wikipedia.org/wiki/Generalized_arithmetic_progression
In mathematics, a generalized arithmetic progression (or multiple arithmetic progression) is a generalization of an arithmetic progression equipped with multiple common differences: whereas an arithmetic progression is generated by a single common difference, a generalized arithmetic progression can be generated by multiple common differences. For example, the sequence 17, 20, 22, 23, 25, 26, 27, 28, 29, ... is not an arithmetic progression, but is instead generated by starting with 17 and adding either 3 or 5, thus allowing multiple common differences to generate it. A semilinear set generalizes this idea to multiple dimensions: it is a set of vectors of integers, rather than a set of integers. Finite generalized arithmetic progression A finite generalized arithmetic progression, or sometimes just generalized arithmetic progression (GAP), of dimension d is defined to be a set of the form $\{x_0 + \ell_1 x_1 + \cdots + \ell_d x_d : 0 \le \ell_1 < L_1, \ldots, 0 \le \ell_d < L_d\}$, where $x_0, x_1, \ldots, x_d, L_1, \ldots, L_d$
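Enumerating a finite GAP is a short loop over the grid of coefficient tuples. A small Python sketch (ours; the bounds L are illustrative) using the starting value 17 and differences 3 and 5 from the example above:

```python
from itertools import product

# Enumerate the finite GAP { x0 + l1*x1 + ... + ld*xd : 0 <= l_i < L_i }.

def gap(x0: int, diffs: tuple, bounds: tuple) -> list:
    vals = {
        x0 + sum(l * x for l, x in zip(ls, diffs))
        for ls in product(*(range(L) for L in bounds))
    }
    return sorted(vals)

# dimension-2 GAP: start 17, differences 3 and 5, bounds L1 = L2 = 3
print(gap(17, (3, 5), (3, 3)))
# [17, 20, 22, 23, 25, 27, 28, 30, 33]
```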
Graph algebra (social sciences)
https://en.wikipedia.org/wiki/Graph_algebra_(social_sciences)
Graph algebra is a systems-centric modeling tool for the social sciences. It was first developed by Sprague, Przeworski, and Cortes as a hybridized version of engineering plots to describe social phenomena.
Hundred Fowls Problem
https://en.wikipedia.org/wiki/Hundred_Fowls_Problem
The Hundred Fowls Problem is a problem first discussed in the fifth century CE Chinese mathematics text Zhang Qiujian suanjing (The Mathematical Classic of Zhang Qiujian), a book of mathematical problems written by Zhang Qiujian. It is one of the best known examples of indeterminate problems in the early history of mathematics. The problem appears as the final problem in Zhang Qiujian suanjing (Problem 38 in Chapter 3). However, the problem and its variants have appeared in the medieval mathematical literature of India, Europe and the Arab world. The name "Hundred Fowls Problem" is due to the Belgian historian Louis van Hee. Problem statement The Hundred Fowls Problem as presented in Zhang Qiujian suanjing can be translated as follows: "Now one cock is worth 5 qian, one hen 3 qian and 3 chicks 1 qian. It is required to buy 100 fowls with 100 qian. In each case, find the number of cocks, hens and chicks bought." Mathematical formulation Let x be the number of cocks, y the number of hens, and z the number of chicks; then the problem is to find x, y and z satisfying the following equations: x + y + z = 100, 5x + 3y + z/3 = 100. Obviously, only non-negative integer values are acceptable. Expressing y and z in terms of x we get y = 25 − (7/4)x and z = 75 + (3/4)x. Since x, y and z all must be integers, the expression for y suggests that x must be a multiple of 4. Hence the general solution of the system of equations can be expressed using an integer parameter t as follows: x = 4t, y = 25 − 7t, z = 75 + 3t. Since y should be a non-negative integer, the only possible values of t are 0, 1, 2 and 3. So the complete set of solutions is given by (x, y, z) = (0, 25, 75), (4, 18, 78), (8, 11, 81), (12, 4, 84), of which the last three have been given in Zhang Qiujian suanjing. However, no general method for solving such problems has been indicated, leading to a suspicion of whether the solutions were obtained by trial and error. The Hundred Fowls Problem found in Zhang Qiujian suanjing is a special case of the general problem of finding integer solutions of the following system of equations: x + y + z = d, ax + by + cz = d. Any problem of this type is sometimes referred to as a "Hundred Fowls problem". Variations Some variants of the Hundred Fowls Problem have appeared in the mathematical literature of several cultures. In the following we present a few sample problems discussed in these cultures. Indian mathematics Mahavira's Ganita-sara-sangraha contains the following
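The complete solution set is also easy to confirm by exhaustive search. A minimal Python sketch (ours) checks every feasible cock/hen count directly:

```python
# Brute-force the Hundred Fowls Problem: 100 fowls for 100 qian, with
# cocks at 5 qian, hens at 3 qian, and chicks at 3 for 1 qian.

solutions = [
    (x, y, z)
    for x in range(21)                 # at most 20 cocks (5 * 21 > 100)
    for y in range(34)                 # at most 33 hens  (3 * 34 > 100)
    for z in [100 - x - y]             # fowl count fixes z
    if z >= 0 and z % 3 == 0 and 5 * x + 3 * y + z // 3 == 100
]
print(solutions)   # [(0, 25, 75), (4, 18, 78), (8, 11, 81), (12, 4, 84)]
```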
Inverse element
https://en.wikipedia.org/wiki/Inverse_element
In mathematics, the concept of an inverse element generalises the concepts of opposite (-x) and reciprocal (1/x) of numbers. Given an operation denoted here *, and an identity element denoted e, if x * y = e, one says that x is a left inverse of y, and that y is a right inverse of x. (An identity element is an element such that x * e = x and e * y = y for all x and y for which the left-hand sides are defined.) When the operation is associative, if an element x has both a left inverse and a right inverse, then these two inverses are equal and unique; they are called the inverse element or simply the inverse. Often an adjective is added to specify the operation, as in additive inverse, multiplicative inverse, and functional inverse. In this case (associative operation), an invertible element is an element that has an inverse. In a ring, an invertible element, also called a unit, is an element that is invertible under multiplication (this is not ambiguous, as every element is invertible under addition). Inverses are commonly used in groups, where every element is invertible, and rings, where invertible elements are also called units. They are also commonly used for operations that are not defined for all possible operands, such as inverse matrices and inverse functions. This has been generalized to category theory, where, by definition, an isomorphism is an invertible morphism. The word 'inverse' is derived from Latin inversus, meaning 'turned upside down' or 'overturned'. This may take its origin from the case of fractions, where the (multiplicative) inverse is obtained by exchanging the numerator and the denominator (the inverse of x/y is y/x). Definitions and basic properties The concepts of inverse element and invertible element are commonly defined for binary operations that are everywhere defined (that is, the operation is defined for any two elements of its domain). However, these concepts are also commonly used with partial operations, that is, operations that are not defined everywhere. Common examples are matrix multiplication, function composition and composition of morphisms in a category. It follows that the common definitions of associativity and identity element must be extended to partial operations; this is the object of the first subsections. In this section, X is a set (p
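As a concrete illustration (our own, not from the article): multiplication modulo n is an associative operation with identity 1, and the invertible elements, the units, can be found by direct search. A minimal Python sketch:

```python
# Inverses under multiplication modulo n: an element x is invertible
# (a "unit") exactly when some y satisfies x * y == 1 (mod n).
# Since the operation is associative, each unit's inverse is unique.
def units_with_inverses(n):
    return {x: y for x in range(1, n) for y in range(1, n)
            if (x * y) % n == 1}

print(units_with_inverses(10))  # {1: 1, 3: 7, 7: 3, 9: 9}
```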
Irreducible polynomial
https://en.wikipedia.org/wiki/Irreducible_polynomial
In mathematics, an irreducible polynomial is, roughly speaking, a polynomial that cannot be factored into the product of two non-constant polynomials. The property of irreducibility depends on the nature of the coefficients that are accepted for the possible factors, that is, the ring to which the coefficients of the polynomial and its possible factors are supposed to belong. For example, the polynomial x^2 - 2 is a polynomial with integer coefficients, but, as every integer is also a real number, it is also a polynomial with real coefficients. It is irreducible if it is considered as a polynomial with integer coefficients, but it factors as (x - √2)(x + √2) if it is considered as a polynomial with real coefficients. One says that the polynomial x^2 - 2 is irreducible over the integers but not over the reals. Polynomial irreducibility can be considered for polynomials with coefficients in an integral domain, and there are two common definitions. Most often, a polynomial over an integral domain R is said to be irreducible if it is not the product of two polynomials that have their coefficients in R and that are not units in R. Equivalently, for this definition, an irreducible polynomial is an irreducible element in a ring of polynomials over R. If R is a field, the two definitions of irreducibility are equivalent. For the second definition, a polynomial is irreducible if it cannot be factored into polynomials with coefficients in the same domain that both have a positive degree. Equivalently, a polynomial is irreducible if it is irreducible over the field of fractions of the integral domain. For example, the polynomial 2(x^2 - 2) ∈ Z[x] is irreducible for the second definition, and not for the first one. On the other hand, x^2
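The x^2 - 2 example can be reproduced with a computer algebra system. A short sketch using sympy (our illustration; it relies on factor's extension keyword, which current sympy releases provide):

```python
from sympy import symbols, factor, sqrt

x = symbols('x')
p = x**2 - 2

print(factor(p))                     # x**2 - 2: irreducible over the rationals
print(factor(p, extension=sqrt(2)))  # (x - sqrt(2))*(x + sqrt(2)): factors once sqrt(2) is adjoined
```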
Kernel (algebra)
https://en.wikipedia.org/wiki/Kernel_(algebra)
In algebra, the kernel of a homomorphism is the relation describing how elements in the domain of the homomorphism become related in the image. A homomorphism is a function that preserves the underlying algebraic structure in the domain to its image. When the algebraic structures involved have an underlying group structure, the kernel is taken to be the preimage of the group's identity element in the image; that is, it consists of the elements of the domain mapping to the image's identity. For example, the map that sends every integer to its parity (that is, 0 if the number is even, 1 if the number is odd) is a homomorphism to the integers modulo 2, and its kernel is the even integers, all of which have parity 0. The kernel of a homomorphism of group-like structures contains only the identity if and only if the homomorphism is injective, that is, if the inverse image of every element consists of a single element. This means that the kernel can be viewed as a measure of the degree to which the homomorphism fails to be injective. For some types of structure, such as abelian groups and vector spaces, the possible kernels are exactly the substructures of the same type. This is not always the case, and some kernels have received a special name, such as normal subgroups for groups and two-sided ideals for rings. The concept of a kernel has been extended to structures for which the inverse image of a single element is not sufficient for deciding whether a homomorphism is injective. In these cases, the kernel is a congruence relation. Kernels allow defining quotient objects (also called quotient algebras in universal algebra). For many types of algebraic structure, the fundamental theorem on homomorphisms (or first isomorphism theorem) states that the image of a homomorphism is isomorphic to the quotient by the kernel. Definition Group homomorphisms Let G and H be groups and let f be a group homomorphism from G to H. If e_H is the identity element of H, then the kernel of f is the preimage of the singleton set {e_H}; that is, the subset of G consisting of all those elements of G that are mapped by f to the element e_H. The kernel is usually denoted ker f (or a variation). In symbols: ker f = {g ∈ G : f(g) = e_H}.
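The parity example can be made concrete with a few lines of Python (our own sketch, restricted to a finite sample since the kernel itself is infinite):

```python
# The parity map f(n) = n mod 2 is a homomorphism from the additive
# integers to the integers modulo 2; its kernel is the even integers.
def f(n):
    return n % 2

sample = range(-6, 7)
kernel = [n for n in sample if f(n) == 0]  # preimage of the identity 0
print(kernel)  # [-6, -4, -2, 0, 2, 4, 6]
```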
Laws of Form
https://en.wikipedia.org/wiki/Laws_of_Form
Laws of Form (hereinafter LoF) is a book by G. Spencer-Brown, published in 1969, that straddles the boundary between mathematics and philosophy. LoF describes three distinct logical systems: The primary arithmetic (described in Chapter 4 of LoF), whose models include Boolean arithmetic; The primary algebra (Chapter 6 of LoF), whose models include the two-element Boolean algebra (hereinafter abbreviated 2), Boolean logic, and the classical propositional calculus; Equations of the second degree (Chapter 11), whose interpretations include finite automata and Alonzo Church's Restricted Recursive Arithmetic (RRA). "Boundary algebra" is Meguire's (2011) term for the union of the primary algebra and the primary arithmetic. Laws of Form sometimes loosely refers to the "primary algebra" as well as to LoF. The book The preface states that the work was first explored in 1959, and Spencer Brown cites Bertrand Russell as being supportive of his endeavour. He also thanks J. C. P. Miller of University College London for helping with the proofreading and offering other guidance. In 1963 Spencer Brown was invited by Harry Frost, staff lecturer in the physical sciences at the Department of Extra-Mural Studies of the University of London, to deliver a course on the mathematics of logic. LoF emerged from work in electronic engineering that its author did around 1960. Key ideas of LoF were first outlined in his 1961 manuscript Design with the Nor, which remained unpublished until 2021, and were further refined during subsequent lectures on mathematical logic he gave under the auspices of the University of London's extension program. LoF has appeared in several editions. The second series of editions appeared in 1972 with the "Preface to the First American Edition", which emphasised the use of self-referential paradoxes; the most recent edition is a 1997 German translation. LoF has never gone out of print. LoF's mystical and declamatory prose and its love of paradox make it a challenging read for all. Spencer-Brown was influenced by Wittgenstein and R. D. Laing. LoF also echoes a number of themes from the writings of Charles Sanders Peirce, Bertrand Russell, and Alfred North Whitehead. The work has had curious effects on some classes of its readership; for example, on obscure grounds, it has been claimed that the entire book is written in an operational way, giving instructions to the reader instead of telling them what "is", and that in accordance with G. Spencer-Brown's interest
Like terms
https://en.wikipedia.org/wiki/Like_terms
In mathematics, like terms are summands in a sum that differ only by a numerical factor. Like terms can be regrouped by adding their coefficients. Typically, in a polynomial expression, like terms are those that contain the same variables raised to the same powers, possibly with different coefficients. More generally, when some variables are considered as parameters, like terms are defined similarly, but "numerical factors" must be replaced by "factors depending only on the parameters". For example, when considering a quadratic equation, one often considers the expression (x - r)(x - s), where r and s are the roots of the equation and may be considered as parameters. Then, expanding the above product and regrouping the like terms gives x^2 - (r + s)x + rs. Generalization In this discussion, a "term" will refer to a string of numbers being multiplied or divided together (division being simply multiplication by a reciprocal). Terms are within the same expression and are combined by either addition or subtraction. For example, take the expression: ax + bx. There are two terms in this expression. Notice that the two terms have a common factor: both terms contain x. This means that the common factor variable can be factored out, resulting in (a + b)x. If the expression in parentheses can be evaluated, that is, if a and b are known numbers, then it is simpler to compute a + b and juxtapose that new number with the remaining unknown factor. Terms combined in an expression with a common, unknown factor (or multiple unknown factors) are called like terms. Examples
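The quadratic identity above can be checked symbolically; here is a short sympy sketch (our own illustration, not from the article):

```python
from sympy import symbols, expand, collect

x, r, s = symbols('x r s')

# Expanding (x - r)(x - s) and regrouping the like terms in x
# reproduces the identity x**2 - (r + s)*x + r*s.
expanded = expand((x - r) * (x - s))   # r*s - r*x - s*x + x**2
print(collect(expanded, x))            # r*s + x**2 + x*(-r - s), i.e. x**2 - (r + s)*x + r*s
```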
Linearly disjoint
https://en.wikipedia.org/wiki/Linearly_disjoint
In mathematics, algebras A, B over a field k inside some field extension Ω of k are said to be linearly disjoint over k if the following equivalent conditions are met: (i) The map A ⊗_k B → AB induced by (x, y) ↦ xy is injective. (ii) Any k-basis of A remains linearly independent over B. (iii) If u_i, v_j are k-bases for A, B, then the products u_i v_j are linearly independent over k. Note that, since every subalgebra of Ω is a domain, (i) implies that A ⊗_k B is a domain (in particular reduced). Conversely, if A and B are fields, either A or B is an algebraic extension of k, and A ⊗_k B is a domain, then it is a field and A and B are linearly disjoint. However, there are examples where A ⊗_k B is a domain but A and B are not linearly disjoint: for example, A = B = k(t), the field of rational functions over k. One also has: A, B are linearly disjoint over k if and only if the subfields of Ω generated by A, B, respectively, are linearly disjoint over k. (cf. Tensor product of fields) Suppose A, B are linearly disjoint over k. If A
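As a concrete illustration (ours, not the article's): inside Ω = R, the fields A = Q(√2) and B = Q(√3) are linearly disjoint over Q, since the products of the bases {1, √2} and {1, √3} are the four numbers 1, √2, √3, √6, which are linearly independent over Q. One way to see this is that √2 + √3 has degree 4 over Q, so [Q(√2, √3) : Q] = 4; sympy can confirm the degree:

```python
from sympy import sqrt, minimal_polynomial, symbols

x = symbols('x')

# Degree 4 means [Q(sqrt2, sqrt3) : Q] = 4, so the basis products
# {1, sqrt2, sqrt3, sqrt6} are linearly independent over Q.
print(minimal_polynomial(sqrt(2) + sqrt(3), x))  # x**4 - 10*x**2 + 1
```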
Map algebra
https://en.wikipedia.org/wiki/Map_algebra
Map algebra is an algebra for manipulating geographic data, primarily fields. Developed by Dr. Dana Tomlin and others in the late 1970s, it is a set of primitive operations in a geographic information system (GIS) which allows one or more raster layers ("maps") of similar dimensions to produce a new raster layer (map) using mathematical or other operations such as addition, subtraction, etc. History Prior to the advent of GIS, the overlay principle had developed as a method of literally superimposing different thematic maps (typically an isarithmic map or a chorochromatic map) drawn on transparent film (e.g., cellulose acetate) to see the interactions and find locations with specific combinations of characteristics. The technique was largely developed by landscape architects and city planners, starting with Warren Manning and further refined and popularized by Jaqueline Tyrwhitt, Ian McHarg and others during the 1950s and 1960s. In the mid-1970s, landscape architecture student C. Dana Tomlin developed some of the first tools for overlay analysis in raster as part of the IMGRID project at the Harvard Laboratory for Computer Graphics and Spatial Analysis, which he eventually transformed into the Map Analysis Package (MAP), a popular raster GIS during the 1980s. While Tomlin was a graduate student at Yale University, he and Joseph K. Berry re-conceptualized these tools as a mathematical model, which by 1983 they were calling "map algebra." This effort was part of Tomlin's development of cartographic modeling, a technique for using these raster operations to implement the manual overlay procedures of McHarg. Although the basic operations were defined in his 1983 PhD dissertation, Tomlin had refined the principles of map algebra and cartographic modeling into their current form by 1990. Although the term cartographic modeling has not gained as wide an acceptance as synonyms such as suitability analysis, suitability modeling and multi-criteria decision making, "map algebra" became a core part of GIS. Because Tomlin released the source code to MAP, its algorithms were implemented (with varying degrees of modification) as the analysis toolkit of almost every raster GIS software package starting in the 1980s, including GRASS, IDRISI (now TerrSet), and the GRID module of ARC/INFO (later incorporated into the Spatial Analyst module of ArcGIS). This widespread implementation further led to the development of many extensions to map algebra, following efforts to extend
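The cell-by-cell ("local") style of operation described above is easy to mimic with numpy arrays. This is only a sketch of the idea, not Tomlin's MAP package or any particular GIS; the layer names and weights are hypothetical:

```python
import numpy as np

# Two same-sized raster layers ("maps") with hypothetical cell values.
elevation = np.array([[10, 20], [30, 40]], dtype=float)
rainfall  = np.array([[ 1,  2], [ 3,  4]], dtype=float)

# A local map-algebra operation combines the layers cell by cell,
# producing a new raster layer of the same dimensions.
suitability = 0.7 * elevation + 0.3 * rainfall
print(suitability)
```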
Matrix factorization of a polynomial
https://en.wikipedia.org/wiki/Matrix_factorization_of_a_polynomial
In mathematics, a matrix factorization of a polynomial is a technique for factoring irreducible polynomials with matrices. David Eisenbud proved that every multivariate real-valued polynomial p without linear terms can be written as AB = pI, where A and B are square matrices and I is the identity matrix. Given the polynomial p, the matrices A and B can be found by elementary methods. Example The polynomial x^2 + y^2 is irreducible over R[x,y], but can be written as \begin{bmatrix} x & -y \\ y & x \end{bmatrix} \begin{bmatrix} x & y \\ -y & x \end{bmatrix} = (x^2 + y^2) \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} References External links A Mathematica implementation of an algorithm to matrix-factorize polynomials
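The example factorization is easy to verify symbolically; here is a short sympy sketch (our own check, distinct from the Mathematica implementation the article links to):

```python
from sympy import symbols, Matrix, eye

x, y = symbols('x y')
A = Matrix([[x, -y], [y, x]])
B = Matrix([[x,  y], [-y, x]])

# Check the article's identity: A*B equals (x**2 + y**2) times the identity.
print(A * B)                            # Matrix([[x**2 + y**2, 0], [0, x**2 + y**2]])
print(A * B == (x**2 + y**2) * eye(2))  # True
```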