url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
http://leben-zwischen-leben.de/oil-tank-calculator.html
|
## Oil Tank Calculator
Householders with domestic oil tanks should take the following action to ensure they are safe for use: *Site tanks as far away as possible from drains, streams and ponds. This tank capacity program allows you to calculate tank charts for tanks with flat heads. Select your measurement units. Calculation of Hot oil Expansion Tank. turbulence factor. API Standards API Std 650 - Standard for Welded Tanks for Oil Storage; Tags. These tanks can have different sizes, ranging from 2 to 60 m diameter or more. BBL (US Oil) 0. Fuel Oil Tank Level Calculator Use our oil tank level calculator to quickly convert inches of oil to gallons, based on tank geometry. Gallons-per-day, temperature rise, cost-per-unit, and energy factor values are all adjustable. Length (L) inches (max 86) Brackets And Straps. Top: ft² m². Tank Connection's tank capacity calculators allow you to easily find the ideal dimensions for any storage tank design. Over time, this adds up to the loss of thousands of cubic feet of compressed air that could otherwise have been used to power processes in your facility. Retail Fueling Systems. 2*turbulence factor. Cubic ft x 7. of tank = the desired temperature of your tank for regular operation. com has made it easier than ever to convert between hundreds of common and not-so-common oilfield measurements using our powerful yet simple-to-use. Most storage tanks are designed and built to the American Petroleum Institute API-650 specification. Tank Level Volume Calculator calculates the volume of a tank when given the dimensions of the tank. I used our volume calculator with a diameter of 15. The recommendations contained herein are considered standard industry practice for tanks constructed to NFPA 31, Standard for the Installation of Oil-Burning Equipment in the United States. 4 Btu/hr ft2 oF. Separator gas Sp. The key to pouring a slab that meets code requirements is a thorough understanding of your local building code. 9d and crude oil (RVP 5). Corporate Office: 806 Island Ford Road McGaheysville, VA 22840. Accuracy: For many substances, vapor pressures are only poorly known; expect errors up to a few 10% of the vapor pressure value and up to approx. Accept View. For bulk oil storage tanks, the overall heat transfer coefficients quoted in Table 2. A rectangular tank is a generalized form of a cube, where the sides can have varied lengths. For HFC and HFD fire-resistant fluids the general rule is 5 to 8. The cost for a small project is as low as $275, while larger projects may cost as much as$2,400. Standard oil tank with strong welded lap joints. Conditions that affect the outcome of a job include but are not limited to: application rates, application methods, pavement condition, weather and surface temperature. Inches 1/2 Inches 1/4 Inches 1/8 Inches. Your digital toolbox for cannabis growers -- Canna-Calc. At pressures less than bubble point pressure, the Standing (1947) correlations for the solution gas-oil ratio and the saturated oil formation volume factor are used to calculate the saturated oil density. See Product. With r = w/2 = height of the semicircle ends, we can define 3 general fill position areas. Round custom carrying handles designed for easy transportation. IP 200/04. Description A fuel oil tank is an upright cylinder, buried so that its circular top is 10 feet beneath ground level. Calculate the height on the sight gauge where these marks are required. Corporate Office: 806 Island Ford Road McGaheysville, VA 22840. Now you know how to find the surface area of a rectangular tank. Metric Tons. The table below gives the dimensions and capacities of some standard tank sizes. If the US ever goes Metric, you'll be able to use a Meter to measure Meters - A Meter Meter. A steel vertical 275-gallon oil tank like the one pictured above measures 27″ wide, 44. Internet of Things solutions for telemetry and control, while supporting REST, WebAPI, MQTT, Azure IOT Hubs and more. Then match up the inches with the size of you oil tank that corresponds to the chart below. Indoor fuel oil tanks are generally located in a utility room, basement or garage. Calculate the work required to pump all of the oil to the surface. Householders with domestic oil tanks should take the following action to ensure they are safe for use: *Site tanks as far away as possible from drains, streams and ponds. Bottom: ft² m². Different types of tank have differing shapes and dimension parameters. Pull the stick out of the tank and place it against the end of the tank, if you have a standing tank. 1,000 gallons. toll-free (800) 525-6277. Stanwade Tanks & Petroleum Equipment, Inc. This page will convert vegetable oil from units of weight such as grams and ounces into units of volume such as teaspoons, tablespoons, and cups. Aboveground storage tank (AST) removal runs in the $300 to$1,500 range. Rectangular Tank. Width (W) inches (max 32) Height (H) inches (max 32) Gross Volume usg (max 250) or. 3 – Based on a paint solar absorptance factor of 0. Sponsored Links. Calculate total dike capacity: Total capacity of the concrete dike. Refrigerant Type. Use this essential oil dilution calculator to figure out how many drops of essential oil per mL of carrier oil you need. com) and measure the inches of oil in your tank. Top bdrm has Huge Attic, Oil Tank 6 yrs old, White PVC fencing, New Oil tank, house also has gas. 4 Btu/hr ft2 oF) (1000 ft2) ( (90 oF) - (32 oF)) = 23200 Btu/hr. 21 °C) 15 °F (-9. Find the volume of oval tanks in cubic inches and gallons. A fuel oil tank is an upright cylinder, buried so that its circular top is 10 feet beneath ground level. We have been in business since 1986 and comtinue to provide our customers with the best service in the industry. For a basic project in zip code 47474 with 1 heater, the cost to Install a Tank Heater starts at $1,209 -$1,407 per heater. We hope you enjoy working with us. Heating Months per Year: 5. Oil tank removal costs varied widely by geographic area, midwest oil tank removal $2,500, Western state oil tank removal$725. Horizontal Vertical. Thus, the formula for the total gallons of oil in the tank is given by: Now, we can plug in r = 18, x = 10, and l = 48 to get the number of gallons in the tank. The standard oil tank size comes in two basic shapes: oval and cylindrical. Tank capacity calculator for on oil tank, water tank, etc. 2-Cycle Oil Mix Calculator & Chart Outboard engines, leaf blowers, weed trimmers and other equipment with small engines often have a 2-cycle, also known as 2-stroke, engine. engineer4free. The information generated by the Dalton Coatings Calculators is for estimating purposes only. Accumulators. Aboveground storage tank (AST) removal runs in the $300 to$1,500 range. If you have a 275 Gallon Tank: My tank gauge shows it is. We are really sorry, but this website does not work without JavaScript enabled. $$Radius = {Diameter \over 2}$$. Home > Dry Bulk Storage Capacity Calculator. Tanks and Covers. Increments: 1 inch 0. We offer top customer service, best brands and devices, including FREE Shipping on all order $65+. Click one of the 3 tabs across the top which represents your tank. What We Do. Select the amount of fuel you have: Tank Size: — 275 330 550 1000. If the engine has a single fill port for both engine oil and gas, it is a 2-cycle engine that requires a specific mix of oil and gas to function properly. 95 litres of heating oil. Tank Volume Calculator Oil Tanks BBL (US Oil) ; A tank volume calculator, also known as a tank size calculator, is a quick and easy way to convert the height, width and length of your tank into a volume format. Overall, 10% of revenue went towards mass transit, 27% for local roads, 51% for. Thus, the formula for the total gallons of oil in the tank is given by: Now, we can plug in r = 18, x = 10, and l = 48 to get the number of gallons in the tank. Made of robust material. If more fuel than can fit in the tank is ordered, please refer to the Payments and. Capacity based upon a tank with flat ends (no allowance has been made for dish ends). 12 or 110 volt Piusi pump. 3 – Based on a paint solar absorptance factor of 0. as the data items required for input. When the power goes off the water that is above your tank overflow level, as well as the water that is above the level of your returns (if you do not have check valves or air inlets) will siphon back into the sump. The heat loss from the oil tank can be estimated to: Q = (0. Measurement. T) at the surface. Convert the volume to cubic feet: 5,500 / 7. Because this residential oil storage tank was being installed at a site where more than 1,320 U. ASTM D1250-04. Tank Calculator is categorized as Office Tools. This website uses cookies. 27 W/m2 oC - can be calculated as: Q = (2. Once the vessel is tied up at the terminal, a ship-shore checklist will have to be filled out. Calculate the height on the sight gauge where these marks are required. A Spokane Valley Wa, 99206; Phone: 877-761-1673 Email: sales@morrheat. All fields are required to calculate your storage tank volume. Amount of columns: Notes: Calculate Clear View PDF. Oil Pumps, Internal Wet Sump. Calculate the balance moment of each. The surrounding temperature is 32oF. 6 CBD Oil Usage General Guidelines. After it touches the bottom, remove it slowly and record the number of inches on the stick, then compare it to the heating oil chart. Tank Size Calculator. Oil tank removal costs varied widely by geographic area, midwest oil tank removal$2,500, Western state oil tank removal $725. This version is usable for browsers without Javascript also. As a result, rebuildable atomizers (RBAs) and rebuildable tank atomizers (RTAs) have become increasingly popular as well, allowing frequent users to save money by building and refilling their own PVs. Not many people know the process of oil extraction, through to a tank farm, through to distribution and ending up in your car, business or home. Corporate Office: 806 Island Ford Road McGaheysville, VA 22840. Wax/Oil Pens; Calculators. Oil Tank Removal Cost. How to Calculate Furnace Oil Consumption. Some of our more popular sizes are the 10,000 gallon fuel tank and the 1,000 gallon fuel tank. 1,000 gallons. An expansion tank is a small tank divided in two sections by a rubber diaphragm. Most of our staff continues to work from home but we are open for limited office hours: Daily from 10:00am to 2:00pm or by appointment. engineer4free. Vertical 330 Gal. FREE online storage tank volume calculators. If the tank is filled with water, this calculator also displays the. "Oil Tank Gauge" is an easy to use, free app designed to help crude oil haulers, water haulers, and pumpers who work around tanks in the oilfield and need a way to simplify tank calculations. TrueContain™ Containment Systems. This item: Vertical 275 Gal. splitting the gas cost). Distillate (Heating Oil and Diesel) - 5. Make sure you’re calculating the correct volume for your tank by selecting one of the below options. It is important to tighten securely. 550 gallons. 825 million Btus per barrel. Servers as a liquid volume calculator with output in US gallons, UK gallons, BBL (US Oil), and litres. Hydrogen Sulfide Scavenger. Retail Fueling Systems. State and Local Government Revenues and Racial Disparities. Use a tank stick (can be purchased from a local plumbing supply or Amazon. examples:. Volume of a Square or Rectangular Tank or Clarifier. The trick here is to determine U. This is used to calculate the heat transfer through the walls of the tank from the oil into the ambient (or out of the ambient into the oil). Residential Oil Tanks. Vertical Tank Chart - O'Day Equipment. Above ground oil tank removal costs vary, but usually range from$408 to $1,001 with an average of$696. Step One: Measure the Tank. The new AP-42 method corrects errors for more accurate estimates. Sponsored Links. Please use periods (. 1 Calculation of Buoyancy Forces 3. 4 or 5-star ANCAP safety rating. › Oil & Gas Posted by Dinesh on 04-09-2019T17:13 Use this simple drilling calculator to calculate STOIIP - stock tank oil initially in place using area, reservoir thickness, rock porosity, connate water saturation, oil formation volume factor values. PLUMBERS EDGE Oil Tank Floor Leg Set. FREE online storage tank volume calculators. Please enter data for Tank Radius (radius is 1/2 of diameter) and side water depth. 09 °C) 35 °F (1. Buried oil tank: (underground) - Insert the tank stick into the fill pipe where the delivery driver fills your oil tank. Newberry Tanks: Intentional Innovation Where Quality and Delivery are a Given. Matches' Tank cost - API, horizontal, vertical, cone roof, bottom, flat bottom, round end. This worksheet can be used to calculate the secondary containment volume of a rectangular or square dike or berm for a single vertical cylindrical tank. Petroleum Equipment: Stanwade offers a full line of petroleum equipment, including total tank packages. Select type of tank: vertical horizontal or rectangular. 27 inches would give 180 barrels. Stock-tank GOR=147 scf/STB (from Valko and McCain, 2003 correlation) Total GOR=747 scf/STB. An underground storage tank (UST) removal costs $1,000 to$3,500. Measure the width, height, and depth of your oil tank. By entering dimension criteria for a specific type of tank the Pipe Flow Advisor software is able to perform the calculations to determine weight, capacity and fluid volume for a given level of liquid. Example 1: You've got 1 US gallon of gas, and your engine needs a gas/oil mix of 40:1. We offer top customer service, best brands and devices, including FREE Shipping on all order $65+. Installation. We have been in business since 1986 and comtinue to provide our customers with the best service in the industry. If you need a handy calculation tool to find the surface area of a rectangular. If both tanks on E are at 20 gallons left, then the fill tank will get full to 250 gallons while the other tank is going to stay very near 20 gallons during fill time. GALLONS OF OIL IN TANK. For a cylindrical tank with the base radius r = 4 cm and the height h = 7 cm, we can calculate its surface area like so: A = 2πr (r + h) A = 2π * 4 * (4 + 7) = 276. All of these conditions should all be taken into account when calculating mix design. 33 x 4) = 9. 801} \normalsize = 64. Example - Heat Loss from an Exposed Insulated Oil Tank. is designed to provide the petroleum industry with welded steel tanks for use in the storage of petroleum products and other liquid products commonly handled and stored by the various branches of the petroleum industry. This item: Vertical 275 Gal. Size and Capacity: This tank is approximately 3 1/2 feet tall/wide by 16 feet long and will hold 800 gallons when filled to 80% capacity. 2 relates heat loss from a water surface to air velocity and surface temperature. The cost for a small project is as low as$275, while larger projects may cost as much as $2,400. Hydrogen Sulfide Scavenger. 5 feet and a height of 1 inch and got a volume of 117. At pressures less than bubble point pressure, the Standing (1947) correlations for the solution gas-oil ratio and the saturated oil formation volume factor are used to calculate the saturated oil density. Paraffin & Asphaltene Treatment. In this post, I would like to write about how to do tank sounding for ballast water and fuel oil. The tank contents height is the distance from the top fo the tank contents down to the bottom of the tank or the sensor probe location, such as a hydrostatic. 175, or about$910-$1,000 to refill a 275-gallon tank and$1,850-$2,100 for a 550-gallon tank; and 10 years ago, from October 2003 to March 2004, the average was$1. A 5,000 gallon tanker has a width of approximately 10 feet. Piping layout for Tank farm. Example: 16" of Oil in a 300 Gallon Tank is about 119 Gallons in the Tank. Fuel Storage Tanks. Question: How many barrels of oil (42 gallons of oil in one oil tank) would there be in an oil tank that was 15 feet tall and 10 feet across? Thanks. Step One: Measure the Tank. This water pressure formula can be applied to all liquids. Thats about 106 gallons total. The approximate amount of fuel you have will be displayed in fractions. Crude Oil Settling Tanks48 Reference Material51. Vaping Glossary: A - Z of Vaping Terms; Mods, Pods & Tanks; DIY E-Liquid. Example: 16" of Oil in a 300 Gallon Tank is about 119 Gallons in the Tank. GALLONS OF OIL IN TANK. With our 'New Thinking' expedited delivery options and a 90-year proven track record for quality, Newberry has what you need, when you need it. Degassing is the forced removal of vapors from a tank in preparation for or during cleaning, typically done to tanks storing gasoline or crude oil. As a tank is depleted, the lid sinks, and the sun casts shadows on the inside of the tank changes. Crude Oil Settling Tanks48 Reference Material51. Calculate separator and stock-tank shrinkage factors. Removing the old heating oil tank can run $500-$3,000, depending on local rates and the size of the tank, its condition and how easily it can be reached. Consider the ground water table (G. Calculator Use. API 650 is widely used for tanks that are designed to internal pressures of 2. 5 times largest tank) 650 barrels. 2*turbulence factor. 5 mm) outside diameter tubing. Actual water and oil tanks may not be perfect geometric shapes or might have other features not accounted for. 2 relates heat loss from a water surface to air velocity and surface temperature. Wax/Oil Pens; Calculators. Standard oil tank with strong welded lap joints. English barrel (bbl) or Stock Tank Barrel (STB) 1 bbl = 0. This calculator provides a mathematical estimate of the filled and working capacity of liquid tanks such as oil and water tanks. Nominal output. For example, as per P&ID and plot plan, total piping hold up volume and equipment hold up volume = 8. Truck Tank Capacity Calculators FMCSA Tank Regulations Aluminum Fuel Tanks Buying a Diesel Fuel Tank Diesel Tanks D-Tank Hydraulic System Design and Operation Hydraulic Reservoir Guide Hydraulic Reservoir Tank Industry Connections. 5 ounces of two-cycle oil, while preparing a 40:1 blend calls for 3 ounces of oil. Residential heating oil tank sizes range from 220 gallons to 1,000 gallons, but the average size used in homes is 275 gallons. Servers as a liquid volume calculator with output in US gallons, UK gallons, BBL (US Oil), and litres. If more fuel than can fit in the tank is ordered, please refer to the Payments and. The total volume of a partially-filled spherical tank equals total sphere volume minus spherical cap volume. The main tank is oem and the return line was improperly positioned in relation to the fill port and oil expansions needs. 175, or about $910-$1,000 to refill a 275-gallon tank and $1,850-$2,100 for a 550-gallon tank; and 10 years ago, from October 2003 to March 2004, the average was $1. engineer4free. New Oak staircase, Crawl w/dehumidifier, Hi Hats thru-out, Nest thermostats, EV charging station. The tank size calculator on this page is designed for measuring the capacity of a variety of fuel tanks. Corporate Office: 806 Island Ford Road McGaheysville, VA 22840. FREE online rectangular storage tank volume calculator. Conditions that affect the outcome of a job include but are not limited to: application rates, application methods, pavement condition, weather and surface temperature. This calculator determines a scuba diver's SAC and RMV rate based on the average depth, bottom time, gas used, and tank size. Smaller houses with more appliances may also. Cylindrical tank Rectangular tank Enter cylindrical tank dimensions below: Calculation results are approximate. Crude Oil Settling Tanks48 Reference Material51. Piping layout for Tank farm. This unique calculator can be used for heating-as well as for cooling applications. By navigating this website you consent to cookies being stored on your computer. 99lbs for a total tank weight of 34. Note, the app has advanced functions and another usage could be a vertical. Steel UL Above and Underground Storage Tanks. Check your tank gauge. There are several volumetric methods used to calculate OOIP, and there has been work to standardise the calculation worldwide. Solar Panels are leased. A continual desire for more cost efficient and environmentally friendly hydraulic systems requires continuous development. HBE tank heaters are used for preheating hydraulic oil and maintaining the operating temperature on hydraulic units. Our pit-sizing calculator allows builders to construct their own designs using SmokerBuilder's proven design principles. The standard oil tank size comes in two basic shapes: oval and cylindrical. | Tank Chart Calculator. Made of robust material. Abandoning Underground Storage Tanks. Tank Length (ft). Whether you’re looking to find refueling tips and the equipment you’ll need to maintain your gas tanks, or you’re looking for regulations on home fuel tanks, this site will help you. maintenance temperature. All expansion/ breathing, & fill port are located on aux tank. 21 °C) 15 °F (-9. Calculate horizontal, vertical and rectangular tank capacities and dip levels. To calculate fill volume of a vertical oval tank it is best if we assume it is 2 halves of a cylinder separated by a rectangular tank. Furthermore, the app can convert API to density @15 and vice versa. API Standards API Std 650 - Standard for Welded Tanks for Oil Storage; Tags. I'm converting to gas. If you need a calculator that will accept other input units, then click here. When oil enters the tank, the rooftop rises. Note: To allow for fuel expansion and safe deliveries, an oil tank holds slightly less fuel than its official full capacity. In the past, underground oil tanks were common, but because of the contamination caused by leaky tanks, it's difficult to get a mortgage for a house with an underground tank. Click Calculate. Commercial. If you know the liquid height you can also work out how much is needed to top it up to full. Step One: Measure the Tank. An SPCC plan as required under these regulations should be used as a reference for proper oil storage, as a tool for spill prevention, as a guide for facility inspections and tank testing, and as a resource during emergency response to control, contain and clean up an oil release. D = Normal tank diameter , in feet = H = depth of tank , in feet = G = design Specific gravity of liquid = Tank is unanchored, use equations pertaining to unanchored tanks, Project No. (A cold plant in winter in the mezzanine is about 45 F to 60 F. The approximate amount of fuel you have will be displayed in fractions. Once the vessel is tied up at the terminal, a ship-shore checklist will have to be filled out. Centimetres Inches. KW Liquid Tank Heating: Calculation Form To determine kW required to raise the temperature of any Liquid, complete the fields below. Envirosafe manufactures gasoline, oil and diesel fuel storage tanks in a variety of sizes, including 1,000 gallon, 5,000 gallon, 10,000 gallon, 15,000 gallon, and 20,000 gallon as well as custom sizes. Dealing with oil storage solutions is a complex and potentially dangerous job, which is why you should leave it to our oil tank specialists. Solar Panels are leased. Motherwell Tank Protection, MTP, design and manufacture pressure vacuum relief valves, free vents, gauge hatch and level gauges. This worksheet can be used to calculate the secondary containment volume of a rectangular or square dike or berm for a single vertical cylindrical tank. com) and measure the inches of oil in your tank. 2-Cycle Oil Mix Calculator & Chart Outboard engines, leaf blowers, weed trimmers and other equipment with small engines often have a 2-cycle, also known as 2-stroke, engine. On a simple design, work out the volume internally, then calculate the interval required. Our range of Hydraulic oil tanks come in three styles, v bottom(23,45,68,90,136 litre), round chassis mount(40, 60, 80,100,120, 125, 170, 200, 225 litre) and our "spacesaver" range for back of cab application(200 litre). This is an important factor for recording the volume of oil either delivered to, or used from the storage tank, or to monitor system consumption and top-up volumes. Tank Size Calculator. Select your Fuel:Oil ratio. That means, when the liquid that will be stored has a higher specific gravity than the current stored product. Simply note down the length and diameter of your current fuel tank in millimeters or inches. 63 US gallons. FREE online storage tank volume calculators. Choose from a variety of 2D and 3D model formats available for compatability with Autodesk Programs, Revit, SolidWorks, and others. 9 mm) outside diameter tubing. Calculate the work required to pump all of the oil to the surface. Before beginning this project, check with your local building codes department to get a copy of the code. The tank footprint is equal to ∏D2/4, where D is the tank diameter. The best prices on vape products, eJuices, vape juice and premium eLiquids. When oil enters the tank, the rooftop rises. Gross capacities can range from 100 bbl to over 1. 12 or 110 volt Piusi pump. With this in mind, use the gauge on your tank and the chart below to estimate how many gallons of heating oil are in your tank. Capacity based upon a tank with flat ends (no allowance has been made for dish ends). Contact us at 1-800-444-3218. Temperatures. Capacity based upon a tank with flat ends (no allowance has been made for dish ends). 330 gallon tank also available. Enter the Height in mm. Calculate FVF of oil. Electrostatic powder-coated paint. 32 litres of heating oil. The capacity calculators allow you to accurately forecast the storage and processing ability of your storage tank using basic size assessments. 67 °C) 25 °F (-3. This coils maintain operating temperature of Tanks when we have heat loss. Total the balance moments. Secure fill cap back on the tank. $$Volume = \pi × {5 ft \over 2}^2 × 7\,yd = 412. 3 from our software library for free. The cost for a small project is as low as 275, while larger projects may cost as much as 2,400. Calculation of Hot oil Expansion Tank. Pressure Drop Online-Calculator for small mobiles. 675 gallons. Most Commom Above Ground Oil Storage Tank (AST)Sizes in Westchester and Putnam County. Total the weights. Supplying industries which include Oil & Gas, Petrochemical, food, water and Biogas. D = Normal tank diameter , in feet = H = depth of tank , in feet = G = design Specific gravity of liquid = Tank is unanchored, use equations pertaining to unanchored tanks, Project No. splitting the gas cost). Simply measure the number of inches of fuel in your tank and use the tank chart to determine how many gallons of fuel are in your size tank. As there is a low chance of oil spillage a Bypass Interceptor would be most appropriate. Separator gas Sp. 32 litres of heating oil. For round tanks find the diameter and length or height. This factor is used to express the expansion of the gas when brought to surface. Online VAPE SHOP offers top quality Vape Hardware, Kits, Tanks, MODS, Coils, Pods, E-Juices, Nicotine Salts and Accessories. Tank Volume & Weight Calculator. The capacity calculators allow you to accurately forecast the storage and processing ability of your storage tank using basic size assessments. Note: Calculations are possible only, if Javascript is activated in your browser. Then match up the inches with the size of you oil tank that corresponds to the chart below. Compare storage tank designs, configurations and metrics easily with the calculators and see which construction gives you optimal capacity for your needs. 5) Width of channel (in feet) (must be 6 to 20) vh/vt. The tank contents height is calculated by subtracting the depth to tank contents from the tank height, and will appear once the required height units are selected directly underneath. Oil Tank Chart. Calculate the work required to pump all of the oil to the surface. In 2019, five states (CT, MD, IL, NY, PA) and DC spent over 25% of their state motor fuel tax revenue on mass transit. 2) When you have located the temperature value, follow the row across. oil drilling calculators, applets & spreadsheets (xls) Petroleum Engineering Dictionaries & Glossaries OILFIELD GLOSSARY - Schlumberger Limited Multimedia Oilfield Glossary (Text & Images). Tank Size Calculator. At pressures less than bubble point pressure, the Standing (1947) correlations for the solution gas-oil ratio and the saturated oil formation volume factor are used to calculate the saturated oil density. The cylindrical tank calculator would then perform the following operations to calculate the size of the cylindrical oil tank:$$Volume = \pi × Radius^2 × Length$$. If your tank has an irregular shape, this calculator will not be able to find its surface area. Depending on the size and style of tank, they are either finished in black powdercoated steel or polished aluminium. "Oil Tank Gauge" is an easy to use, free app designed to help crude oil haulers, water haulers, and pumpers who work around tanks in the oilfield and need a way to simplify tank calculations. Dry Bulk - Tank Capacity Calculator for figuring the tank size and tank volume of dry bulk storage tanks including: skirted tanks with door access, drive through skirted silo, elevated tanks with leg support and elevated tanks with structure support. These ideals are easy enough to achieve on a stationary hydraulic machine in an industrial setting. Please keep this in mind when placing your order. Calculate horizontal, vertical and rectangular tank capacities and dip levels. Enter the Length in FEET:. Whether you are looking for a 300 gallon or a 13,000 gallon tank, we can make it custom for your job. This is an important factor for recording the volume of oil either delivered to, or used from the storage tank, or to monitor system consumption and top-up volumes. tank capacity), liquid volume, and the volume of the liquid currently in the tank. Commercial. Required Barrels of Containment Needed. It works both ways. Metric Tons per Year. For a cylindrical tank with the base radius r = 4 cm and the height h = 7 cm, we can calculate its surface area like so: A = 2πr (r + h) A = 2π * 4 * (4 + 7) = 276. Calculate separator and stock-tank shrinkage factors. Tank Heel means all crude oil, feedstock, raw materials, blendstocks and refined and intermediate petroleum products (including, in process and finished products) located in a tank below the level of the pump. 9d and crude oil (RVP 5). • Determining the size of the oil tank and the amount of oil in the tank is relatively easy. Notes: 1,000-gallon tanks come in either above ground or underground versions. I want to calculate the tank volume in cubic feet and work out how much oil will fit in the cylinder in US gallons. HBE tank heaters are used for preheating hydraulic oil and maintaining the operating temperature on hydraulic units. Tank Size Calculator. Assuming we want a standard berm wall height of 1 foot, calculate the required area: 735. The general rule of thumb for a horizontal smoker is that the firebox should be 1/3 the size (volume) of the cooking chamber. : T-400 & T-405 Cone Location : Unit : Design Engineer : Manufacturer : Model : Mfr Ref. Smaller houses with more appliances may also. Calculator Use. Calculate your tanks total capacity and dip level volume. I never did the math for an exact number it has a main tank of at least 5 gallons plus an aux. Oiling System Equipment & Filters. (based on 1. Some homes simply don't have the space for large heating oil tanks. 6-Year Savings. Factors that can affect oil tank removal cost include local rates, local regulations, the size and condition of the tank, and the ground conditions for buried tanks. They can tell you within a gallon or so, how much oil you have in your tank. Different types of tank have differing shapes and dimension parameters. We make it easy and affordable to switch your home heating to propane. Find out which oil is recommended for your car!. Oil weighs 50lb/ft^3. i will try comm tank to get an estimate. Improved packaging for increased product finish during freight and shelf storage life. exe or tcalc. Tank is empty in the BP; If you want this to house an extractor, place a foundation on the Oil node you wish to use at a height that allows you to build an extractor on top of it. All of these conditions should all be taken into account when calculating mix design. , average oil tan removal cost was$1,403. Just make sure you are using the proper dimensions/dimension and appropriate sizes/size. He is curious whether his heated water cools faster than when in a bathtub, and needs to calculate the surface area of his cylindrical tank of height 5. A simple online tool to determine pipeline, well-bore, tubing volumes in gallons and bbls (barrels of oil = 42 gallons). examples:. See Product. For a conventional reservoir used in open circuits the general rule is a tank oil capacity of 3 to 5 times the flow of the pump (s) per minute plus a 10 percent air cushion. Removing an oil, fuel or water tank costs $1,138 on average and typically ranges between$532 and $1,804. 6-Year Savings. Tank Size Calculator. HORIZONTAL CYLINDRICAL TANK MODELS. Householders with domestic oil tanks should take the following action to ensure they are safe for use: *Site tanks as far away as possible from drains, streams and ponds. By navigating this website you consent to cookies being stored on your computer. Calculate the work required to pump all of the oil to the surface. Click Calculate. Call 1-855-449-0464. For rectangular or cubes, find the length, width, and height. Then match up the inches with the size of you oil tank that corresponds to the chart below. Removing an oil tank requires the issuance of a permit. Oil tank removal cost is by far the number one subject of discussion New Jersey property owners want to have with us, and with good reason. Consult MBA if your tank operates above 180 F or if the difference between your operating temperature and the temperature of your supply fluid or steam in your coil will be less than 15 F. Stanwade Tanks & Petroleum Equipment, Inc. Currently the only thing "left" to get "right" its the oil/energy consumption and add L/H Walker class units. If the US ever goes Metric, you'll be able to use a Meter to measure Meters - A Meter Meter. 275H (flat) 275V (upright) 500. 27 W/ (m2oC)) (1000 m2) ( (38 oC) - (0 oC)) = 86260 W. • Determining the size of the oil tank and the amount of oil in the tank is relatively easy. This will reduce your consumption while you are waiting for your oil tank to be filled. Oval Tanks. An SPCC plan as required under these regulations should be used as a reference for proper oil storage, as a tool for spill prevention, as a guide for facility inspections and tank testing, and as a resource during emergency response to control, contain and clean up an oil release. Tank Heel means all crude oil, feedstock, raw materials, blendstocks and refined and intermediate petroleum products (including, in process and finished products) located in a tank below the level of the pump. Reservoir temperature=217°F. Pipe Weight Formula - This formula can be used to determine the weight per foot for any size of pipe with any wall thickness. volume = Pi * major * minor * length / 4. HORIZONTAL CYLINDRICAL TANK MODELS. FREE online storage tank volume calculators. CALCULATE TANK CAPACITY. Frances, I assume you mean a cylindrical tank. Use our oil tank chart to determine how much oil you have in your tank and how you can get the best price on home heating oil. Enter your tank’s length, width etc. Tank designs vary between manufacturers and this calculator can only provide an approximate indication. Tank design codes reflect the culmination of decades of work by many dedicated individuals. Oil Tank Removal Cost. The leak rate calculation has built-in specific gravity values for water, seawater, diesel fuel, SAE 30 oil, and gasoline. Liquid Height. Oil Tank Level Monitor. examples:. We're proud to present to you the first Shadow Empire Tank Design Calculator. Access to 4,900 tank terminals world wide A market intelligence database Find tank capacity Get free access. Variables to consider when using an oil usage calculator: Although our heating oil consumption calculator is a great tool, do remember it is a rough guide only. Best Oil, Fuel & Water Tank Installers near you. See full list on sensorsone. Use this calculator to calculate physical values related to the shape of a stadium. Stanwade Tanks & Petroleum Equipment, Inc. Step Two: Find the Tank Volume Formula. • Determining the size of the oil tank and the amount of oil in the tank is relatively easy. Note 3: 1991 oil tank purchase costs. Boi = oil formation volume factor. exe or tcalc. Oil weights 50 lb/ft^3. Oil companies are just dishonest. Past Articles. Amount of columns: Notes: Calculate Clear View PDF. Consult MBA if your tank operates above 180 F or if the difference between your operating temperature and the temperature of your supply fluid or steam in your coil will be less than 15 F. Compressed air is released during the venting. 5 MMbbl in a single storage tank. Commonly found in Oil refineries. ERC Environmental is here to provide another cost option. Immersion Heater Wattage Calculator. The main tank is oem and the return line was improperly positioned in relation to the fill port and oil expansions needs. Crude Oil Settling Tanks48 Reference Material51. Areas we are continuously seeking to improve performance include cooling capacity, noise level, pressure drop and fatigue. 1417 x Radius² x. Check out http://www. Powered by more than 63 years of experience on the petroleum storage tanks market, Granby Storage Tank’s products are conceived bearing homeowners’ specific needs in mind. Required Barrels of Containment Needed. Step One: Measure the Tank. A diver's air consumption rate can be expressed in terms of either pressure (SAC = PSI/bar per minute) or volume (RMV = cubic feet/liters per minute). Cubic ft x 7. This calculator is only intented to give you a rough estimate. Previous Next. This method was designed for gases dissolved in crude oils. In 2019, five states (CT, MD, IL, NY, PA) and DC spent over 25% of their state motor fuel tax revenue on mass transit. Manufacturing 4900 International Fuel Tanks Since 1995; May 2019. Oil tank removal cost is by far the number one subject of discussion New Jersey property owners want to have with us, and with good reason. (Calculations in either calculator will also supply the appropriate converted values for the other. Amount of columns: Notes: Calculate Clear View PDF. Calling the amount of heat added Q, which will cause a change in temperature ∆T to a weight of substance W, at a specific heat of material Cp, then Q =w • Cp • ∆T. 120 or 240 Vac. Anchorage, AK 1-800-770-8265 • Fairbanks, AK 1-800-770-1711 • Lakewood, WA 1-800-725-8108. Sign up for our newsletter and receive helpful water saving tips and be the first to know about upcoming sales. How To Size A Hydraulic Tank. Piping layout for Tank farm. We are the only storage tank manufacturer worldwide that designs, fabricates and installs all four major types of steel storage tanks. Call 1-855-449-0464. 5 ounces of two-cycle oil, while preparing a 40:1 blend calls for 3 ounces of oil. Fuel Storage Tanks. API Std 650 Tank Calculator. Tank calculator. DEQ has developed a tool for estimating emissions from working and breathing losses from volatile organic liquid storage tanks. if you get 8 CFM at 120 psi, you will get 4 CFM at 60 PSI. A 5,000 gallon tanker has a width of approximately 10 feet. 4 Btu/hr ft2 oF) (1000 ft2) ( (90 oF) - (32 oF)) = 23200 Btu/hr. Ultrasonic sensor uses sound to detect the oil level, and will survive some oil being splashed up on it. Online software is available for calculating the optimal values. Subtle changes in the amount of oil can reveal unnoticed leaks or indicate the end of the tank. Immersion Heater Wattage Calculator. Good planning is the hallmark of efficient tanker operations. Enter the Height in mm. Bioheat is heating oil and can be used in your oil tank without any modifications to your tank or furnace. Crude Oil Settling Tanks48 Reference Material51. I live on Long Island, NY and starting construction for an addition. Reservoir temperature=217°F. Where, A = area. Total the balance moments. 1 CBD Oil Dosage Calculator – Find Your Right Dose Today; 2 CBD Chart. 2-19 FIGURES Figure 3. Oil & gas companies strive to meet business and performance goals. Separator gas Sp. Oil Tank Level Monitor. Nitrogen quantity units converter. Top bdrm has Huge Attic, Oil Tank 6 yrs old, White PVC fencing, New Oil tank, house also has gas. Corporate Office: 806 Island Ford Road McGaheysville, VA 22840. Envirosafe manufactures gasoline, oil and diesel fuel storage tanks in a variety of sizes, including 1,000 gallon, 5,000 gallon, 10,000 gallon, 15,000 gallon, and 20,000 gallon as well as custom sizes. MORE Evaporation/Subl. 2*turbulence factor. Enter the Height of the Cylinder in mm. 4 Fuel Lines, Gas Meters, Control Panels 3. 3 from our software library for free. 27 W/m2 oC - can be calculated as: Q = (2. If you need a handy calculation tool to find the surface area of a rectangular. They know the weather and they can calculate how much oil you are going to burn today. Tank Connection's tank capacity calculators allow you to easily find the ideal dimensions for any storage tank design. The tank has a radius of 8 feet and is 24 feet high, although the current oil level is only 16 feet deep. A rectangular tank is a generalized form of a cube, where the sides can have varied lengths. 5 Conclusion 3. 5 Foundation Radius round storage tank; Space inside to fit an Oil extractor and Fluid Buffers if you wish to do so. Tank Connection's tank capacity calculators allow you to easily find the ideal dimensions for any storage tank design. 27 W/m2 oC - can be calculated as: Q = (2. Use our oil tank chart to determine how much oil you have in your tank and how you can get the best price on home heating oil. The heating oil tank gauge is a clear glass or plastic cube marked with numbers that look a lot like what you'd find on the gas gauge of your car: F, ¾, ½, ¼. GALLONS OF OIL IN TANK. Pipe Weight per Foot Calculator. Our range of Hydraulic oil tanks come in three styles, v bottom(23,45,68,90,136 litre), round chassis mount(40, 60, 80,100,120, 125, 170, 200, 225 litre) and our "spacesaver" range for back of cab application(200 litre). x 25'7" shell length level gallons level gallons level gallons level gallons level gallons level gallons. Tank Size Calculator. All you need to do is follow the 4 steps below: 1. Internet of Things solutions for telemetry and control, while supporting REST, WebAPI, MQTT, Azure IOT Hubs and more. After calculating your estimated volume, you will also have an opportunity to request a quote on your new tank. Constantly changing market dynamics and operational challenges can make it difficult. Your one stop online shop for over 2000 eJuice brands and vape products. This can be used to convert between British and American recipes. Using this chart correctly will give you the approximate gallons in your heating oil tank. Transmission Pans. The second form calculates the minimum pipe size to limit pressure loss to a specified value. Access to 4,900 tank terminals world wide A market intelligence database Find tank capacity Get free access. Dry Bulk - Tank Capacity Calculator for figuring the tank size and tank volume of dry bulk storage tanks including: skirted tanks with door access, drive through skirted silo, elevated tanks with leg support and elevated tanks with structure support. 1 Types of CBD Oil Dosage Calculators. (based on 1. π x Radius2 x Length (3. 412 Btu = 1 Wh is introduced yielding: Equation 1. Tank Structure *. fax (866) 583-3035. Swc = connate water saturation. 32 litres of heating oil. For hazardous liquid storage and petroleum, add another$500 to $1,800 for soil testing and$500 to \$10,000+ for remediation. It is a vessel specific calculator designed for quantity calculation of tanks. Would you like a propane vs oil calculator (spreadsheet)? If so check out our Oil Vs Propane Fuel Calculator. The main experience and practice include: 1. Note: This tool is useful for any cylinder including cylindrical tanks.
|
2021-09-19 19:17:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.373454213142395, "perplexity": 4320.881775434687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056900.32/warc/CC-MAIN-20210919190128-20210919220128-00213.warc.gz"}
|
https://www.uplift-modeling.com/en/latest/api/metrics/uplift_auc_score.html
|
sklift.metrics.uplift_auc_score¶
sklift.metrics.metrics.uplift_auc_score(y_true, uplift, treatment)[source]
Compute normalized Area Under the Uplift Curve from prediction scores.
By computing the area under the Uplift curve, the curve information is summarized in one number. For binary outcomes the ratio of the actual uplift gains curve above the diagonal to that of the optimum Uplift Curve.
Parameters: y_true (1d array-like) – Correct (true) target values. uplift (1d array-like) – Predicted uplift, as returned by a model. treatment (1d array-like) – Treatment labels. Area Under the Uplift Curve. float
uplift_curve(): Compute Uplift curve.
perfect_uplift_curve(): Compute the perfect (optimum) Uplift curve.
plot_uplift_curve(): Plot Uplift curves from predictions.
qini_auc_score(): Compute normalized Area Under the Qini Curve from prediction scores.
|
2020-11-26 07:12:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.890647292137146, "perplexity": 14440.375615641746}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141186761.30/warc/CC-MAIN-20201126055652-20201126085652-00682.warc.gz"}
|
https://sagawisdom.com/courses/type-well-curve-fundamentals/chapters/chapter-1-type-well-curves/lessons/1-01-introduction-to-type-well-curves
|
### Lesson 1.01 Introduction to Type-Well Curves
View Course Contents
Hide Course Contents
##### 01. Type-well Curve Fundamentals Lesson 1.01: Introduction to Type-well Curves
Hi everybody, my name's Bertrand Groulx and welcome to type-well curve fundamentals.
##### 02. Why Are Type-well Curves Important?
Now, why are type-well curves important? Well, they're used to Inform production forecasts, economic evaluations. We use them to scope out plays. We use them to build understanding of production influencing factors like parameters or subsurface parameters. But most importantly, we should be focusing on using type-well curves as a vehicle to reduce risk and uncertainty and to justify and support our multi-million dollar decisions.
##### 03. What Happens If Type-well Curves Are Wrong?
Now, what happens when a type-well curve is wrong? Well typically, type of curves fall short. It's rare that we exceed our type-well curves. But when we do, it can negatively impact cash flow. So that's going to affect our planning. It's going to affect what we plan to invest that money in operations, things of the sort. It could mean reserve write downs, which reduces corporate value and it very often leads to negative market reactions.
##### 04. Market Reaction: Stock Price Impact
And if you don't believe me, it took me all of five minutes to find eight companies where investors did not react well to production guidance shortfalls. And each of these companies with the area circled in red, are when announcements were made that they fell short of production shortfalls. We're talking 50% of corporate value, 70%, 40%. These are really significant things that you need to be concerned about and why you need to have care and attention in the development of your type-well curves.
##### 05. Disclaimer and Objectives
So a little disclaimer and a review of our objectives. Now, the content of this presentation is intended to illustrate the complexities associated with type-well curve development. We're using monthly vendor public production data, whether it be Canada or the US, and we're demonstrating analytic techniques that may provide insights when developing type-well curves. Now, these are not the be all do all to end all, these are complementary. They're informative for workflows involving scientific modeling, forecasting tools and economic evaluation tools. The relevance of each topic really depends on what you're trying to accomplish. So we're really equipping you with a toolkit today.
##### 06. Clarification: Type-well Curve vs Type Curve
So let's get some clarity on terminology now. Type-well curves are often referred to as type curves. They are different. I am personally and to use them interchangeably throughout the course of this. So anytime I say type curves, I mean type-well curves, but type curves more properly refer to idealize production plots based on equations and or numerical simulation to which actual production results are compared. Now type-well curves actually use real production data and represent an average production profile for a collection of wells for a specified duration.
##### 07. Why are Type-well Curves Important?
Now, why are type-well curves important? Again, they're the foundation of evaluations. They're used for development planning, production performance comparisons. We use them to actually figure out what designs are optimal. So completion optimization analysis.
Now, the dangers of not understanding these complexities and failing to communicate how your Type-well curve was designed or developed can result in large statistical variability, inconsistent information used in development decisions and unattainable economic plans, especially in unforgiving times of low commodity prices.
##### 08. 6 Different Approaches
So why are they important? Because if you don't do them right, you can get very different answers. Here's an example of six different approaches I found at one company, and they're all using the same data from 85 wells. Put that into another perspective. And first of all let's orient you on this chart. This is showing your MCF per day per well. So that's an average profile. And they are time normalized in a variety of fashions. If we look at this from a cumulative revenue versus time perspective, you can see that they vary from $1.5 million dollars to$2 million dollars, that's a \$500 thousand dollar difference. Now, if I was a decision maker having a 33% difference in my type-well curves in the first year would make me very concerned. So that's why I want to know as a decision maker how the type of curve was developed.
##### 09. Presentation Outline
So today, we're gonna go through all of the different chart types that not only are used to present type-well curves, but also support the development of the type curves. We're gonna look at analog selection, which is by far the most important consideration in the process. We're gonna look at a variety of forms of normalization. Look at idealized type-well curves, comparing producing day versus calendar day. Condensing time. We need to consider operational and downtime factors on any idealized type curves. We're gonna look at the concept of survivor bias. We're gonna consider truncation, using sample size cut off so that we can forecast the average. Or we can use auto forecast tools to average the forecasts. Different vehicles for representing the uncertainty associated with a type-well curve. And then we're gonna look at some of the complementary aspects of using auto forecast tools and some of the analysis that they have for us in the development of type-well curves.
All right, with our agenda laid out, let's jump into chart types and orient ourselves on the different kinds of visualizations we're gonna use during the development of type-well curves.
### Bertrand Groulx
• Create a learning schedule
• Stay on track with regular email updates
• Share your learning commitment with management and peers
Next lesson will start in 5 seconds ... Cancel
Type-Well Curve Fundamentals Course
Chapter 1 - Type-Well Curves (1:10:12)
Chapter 2 - Dealing with Uncertainty & Scaling for Design (24:02)
Chapter 3 - PCD, IOD & Case Studies (1:06:39)
SAGA Wisdom
|
2021-10-24 09:12:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2135941982269287, "perplexity": 2252.3811557225995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585916.29/warc/CC-MAIN-20211024081003-20211024111003-00644.warc.gz"}
|
https://leetcode.ca/2017-11-29-730-Count-Different-Palindromic-Subsequences/
|
Formatted question description: https://leetcode.ca/all/730.html
# 730. Count Different Palindromic Subsequences (Hard)
Given a string S, find the number of different non-empty palindromic subsequences in S, and return that number modulo 10^9 + 7.
A subsequence of a string S is obtained by deleting 0 or more characters from S.
A sequence is palindromic if it is equal to the sequence reversed.
Two sequences A_1, A_2, ... and B_1, B_2, ... are different if there is some i for which A_i != B_i.
Example 1:
Input:
S = 'bccb'
Output: 6
Explanation:
The 6 different non-empty palindromic subsequences are 'b', 'c', 'bb', 'cc', 'bcb', 'bccb'.
Note that 'bcb' is counted only once, even though it occurs twice.
Example 2:
Input:
Output: 104860361
Explanation:
There are 3104860382 different non-empty palindromic subsequences, which is 104860361 modulo 10^9 + 7.
Note:
• The length of S will be in the range [1, 1000].
• Each character S[i] will be in the set {'a', 'b', 'c', 'd'}.
• Related Topics:
String, Dynamic Programming
Similar Questions:
## Solution 1. DP
First consider the case where we count duplicates as well.
Let dp[i][j] be the number of palindromic subsequences in S[i..j].
dp[i][j] = 0 if i > j
dp[i][i] = 1
If S[i] != S[j]:
dp[i][j] = dp[i + 1][j] + dp[i][j - 1] - dp[i + 1][j - 1]
We need to - dp[i + 1][j - 1] because the palindromic subsequences are counted twice already in dp[i + 1][j] and dp[i][j - 1].
If S[i] == S[j], then there are additional dp[i + 1][j - 1] + 1 cases where are the palindromic subsequences in S[(i+1)..(j-1)] wrapped with S[i] and S[j], plus one case S[i]S[j].
dp[i][j] = dp[i + 1][j] + dp[i][j - 1] - dp[i + 1][j - 1] + (dp[i + 1][j - 1] + 1)
= dp[i + 1][j] + dp[i][j - 1] + 1
So in sum, dp[i][j] = :
• 0, if i > j
• 1, if i == j
• dp[i + 1][j] + dp[i][j - 1] - dp[i + 1][j - 1], if S[i] != S[j]
• dp[i + 1][j] + dp[i][j - 1] + 1, if S[i] == S[j]
Now consider distinct count.
dp[i][j][k] is the number of distinct palindromic subsequences in S[i..j] bordered by 'a' + k.
If S[i] != S[j]:
dp[i][j][k] = dp[i+1][j][k] + dp[i][j-1][k] - dp[i+1][j-1][k]
If S[i] == S[j] && S[i] == 'a' + k:
dp[i][j][k] = 2 + sum( dp[i+1][j-1][t] | 0 <= t < 4 )
This is because we can wrap all the cases of dp[i+1][j-1][t] with S[i] and S[j] to form new palindromes (which won’t contain a and aa), and the +2 means a and aa.
So in sum, dp[i][j][k] =:
• 0, if i > j or i == j && S[i] != 'a' + k
• 1, if i == j && S[i] == 'a' + k
• dp[i+1][j][k] + dp[i][j-1][k] - dp[i+1][j-1][k], if S[i] != S[j]
• 2 + sum( dp[i+1][j-1][t] | 0 <= t < 4 ), if S[i] == S[j] && S[i] == 'a' + k
// OJ: https://leetcode.com/problems/count-different-palindromic-subsequences/
// Time: O(N^2)
// Space: O(N^2)
// Ref: https://leetcode.com/problems/count-different-palindromic-subsequences/discuss/272297/DP-C%2B%2B-Clear-solution-explained
int memo[1001][1001][4];
class Solution {
int mod = 1e9 + 7;
string s;
int dp(int first, int last, int ch) {
if (first > last) return 0;
if (first == last) return s[first] - 'a' == ch;
if (memo[first][last][ch] != -1) return memo[first][last][ch];
int ans = 0;
if (s[first] == s[last] && s[first] - 'a' == ch) {
ans = 2;
for (int i = 0; i < 4; ++i) ans = (ans + dp(first + 1, last - 1, i)) % mod;
} else {
ans = (ans + dp(first, last - 1, ch)) % mod;
ans = (ans + dp(first + 1, last, ch)) % mod;
ans = (ans - dp(first + 1, last - 1, ch)) % mod;
if (ans < 0) ans += mod;
}
return memo[first][last][ch] = ans;
}
public:
int countPalindromicSubsequences(string S) {
s = S;
memset(memo, -1, sizeof(memo));
int ans = 0;
for (int i = 0; i < 4; ++i) ans = (ans + dp(0, S.size() - 1, i)) % mod;
return ans;
}
};
Java
class Solution {
public int countPalindromicSubsequences(String S) {
final int MODULO = 1000000007;
int length = S.length();
int[][] dp = new int[length][length];
for (int i = 0; i < length; i++)
dp[i][i] = 1;
for (int i = length - 2; i >= 0; i--) {
char c1 = S.charAt(i);
for (int j = i + 1; j < length; j++) {
char c2 = S.charAt(j);
if (c1 == c2) {
int low = i + 1, high = j - 1;
while (low <= high && S.charAt(low) != c1)
low++;
while (low <= high && S.charAt(high) != c2)
high--;
if (low > high)
dp[i][j] = dp[i + 1][j - 1] * 2 + 2;
else if (low == high)
dp[i][j] = dp[i + 1][j - 1] * 2 + 1;
else
dp[i][j] = dp[i + 1][j - 1] * 2 - dp[low + 1][high - 1];
} else
dp[i][j] = dp[i][j - 1] + dp[i + 1][j] - dp[i + 1][j - 1];
dp[i][j] %= MODULO;
if (dp[i][j] < 0)
dp[i][j] += MODULO;
}
}
return dp[0][length - 1];
}
}
|
2022-08-09 04:09:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2844919264316559, "perplexity": 12523.397477115444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570901.18/warc/CC-MAIN-20220809033952-20220809063952-00177.warc.gz"}
|
https://cdsweb.cern.ch/collection/ATLAS%20Theses?ln=de&as=1
|
# ATLAS Theses
Letzte Einträge:
2021-10-07
11:05
The Read Out Controller ASIC for the ATLAS Experiment at LHC / Popa, Stefan The ATLAS Experiment at LHC is used for fundamental research in particle physics [...] CERN-THESIS-2021-159 - 215 p.
2021-10-04
17:42
Hardware and Firmware Development for an FPGA-based Multipurpose Board Targeted for High Energy Physics Experiments / Barros Marin, Manoel This thesis (PFC) describes the hardware and firmware development of an FPGA- based multi-purpose board targeted for High Energy Physics (HEP) that has been carried out during my stay at CERN [...] CERN-THESIS-2012-503 - 236 p.
2021-10-04
01:29
Electroweak Production of Two Opposite-Sign W Bosons using the ATLAS Detector / Linck, Rebecca Two studies involving the production of two opposite-sign W-bosons are performed using proton-proton collisions at $\sqrt{s} = 13$ TeV recorded by the ATLAS experiment at the Large Hadron Collider (LHC) [...] CERN-THESIS-2021-151 - Ann Arbor : ProQuest Dissertations Publishing, 2021-10-02. - 464 p.
2021-10-02
16:32
Search for $W^{\prime}$ bosons with boosted and hadronic top-quark final states in $pp$ collisions at $\sqrt{s} = 13$ TeV with the ATLAS detector / Lin, Kuan-Yu With the LHC proton-proton collision data at an energy of 13 TeV recorded by the ATLAS detector, this analysis searches for the production of a new particle $W^{\prime}$ with a yet-to-be-probed mass in the top quark plus bottom quark decay channel [...] CERN-THESIS-2021-150 -
2021-10-01
14:20
Search for new interactions in the top quark sector / Peixoto, Ana The top quark is the heaviest elementary particle discovered so far and the study of its unique properties provides not only an essential test of the Standard Model of Particle Physics, but also an important window into physics beyond it [...] CERN-THESIS-2021-149 - 191 p.
2021-09-29
22:51
Search for HIGGS Bosons in the Standard Model and Beyond / Cohen, Hadar The detailed study of the Higgs sector is a crucial milestone for the comprehensive understanding of the latest discovered part in the Standard Model [...] CERN-THESIS-2021-148 - 195 p.
2021-09-27
18:24
Design and Development of Depleted Monolithic Active Pixel Sensors with Small Collection Electrode for High-Radiation Applications / Moustakas, Konstantinos Depleted monolithic active pixel sensors (DMAPS) have emerged as a low material and low cost alternative to the established hybrid technology [...] CERN-THESIS-2021-146 - 202 p.
2021-09-23
14:19
/ Alhawiti, Fawaz Mutlaq S There is a need to answer the questions that the Standard Model theory SM fails to explain [...] CERN-THESIS-2021-143 -
2021-09-21
21:50
Design Testing of the Small Monitored Drift Tubes for the Phase II Upgrade of the ATLAS Muon Spectrometer / Minnella, Joseph Dominick The new high luminosity environment at the Large Hadron Collider necessitates an upgrade of detector technology [...] CERN-THESIS-2021-142 - 35 p.
2021-09-20
18:19
Search for Higgs boson pair production in the single-lepton $WWb\bar{b}$ channel with the ATLAS detector / D'Amico, Valerio A search for the Standard Model (SM) Higgs boson pair production in the $WWb\bar{b}$ single-lepton channel is presented [...] CERN-THESIS-2020-365 - 168 p.
|
2021-10-22 22:58:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8428174257278442, "perplexity": 3033.9988344616495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585522.78/warc/CC-MAIN-20211022212051-20211023002051-00412.warc.gz"}
|
https://www.physicsforums.com/threads/getting-new-irreducible-representations-from-old-ones.948551/
|
# Getting new irreducible representations from old ones
• A
## Main Question or Discussion Point
Suppose I had some group G, and I could classify all of its irreducible K-representations for some K = R,C, or H. Given that information (how) can I classify its irreducible K-representations for all K.
i.e. suppose I knew all the irreducible real representations of G, (how) could I then get all the irreducible complex and quaternionic representations?
Related Linear and Abstract Algebra News on Phys.org
fresh_42
Mentor
A standard method in case of linear representations on a real vectorspace $V_\mathbb{R}$ is to construct $V_\mathbb{C} = V_\mathbb{R} \otimes_\mathbb{R} \mathbb{C}$ and define for a given $\varphi_\mathbb{R} \, : \,G \longrightarrow GL(V_\mathbb{R})$
$$\varphi_\mathbb{C} \, : \,G \longrightarrow GL(V_\mathbb{C}) \quad \text{ by } \quad \varphi_\mathbb{C}(g)(\lambda\cdot v) = \varphi_\mathbb{R}(g)(v) \otimes \lambda$$
I'm not sure, however, whether they automatically will be irreducible again, as there are simply more eigenvalues available, so I doubt it.
Infrared
Indeed, let the cyclic group of order 3 act on $\mathbb{R}^2$ with a generator corresponding to rotation by $2\pi/3$. This representation is irreducible but its complexification isn't.
|
2020-04-01 11:02:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9395529627799988, "perplexity": 667.9687606571923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505730.14/warc/CC-MAIN-20200401100029-20200401130029-00129.warc.gz"}
|
https://www.physicsforums.com/threads/electric-force-with-a-moving-conductive-bar-in-a-magnetic-field.489440/
|
# Electric force with a moving conductive bar in a magnetic field
1. ### slain4ever
63
1. The problem statement, all variables and given/known data
A conducting bar is moved to the right in a magnetic field going into the page, an electric field is set up in the bar. Is this electric field upwards or downwards.
3. The attempt at a solution
i think that because a current is being induced in the bar you use the right hand rule but the current goes the other way in the thumb therefore the electric field is up the bar. Is this correct?
2. ### Andrew Mason
6,829
The electric field is given by:
$$\vec{E} = \vec{v} \times \vec{B}$$
So use the right hand rule for cross product (use the index finger for v and the middle finger for B and the thumb will give the direction of E).
AM
3. ### slain4ever
63
is that a yes, by the way i think you misunderstood, i use the right hand palm rule, not the flemming thing.
4. ### Andrew Mason
6,829
If you use the right hand two-finger-thumb rule to determine cross product, you will not be confused.
AM
Know someone interested in this topic? Share a link to this question via email, Google+, Twitter, or Facebook
Have something to add?
|
2015-03-27 05:26:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6067702770233154, "perplexity": 568.8121804353664}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131295084.53/warc/CC-MAIN-20150323172135-00078-ip-10-168-14-71.ec2.internal.warc.gz"}
|
http://www.fixya.com/support/t21670711-calculate
|
Question about Stamina Products Stamina Ux2 Air Resistance Upright Exercise Bike
# Won't calculate The meter shows all 0's and won't calculate when Riding?
Posted by Anonymous on
×
my-video-file.mp4
×
## Related Questions:
### My BA II plus professional showing wrong answer while computing "N" during TVM calculation
Make sure you have P/Y set to 1 and BGN/END set to END. With those settings, pressing 9 I/Y 9 2 0 FV 1 0 0 +/- PMT CPT N should produce 7.00 .
Aug 25, 2012 | Texas Instruments BA-II Plus Pro...
### Casio hr100tm won't print, shows E, won't feed paper. It calculates in On mode only.
mi sumadora, de repente se queda en E y no me deja seguir sumando cantidades que solucion tiene esto
Aug 22, 2012 | Casio HR-100TMPlus Calculator
### Meter to kilometer
There are 1000 meters in a kilometer and 3600 seconds in an hour (60 seconds per minute, 60 minutes per hour).
Press 8 0 0 / 5 0 * 3 6 0 0 / 1 0 0 0 =
or 8 0 0 / 1 0 0 0 * 3 6 0 0 / 5 0 =
The result should be 57.6 kph.
Nov 11, 2011 | Casio FX-115ES Scientific Calculator
### Convert 8000 square Feet in Metric
One foot is exactly 0.3048 meters, so 8000 square feet is 8000*.3048^2 square meters.
Pressing 8 0 0 0 * . 3 0 4 8 ^ 2 ENTER will give you about 743.
May 11, 2011 | Texas Instruments TI-83 Plus Calculator
### Using the t1-83 show me how to do the following 800(1+0.02)to the 20th
Almost exactly the way you typed it.
8 0 0 ( 1 + 0 . 0 2 ) ^ 2 0 ENTER
^ is the key just above the divide key.
Feb 16, 2011 | Texas Instruments TI-83 Plus Calculator
### How to convert cbm into sq meter
Now cubic meter calculator can be defined as a simple device which is used to convert a given value in meters to it's corresponding cubic meter form. Other than this we can always add the formulaes of the structures to get the volume of the structure by simply entering it's side length or height etc..
Let us consider simple examples :
Suppose that we have to convert 5 meters to its cube. The only thing we would do is that we would enter this value of 5 in the calculator and calculator with do the calculation as $\displaystyle{{5}}^{{3}}$ and will show the result to you as 125 $\displaystyle{{m}}^{{3}}$ .
Now if we have added formulaes to this calculator , like for calculating the volume of cube,cylinder and cone (say). Then we can use this calculator in three different cases.
Visit anthony's tuition class. Specialist in H2 Physics Tuition and H2 Maths Tuition.
Feb 10, 2011 | Office Equipment & Supplies
### My calculator shows 0.00000000 instead of 0 every time. How should I fix it? All the functions works properly, but there are too many zeros for me.
At the top left of the calculator, you will see a sliding switch that is labeled: DECIMAL A,0,2,4,6,F. Move the indicator to the "0". This will remove all excess zeros. The numbers represent how many decimal places the screen will show. I have the manual for this calculator if you need it. LBrown18062@hotmail.com
Jan 03, 2011 | Citizen CX80 Scientific Calculator
### Percenteg Function problem in my Casio MJ - 120 Model Calculator
1. Press (AC)
2. Hold (Rate) for about 2-3 second
3. Press (Set / TAX+)
4. Input the tax rate
5. Press (Set / TAX+)
6. Hold (Rate) for about 2-3 second again
7. Press (Set / TAX+) again
And Then you can test the rate .
From Thailand
Mar 12, 2009 | Casio Office Equipment & Supplies
## Open Questions:
#### Related Topics:
25 people viewed this question
Level 3 Expert
Level 3 Expert
|
2018-06-22 14:16:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 2, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3147850036621094, "perplexity": 3416.365086728154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864482.90/warc/CC-MAIN-20180622123642-20180622143642-00237.warc.gz"}
|
https://zbmath.org/?q=an:0718.11048
|
# zbMATH — the first resource for mathematics
The number of integral points on arcs and ovals. (English) Zbl 0718.11048
Let $$C$$ be an arc of a convex curve $$y=F(x)$$ lying in the unit square, and let $$R(N)$$ be the number of integer points on $$NC$$ (rational points $$(m/N,n/N)$$ on $$C$$). The authors prove results of the form $R(N) < B(\varepsilon) N^{\theta +\varepsilon} \tag{1}$ for $$N$$ sufficiently large. V. Jarník [Math. Z. 24, 500–518 (1925; JFM 51.0153.01)] constructed curves with $$R(N)\geq AN^{2/3}$$ for any given $$N$$, so the conditions “$$N$$ sufficiently large” and “$$B(\varepsilon)$$ depending on $$C$$” cannot both be dropped. H. P. F. Swinnerton-Dyer [J. Number Theory 6, 128–135 (1974; Zbl 0285.10020)] gave (1) with $$\theta =3/5$$ when $$F(x)$$ has three continuous derivatives. W. M. Schmidt [Monatsh. Math. 99, 45–72 (1985; Zbl 0551.10026)] made $$B(\varepsilon)$$ independent of $$C$$ for $$F^{(3)}$$ non-vanishing.
Algebraic curves parametrised by polynomials of degree $$d$$ over $${\mathbb Q}$$ have $$R(N)\geq AN^{1/d}$$ for infinitely many $$N$$. The authors show (Theorems 1 and 2) that all other real-analytic curves satisfy (1) with $$\theta =0$$. The constant $$B(\varepsilon)$$ depends on $$C$$ ineffectively. For irreducible algebraic curves of degree $$d$$, (1) holds with $$\theta =1/d$$ and $$B(\varepsilon)$$ depending only on $$d$$ (Theorem 5). For an arbitrary curve (1) holds with $$\theta =1/2+8/3(d+3),$$ provided that $$F(x)$$ has $$D=(d+1)(d+2)/2$$ continuous derivatives, with $$B(\varepsilon)$$ depending on upper bounds for the derivatives of $$F(x)$$ (Theorem 6). Theorem 8 is a slightly weaker result in which $$B(\varepsilon)$$ depends on the number of zeros of $$F^{(D)}(x)$$. For $$D\geq 325$$ Swinnerton-Dyer’s exponent is greatly improved. The authors show that $$\varepsilon$$ is needed in the exponent by constructing an infinitely differentiable $$F(x)$$ for any given value of $$N$$, with $$F^{(3)}(x)$$ non-negative, but $$R(N)\geq A(N \log N)^{1/2}.$$
The method is to show that if $$D$$ integer points lie close on a smooth curve, then a certain determinant of monomials in the coordinates is small. Being an integer, the determinant must be zero, and the $$D$$ points lie on an algebraic curve of degree $$d$$ (for algebraic $$C$$ the method is modified to ensure that $$C$$ is not a component). The effective results use induction on the degree and intersection number arguments.
##### MSC:
11P21 Lattice points in specified regions 11D99 Diophantine equations 11H06 Lattices and convex bodies (number-theoretic aspects) 11G99 Arithmetic algebraic geometry (Diophantine geometry) 14G05 Rational points 14G99 Arithmetic problems in algebraic geometry; Diophantine geometry
##### Keywords:
arcs; ovals; convex curve; number of integer points; algebraic curves
Full Text:
##### References:
[1] E. Bombieri, Le grand crible dans la théorie analytique des nombres , Astérisque (1987), no. 18, 103, Soc. Math. France, Paris. · Zbl 0618.10042 [2] S. D. Cohen, The distribution of Galois groups and Hilbert’s irreducibility theorem , Proc. London Math. Soc. (3) 43 (1981), no. 2, 227-250. · Zbl 0484.12002 [3] D. Hilbert and A. Hurwitz, Über die diophantischen Gleichungen vom Geschlecht Null , Acta Mathematica 14 (1890-1891), 217-224. [4] V. Jarnik, Über die Gitterpunkte auf konvexen Curven , Math. Z. 24 (1926), 500-518. · JFM 51.0153.01 [5] D. J. Lewis and K. Mahler, On the representation of integers by binary forms , Acta Arith. 6 (1961), 333-363. · Zbl 0102.03601 [6] C. Posse, Sur le terme complémentaire de la formule de M. Tchebychef donnant l’expression approchée d’une intégrale définie par d’autres prises entre les mêmes limites , Bull. Sci. Math. (2) 7 (1883), 214-224. · JFM 15.0237.01 [7] P. Sarnak, Torsion points on varieties and homology of Abelian covers , manuscript, 1988. [8] W. M. Schmidt, Integer Points on Curves and Surfaces , Monatsh. Math. 99 (1985), no. 1, 45-72. · Zbl 0551.10026 [9] H. A. Schwarz, Verallgemeinerung eines analytischen Fundamentalsatzes , Annali di Mat. (2) 10 (1880), 129-136, rpt. Gesammelte Mathematische Abhandlungen, vol. 2, J. Springer, Berlin, 1890, pp. 296-302. [10] H. P. F. Swinnerton-Dyer, The number of lattice points on a convex curve , J. Number Theory 6 (1974), 128-135. · Zbl 0285.10020
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
2021-09-26 20:07:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8684613108634949, "perplexity": 688.9430808205715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057913.34/warc/CC-MAIN-20210926175051-20210926205051-00383.warc.gz"}
|
https://www.physicsforums.com/threads/stokes-law-why-does-viscous-drag-depend-upon-the-radius-of-the-spherical-body.977207/
|
# Stokes' Law: Why does viscous drag depend upon the radius of the spherical body?
Kaushik
Stokes' Law gives us the value fo viscous force when a spherical body is under motion inside a fluid.
##F_{viscous} = 6\pi\eta av## (where ##a## is the radius of the spherical body and ##v## is the velocity with which it is moving)
What is the reason for the Viscous drag to depend upon the radius of the spherical body?
Homework Helper
The more area, the more grip !
Kaushik
Kaushik
The more area, the more grip !
Is it like, more the area, more fluid in contact. Hence, it pulls more fluid with it, consequently the reaction force by the fluid on the body is more?
|
2023-03-29 00:38:10
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9465908408164978, "perplexity": 1452.6368557372462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948900.50/warc/CC-MAIN-20230328232645-20230329022645-00444.warc.gz"}
|
https://zbmath.org/?q=an:1044.35080
|
# zbMATH — the first resource for mathematics
The modified KdV equation on a finite interval. (English. Abridged French version) Zbl 1044.35080
Summary: We analyse an initial-boundary value problem for the mKdV equation on a finite interval by expressing the solution in terms of the solution of an associated matrix Riemann-Hilbert problem in the complex $$k$$-plane. This Riemann-Hilbert problem has explicit $$(x,t)$$-dependence and it involves certain functions of $$k$$ referred to as “spectral functions”. Some of these functions are defined in terms of the initial condition $$q(x,0)=q_0(x)$$, while the remaining spectral functions are defined in terms of two sets of boundary values. We show that the spectral functions satisfy an algebraic “global relation” that characterize the boundary values in spectral terms.
##### MSC:
35Q53 KdV equations (Korteweg-de Vries equations) 35Q15 Riemann-Hilbert problems in context of PDEs 37K10 Completely integrable infinite-dimensional Hamiltonian and Lagrangian systems, integration methods, integrability tests, integrable hierarchies (KdV, KP, Toda, etc.)
Full Text:
##### References:
[1] A. Boutet de Monvel, A.S. Fokas, D. Shepelsky, The mKdV equation on the half-line, J. Inst. Math. Jussieu, in press · Zbl 1057.35050 [2] A. Boutet de Monvel, D. Shepelsky, Initial boundary value problem for the modified KdV equation on a finite interval, Preprint · Zbl 1137.35419 [3] Fokas, A.S., A unified transform method for solving linear and certain nonlinear pdes, Proc. roy. soc. London ser. A, 453, 1411-1443, (1997) · Zbl 0876.35102 [4] Fokas, A.S., On the integrability of linear and nonlinear partial differential equations, J. math. phys., 41, 4188-4237, (2000) · Zbl 0994.37036 [5] Fokas, A.S., Integrable nonlinear evolution equations on the half-line, Comm. math. phys., 230, 1-39, (2002) · Zbl 1010.35089 [6] A.S. Fokas, A.R. Its, The nonlinear Schrödinger equation on the interval, Preprint
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
2021-01-20 00:54:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7162756323814392, "perplexity": 1189.290274166389}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519843.24/warc/CC-MAIN-20210119232006-20210120022006-00306.warc.gz"}
|
https://www.physicsforums.com/threads/tricky-eigen-vectors-question.163833/
|
# Tricky eigen vectors question
I am trying to find eigen values and eigen vectors for A
Its 2X2 matrix. A first row (16 -10) second row (-10 24)
I got Eigen values as 30.77 and 9.22 but when i try to find eigen vectors here are the equations I end up with
-14.77v1 - 10v2= 0
-10v1 - 6.77v2 = 0
Kinda confused how to proceed with this.
Thanks
Use 9.23, you rounded wrong.
The procedure for finding associated eigenvectors is to find the nullspace of A-λI. So you have to solve the nullspace of that matrix you wrote up to find the 30.77-eigenspace. Then you'll need to do the process again for 9.23.
Does this clear things up for you? Or do you need help with solving the nullspace? Because that should be easy.
these two equations are not compatible, because you have rounded your eigenvalues, but if you would have used the ratios as an eigenvalued, you'd see that these thwo equations are exactly the same.
When you are solveing for eigenvectors you have to use either of these equation, because they are same, and then if your states are normalizabe, you have to normalize it.
v1^2+v2^2=1
that's your second equation in system!
Hurkyl
Staff Emeritus
|
2022-07-05 15:02:03
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8072219491004944, "perplexity": 705.0472389491754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104585887.84/warc/CC-MAIN-20220705144321-20220705174321-00274.warc.gz"}
|
https://publications.mfo.de/handle/mfo/3554?show=full
|
## Mini-Workshop: Computations in the Cohomology of Arithmetic Groups
dc.date.accessioned 2019-10-24T15:23:40Z dc.date.available 2019-10-24T15:23:40Z dc.date.issued 2016 dc.identifier.uri http://publications.mfo.de/handle/mfo/3554 dc.description.abstract Explicit calculations play an important role in the theoretical development of the cohomology of groups and its applications. It is becoming more common for such calculations to be derived with the aid of a computer. This mini-workshop assembled together experts on a diverse range of computational techniques relevant to calculations in the cohomology of arithmetic groups and applications in algebraic $K$-theory and number theory with a view to extending the scope of computer aided calculations in this area. dc.title Mini-Workshop: Computations in the Cohomology of Arithmetic Groups dc.rights.license Dieses Dokument darf im Rahmen von § 53 UrhG zum eigenen Gebrauch kostenfrei heruntergeladen, gelesen, gespeichert und ausgedruckt, aber nicht im Internet bereitgestellt oder an Außenstehende weitergegeben werden. de dc.rights.license This document may be downloaded, read, stored and printed for your own use within the limits of § 53 UrhG but it may not be distributed via the internet or passed on to external parties. en dc.identifier.doi 10.14760/OWR-2016-52 local.series.id OWR-2016-52 local.subject.msc 57 local.subject.msc 55 local.subject.msc 18 local.subject.msc 19 local.subject.msc 20 local.subject.msc 11 local.sortindex 994 local.date-range 30 Oct - 05 Nov 2016 local.workshopcode 1644c local.workshoptitle Mini-Workshop: Computations in the Cohomology of Arithmetic Groups local.organizers Eva Bayer-Fluckiger, Lausanne; Philippe Elbaz-Vincent, Grenoble; Graham Ellis, Galway local.report-name Workshop Report 2016,52 local.opc-photo-id 1644c local.publishers-doi 10.4171/OWR/2016/52 local.ems-reference Bayer-Fluckiger Eva, Elbaz-Vincent Philippe, Ellis Graham: Mini-Workshop: Computations in the Cohomology of Arithmetic Groups. Oberwolfach Rep. 13 (2016), 2941-2973. doi: 10.4171/OWR/2016/52
Report
|
2023-03-23 04:24:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48051732778549194, "perplexity": 10379.43764475561}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944996.49/warc/CC-MAIN-20230323034459-20230323064459-00722.warc.gz"}
|
https://www.infoq.com/articles/Automated-Builds-2/
|
Facilitating the spread of knowledge and innovation in professional software development
Contribute
### Choose your language
InfoQ Homepage Articles Automated Builds: How to Get Started
# Automated Builds: How to Get Started
## Introduction
The first part of this series discussed some of the benefits of automating your build and deployment processes. There are many reasons you may want to do this - to allow your developers to focus on core business instead of administration, to reduce the potential for human error, to reduce the time spent on deployment, and a variety of others. Whatever your motivations are, automating your build process is always the right answer.
In this article, we will take a common example of a corporate web application for a fictional financial institution, and walk through fully automating their build process.
## Case Description
Our company is 3rd National Bank, a local financial institution. Our online banking application consists of the front-end web application (ASP.NET); a RESTful service (WebAPI) for connecting from mobile applications; a series of internal services (WCF) which use a traditional domain-driven design to separate out business logic, domain objects, and database access (Entity Framework); and a SQL Server database.
The software team uses Mercurial as their source control system, and delivers features regularly, using a feature-branch strategy - a branch is created for each new feature or bug, and once tested, the code is merged into the main line for release.
Currently all of the build and deployment steps are done manually by the software team, causing developers to spend several hours every week maintaining their repositories and servers instead of writing code. We’re trying to change that, and automate as much of the process as possible.
## Build Scripts
Build scripts are the first step toward automating your build. These scripts come in all shapes and sizes – they can be shell or batch scripts, XML-based, or written in a custom or an existing programming language; they can be auto-generated or hand-coded; or they can be totally hidden inside an IDE. Your process may even combine multiple techniques. In this example, we'll use NAnt as our build script engine.
In our environment, we have separate Visual Studio solutions for the front-end web application, the external service, and the internal service application, and a database solution for the SQL database. We’ll create a single master build script file, 3rdNational.Master.build, which looks something like this:
<project name="3rd National Bank" default="build"> <target name="build"> <call target="createStagingEnvironment" /> <nant buildfile="BankDB/BankDB.build" target="build" /> <nant buildfile="ServiceLayer/ServiceLayer.build" target="build" /> <nant buildfile="OnlineBanking/WebUI.build" target="build" /> <nant buildfile="ExternalServices/ExternalServices.build" /> </target></project>
This script doesn’t actually do anything – instead it just makes calls to each of the four solutions. Each solution gets its own build file, which contains all the code required to compile and prepare its part of the application.
Now let's take a look at a build script for one of these solutions. Each solution follows the same basic steps: prepare, compile, and stage. Here is a basic build script for ServiceLayer.build - the syntax for this is pretty straightforward:
<project name="ServiceLayer"> <property name="msbuildExe" value="c:\windows\microsoft.net\framework\v4.0.30319\msbuild.exe" /> <target name="build"> <call target="prepare" /> <call target="compile" /> <call target="stage" /> </target> <target name="prepare"> <!-- Implementation omitted --> </target> <target name="compile"> <exec program="\${msbuildExe}"> <arg value="ServiceLayer.sln" /> <arg value="/p:Configuration=Release" /> <arg value="/t:rebuild" /> </exec> </target> <target name="stage"> <copy todir="../deploy/BankWcf"> <include name="WcfServices/**/*.svc" /> <include name="WcfServices/**/*.asax" /> <include name="WcfServices/**/*.config" /> <include name="WcfServices/**/*.dll" /> </copy> </target></project>
The preparation steps may involve building an AssemblyInfo file, or rebuilding proxies, or any number of other things. The compilation step in this case is simply calling MSBuild, the build engine Visual Studio uses to compile a solution. Finally, after everything builds successfully, the last step copies out the appropriate files into a staging area, to be picked up later.
We do the same thing for the other three solutions, modifying them appropriately based on the different project types.
Writing your build scripts is just like writing any other kind of code – there are endless ways of accomplishing the same result. You can use the command-line compiler executables directly instead of MSBuild. You can build the projects individually rather than building the full solution. You can use MSDeploy to stage out or deploy your application instead of defining a filter and copying files. In the end, it's all about what you're comfortable with. As long as your scripts produce consistent output, there's no wrong way to write them.
## Continuous Integration
Now that we have build scripts, we need something that will call them. We could run our build scripts from the command line – but since we're trying to automate everything, we need a machine to run the scripts when appropriate. This is where continuous integration comes in.
Let's use TeamCity, a product by JetBrains, for our CI. It has a very reasonable pricing model, and offers a fantastic user experience for setting up projects. After an easy installation on our new build server, we're ready to get started.
In TeamCity, your first step is setting up a project. The project consists of a name, along with a collection of build configurations. Let’s create a project called “3rd National Bank”.
We’re going to want to set up a build template, which will represent the settings used for the mainline as well as any branches we want to put under CI. We’ll set up our version control settings, selecting Mercurial as our source code repo, the default branch, credentials, and a place to check the files out to. Next is a build step, selecting NAnt and our master build file. If we have a unit test project, we’ll simply add another build step to run NUnit or MSTest or whatever we’re using. Finally, we’ll select a build trigger tied to our version control, which means the build will run every time someone pushes code to the main repository.
There are lots of other useful things you can do in TeamCity, like defining failure conditions based on build output, dependencies on other builds, and custom parameters, which you can explore as you need. But we’ve got all we need for a basic build now.
Let’s create a new build configuration from this template, called “Main Line”. This will represent the top of the version control tree, where stable production-ready code lives. Since we also have feature branches out there, we can create as many more build configurations from the template as we need, one for each feature, and we should only need to make minor tweaks to the source control settings. We now have not only our mainline, but every open feature building automatically upon code checkin, all in just a couple minutes.
When a feature is done and merged into the mainline for release, the build configuration for that branch can simply be deleted.
## Deployment
Now that our CI system has built our code, run our tests, and staged out a release, we can talk about deployment. Like anything else, there are many different strategies you can use to deploy your applications. Here are a few basic strategies you may want to use to deploy a web application in IIS:
• Simply backup your existing applications and copy the new code on top. You never have to worry about touching your configuration.
• Copy your code out to a brand new versioned directory on your web server. You can do this in advance. When you are ready, re-point IIS to the new directory. You can take an extra step of having a "staging" web application in IIS that you point to the new version, with which you can run some preliminary tests against prior to making the switch.
• Don’t hide the versions; include them in your URL: http://example.com/v3.4/ and http://example.com/v3.5/ - the root application of http://example.com/ will send you to the newest application using a simple config setting or IIS setting. 3.4 will remain alive, and only customers going to the root application will see the new one. This gives you the opportunity to not interrupt existing sessions. After an hour or so, when all the sessions on 3.4 are gone, you can safely remove v3.4 from IIS.
Your team can determine what's best for you - it depends on your organization's policy toward outage windows and uptime requirements, as well as your database design strategy. For our example, we'll assume we have a 1-hour weekly outage window, so we'll pick the simple file copy strategy which gives us time to back up and deploy the code and database and test it, prior to turning things back on.
Your CI system has staged out your release, so it's simply a matter of getting these files out to your production servers and deploying your database changes. If you have file system access to your build and web servers from your desktop, copying the files can be as simple as executing a batch file that looks like:
rem BACKUProbocopy /E \\web1\www\BankWeb \\web1\Backups\BankWeb\%DATETIME%robocopy /E \\web1\www\BankRest \\web1\Backups\BankRest\%DATETIME%robocopy /E \\svc1\www\BankWcf \\svc1\Backups\BankWcf\%DATETIME%rem DEPLOYrobocopy /E \\build1\BankWeb\deploy \\web1\www\BankWebrobocopy /E \\build1\BankRest\deploy \\web1\www\BankRestrobocopy /E \\build1\BankWcf\deploy \\svc1\www\BankWcf
...etc...
If you don't have full file system access, you'll need to find a more creative way to deploy your files. Of course, you can Remote Desktop into your server or an intermediary server to execute a batch file or manually copy files, but remember we're trying to automate this, so the fewer steps, the better. Ideally, you'll have a trusted system in the middle, which has the ability to deploy files to the web servers after it authenticates you. DubDubDeploy is one option that copies files from a trusted server over HTTP to allow you to deploy without access to the web server's file system.
Deploying a database can be done many ways, again depending on your organization. In this example, we are using a database project, so it is as simple as executing a single command which takes the project, automatically compares to your production database, and executes the change script, along with any custom SQL you’ve written that builds seed data. If you're comfortable letting the system do this on its own, it's just a matter of executing a command:
msbuild.exe /target:Deploy /p:TargetDatabase=3rdNational;TargetConnectionString=”Server=db1;Integrated Security=True”;Configuration=release BankDb.sqlproj
Of course, you can execute this any number of ways - you can add it to your NAnt script as a target, or add it to TeamCity, or run it manually, or put it in a deployment batch file.
## Conclusion
We've come a long way; when we started building, running tests, staging, backing up, and deploying our code was all done manually. Now there are scripts that compile our code, a system that continuously and consistently executes these scripts and runs our unit tests, and a simple repeatable deployment task.
If you don't have the time to set up everything at once, you don't have to. You can do this a little at a time, and still benefit from each step. For example, you can probably put together a basic build script for your entire set of applications in less than an hour. Maybe another hour to test and debug it to ensure it builds things the same way you're used to. Now, without even adding CI or other automation, you've made it easier to build and stage your app, so next time you deploy manually, you don't even have to open your IDE. Maybe you'll have time next week or next month to create a simple CI project, which you can improve the following month. Before you know it, you'll have a fully automated process.
I've specifically used .NET, NAnt, and TeamCity for these examples, but the fundamental principles can be applied to anything. Whatever operating system, programming languages, server technologies, source control strategies, and team structure you have, automation is possible, affordable, and well worth the effort.
## About the Author
Joe Enos is a software engineer and entrepreneur, with 10 years’ experience working in .NET software environments. His primary focus is automation and process improvement, both inside and outside the software world. He has spoken at software events across the United States about build automation for small software development teams, introducing the topics of build scripts and continuous integration.
His company’s first software product, DubDubDeploy, was recently released - the first in a series of products to help improve software teams manage their build and deployment process. His team is currently working on fully automating the .NET build cycle.
Style
## Hello stranger!
You need to Register an InfoQ account or or login to post comments. But there's so much more behind being registered.
Get the most out of the InfoQ experience.
Allowed html: a,b,br,blockquote,i,li,pre,u,ul,p
by 高 翌翔,
• ##### Suggest adding a hyperlink to "The first part"
by 高 翌翔,
Your message is awaiting moderation. Thank you for participating in the discussion.
The hyperlink of "The first part" is "http://www.infoq.com/articles/Automated-Builds".
Allowed html: a,b,br,blockquote,i,li,pre,u,ul,p
Allowed html: a,b,br,blockquote,i,li,pre,u,ul,p
|
2021-11-29 06:45:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27959275245666504, "perplexity": 2315.7389496187884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358688.35/warc/CC-MAIN-20211129044311-20211129074311-00438.warc.gz"}
|
http://hal.in2p3.fr/in2p3-00012419
|
# $b \rightarrow s\gamma$ using a sum of exclusive modes
Abstract : This paper describes preliminary results on the inclusive process b --> s gamma obtained from 20.7 fb^-1 of data recorded with the BaBar detector during 1999-2000. Signal event yields are found from a combination of twelve exclusive decay channels after subtracting continuum and BBbar backgrounds. Cross-feed from incorrectly reconstructed b --> s gamma events is also removed. Branching fractions in bins of hadronic mass are calculated using corrected Monte Carlo signal efficiencies; this is equivalent to measuring the gamma energy spectrum. We measure the first moment of the gamma energy spectrum constraining the HQET parameter LambdaBar = (0.37 +/- 0.09 (stat) +/- 0.07 (syst) +/- 0.10 (model)) GeV/c^2. A fit to the hadronic mass spectrum gives B(b --> s gamma) = (4.3 +/- 0.5 (stat) +/- 0.8 (syst) +/- 1.3 (model))x 10^-4 for the inclusive branching fraction. We also constrain the HQET parameter lambda_1.
Document type :
Conference papers
http://hal.in2p3.fr/in2p3-00012419
Contributor : Claudine Bombar <>
Submitted on : Tuesday, January 14, 2003 - 2:59:58 PM
Last modification on : Wednesday, January 6, 2021 - 6:22:02 PM
### Citation
B. Aubert, D. Boutigny, J.M. Gaillard, A. Hicheur, Y. Karyotakis, et al.. $b \rightarrow s\gamma$ using a sum of exclusive modes. International Conference on High Energy Physics 31 ICHEP 2002, Jul 2002, Amsterdam, Netherlands. pp.1-21. ⟨in2p3-00012419⟩
Record views
|
2021-09-19 03:14:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5123788118362427, "perplexity": 11169.428701845307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056656.6/warc/CC-MAIN-20210919005057-20210919035057-00257.warc.gz"}
|
http://math.stackexchange.com/questions/98074/how-is-parsing-a-word-of-a-formal-language-wrt-its-formal-grammar-defined
|
How is parsing a word of a formal language wrt its formal grammar defined?
In computer science and linguistics, parsing, or, more formally, syntactic analysis, is the process of analyzing a text, made of a sequence of tokens (for example, words), to determine its grammatical structure with respect to a given (more or less) formal grammar.
Is there a formal definition, or a more clear one, for parsing a word of a formal language wrt its formal grammar defined? I am not able to understand the meaning of parsing, because I am stuck at the term "grammatical structure" of a word wrt the formal grammar.
2. About analytic grammar and parsing from Wikipedia:
Though there is a tremendous body of literature on parsing algorithms, most of these algorithms assume that the language to be parsed is initially described by means of a generative formal grammar, and that the goal is to transform this generative grammar into a working parser. Strictly speaking, a generative grammar does not in any way correspond to the algorithm used to parse a language, and various algorithms have different restrictions on the form of production rules that are considered well-formed.
An alternative approach is to formalize the language in terms of an analytic grammar in the first place, which more directly corresponds to the structure and semantics of a parser for the language.
What do "generative grammar" and "analytic grammar" mean?
Isn't that true that formal grammars are always "generative" in the sense that they generate formal languages?
Thanks and regards!
-
|
2016-06-28 22:40:20
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8210996985435486, "perplexity": 644.0860589065915}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397213.30/warc/CC-MAIN-20160624154957-00196-ip-10-164-35-72.ec2.internal.warc.gz"}
|
https://www.shaalaa.com/question-bank-solutions/find-particular-solution-differential-equation-2y-e-x-y-dx-y-2x-e-x-y-dy-0-given-that-x-0-when-y-1-general-particular-solutions-differential-equation_4049
|
# Find the particular solution of the differential equation 2y e^(x/y) dx + (y - 2x e^(x/y)) dy = 0 given that x = 0 when y = 1. - Mathematics
Find the particular solution of the differential equation
2y ex/y dx + (y - 2x ex/y) dy = 0
given that x = 0 when y = 1.
#### Solution
2ye^(x/y)dx+(y-2xe^(x/y))dy=0
=>dx/dy=(2xe^(x/y-y))/(2ye^(x/y))
Given differential equation is a homogeneous differential equation.
∴ Put x = vy
dx/dy=v+y (dv)/dy
v+y(dv)/dy=(2ve^v-1)/(2e^v)
=>y(dv)/dy=(2ve^v-1)/(2e^v)-v
=>y(dv)/dy=-1/(2e^v)
=>2e^vdv=-1/ydy
Integrating on both the sides
=>2inte^vdv=-int1/ydy
=>2e^v=-log|y|+logC
=>2e^v=log|c/y|
=>2e^(x/y)=log|c/y|
Given that at x = 0, y = 1
2e^0= log|c/1|
⇒ C = e2
:.2e^(x/y)=log""e^2/y
=>logy=-2e^(x/y)+2
=>y=e^2-2e^(x/y)
Concept: General and Particular Solutions of a Differential Equation
Is there an error in this question or solution?
|
2021-12-09 13:26:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6749390959739685, "perplexity": 989.0778937673601}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964364169.99/warc/CC-MAIN-20211209122503-20211209152503-00582.warc.gz"}
|
https://mail.scipy.org/pipermail/scipy-user/2012-August/032851.html
|
# [SciPy-User] generic expectation operator
josef.pktd@gmai... josef.pktd@gmai...
Wed Aug 22 16:03:02 CDT 2012
On Wed, Aug 22, 2012 at 4:49 PM, nicky van foreest <vanforeest@gmail.com> wrote:
> Hi,
>
> For numerous purposes I need to compute the expectation of some
> function with respect to some probability distribution. i.e. E f(X) =
> \int f(x) dG(x), where f is the function and G the distribution. When
> G is continuous, this is simple, just use scipy.integrate.quad.
> However, if G is discrete, things become more complicated since quad
> often misses the spikes in the density. To deal with this problem, in
> part, I came up with the following code, but it is not completely ok
> yet, see below.
>
> #!/usr/bin/env python
>
> import scipy.stats as stats
> import scipy.integrate
> from math import sqrt
>
> def E(f, X):
> if hasattr(X,'dist'):
> if isinstance(X.dist, stats.rv_discrete): # hint from Ralph
> g = X.pmf
> else:
> g = X.pdf
> return scipy.integrate.quad(lambda x: x*g(x), X.dist.a, X.dist.b)
> else:
> return sum(f(x)*p for x,p in zip(X.xk, X.pk))
>
> # now the following works:
> X = stats.norm(3,sqrt(3))
> print E(sqrt, X)
>
> # this is ok too
> grid = [50, 100, 150, 200, 250, 300]
> probs=[2,3,2,1.,0.5,0.5]
> tot = sum(probs)
> probs = [p/tot for p in probs]
> X = stats.rv_discrete(values=(grid,probs), name='sizeDist')
> print E(sqrt, X)
>
> # but this is not ok
>
> X = stats.poisson(3)
> print E(sqrt, X)
>
> ------------------------------------------
>
> This is the output:
>
> 11.0383463182
> (7.771634331185016e-19, 1.916327339868129e-17)
> (2.9999999999999996, 1.1770368678494054e-08)
>
> So indeed, quad misses the spikes in the poisson distribution.
>
> Is there any nice way to resolve this? Moreover, is there a better way
> to build the expectation operator? Perhaps, if all works, it would be
> a useful add-on to distributions.py, or else I can include it in the
> distribution tutorial, if other people also think this is useful.
quad works only for continuous functions
did you look at
>>> stats.poisson.expect
<bound method poisson_gen.expect of
<scipy.stats.distributions.poisson_gen object at 0x0320FC90>>
>>> stats.norm.expect
<bound method norm_gen.expect of <scipy.stats.distributions.norm_gen
object at 0x031EB890>>
improvements welcome, especially making it more robust
Josef
>
> thanks
>
> Nicky
> _______________________________________________
> SciPy-User mailing list
> SciPy-User@scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
|
2016-05-25 09:27:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.834640383720398, "perplexity": 13759.506091263493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049274324.89/warc/CC-MAIN-20160524002114-00233-ip-10-185-217-139.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/pre-calculus/45439-factoring-sum-difference-cubes.html
|
# Math Help - Factoring Sum and Difference of Cubes
1. ## Factoring Sum and Difference of Cubes
There are so many different lectures on how to factor the sum and difference of cubes.
What is the easiest method to use?
For example:
How do you factor the following two questions:
(1) x^6 - 1
(2) x^3 - 125
2. so many? only two I know of ...
$a^3 + b^3 = (a + b)(a^2 - ab + b^2)$
$a^3 - b^3 = (a - b)(a^2 + ab + b^2)$
$x^6 - 1 = (x^2)^3 - 1^3$ see the "a" and "b" ?
$x^3 - 125 = x^3 - 5^3$ ditto?
now use the factoring pattern for
$a^3 - b^3$.
3. Hello, magentarita!
There is a trick for memorizing the cube-factoring.
There are two forms:
. . $\begin{array}{cccc}\text{Sum of cubes:} & a^3+ b^3 &=& (a+b)\,(a^2-ab+ b^2) \\
\text{Diff. of cubes:} & a^3-b^3 &=& (a-b)\,(a^2 + ab + b^2) \end{array}$
They can be combined: . $a^3 \pm b^3 \;=\;(a \pm b)\,(a^2 \mp ab + b^2)$
We must memorize the letters: . $a^3 \qquad b^3 \;\;=\;\;(a\qquad b)\,(a^2 \qquad ab \qquad b^2)$
To place the signs, remember the word SOAP:
. . . . . . . . . . . . . . .Opposite
. . . . . . . . . . . . . . . . . $\downarrow$
. . $a^3 \;{\color{red}\pm} \;b^3 \;\;=\;\; (a \;{\color{red}\pm} \;b)\,(a^2 \;{\color{red}\mp} \;ab \;{\color{red}+} \;b^2)$
. . . . . . . . . . . . $\uparrow\qquad \qquad \quad\;\:\uparrow$
. . . . . . . . . . Same . . . . Always Positive
|
2015-07-04 21:47:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9017537236213684, "perplexity": 345.5974422686079}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096944.75/warc/CC-MAIN-20150627031816-00105-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://physicalsciences.leeds.ac.uk/events/event/264/multiplicative_quiver_varieties_and_generalised_ruijsenaars-schneider_models
|
# Multiplicative quiver varieties and generalised Ruijsenaars-Schneider models
#### Maxime Fairon, University of Leeds. Part of the integrable systems seminars series.
It has been observed by Van den Bergh that given an associative algebra, it is possible to define on it a structure called a double Poisson bracket, such that it induces a Poisson bracket on the corresponding representation spaces of the algebra. In particular, this can be defined for the preprojective algebra of an arbitrary quiver, and it yields the standard symplectic form on the corresponding quiver varieties. In his work, he also introduced the concept of a double quasi-Poisson bracket, and showed that the multiplicative preprojective algebra of an arbitrary quiver can be endowed with such a structure, defining the so-called multiplicative quiver varieties from its representations. My aim is to explain what are the different notions that appear both at the algebraic and at the geometric levels, and see their relation to the theory of integrable systems.
As an application, I will describe how the RS model can be recovered from the one-loop quiver with a simple framing, and how to get new generalisations by looking at cyclic quivers. I will also explain how this construction can be naturally adapted to spin versions. In particular, I will outline how this formalism gives the complete set of Poisson brackets for the (complex) trigonometric spin RS model, answering a problem posed by Arutyunov and Frolov about 20 years ago.
This is joint work with Oleg Chalykh.
Maxime Fairon, University of Leeds
#### Related events
See all School of Mathematics events
|
2018-03-24 04:13:21
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8413127660751343, "perplexity": 380.4219398480762}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257649683.30/warc/CC-MAIN-20180324034649-20180324054649-00415.warc.gz"}
|
http://consorziovalledelbelice.it/definite-integrals-worksheet.html
|
Definite Integrals Worksheet
Math Calculus Worksheet Chap 4: Integration Section: Name: Mr. Worksheet by Kuta Software LLC Calculus Mixed Review- Indefinite & Definite Integrals (FTC 1 & 2) Name_____ ID: 1 Date_____ ©G r2I0R2o0J HKGuktIaB VSPopfztxwbaDrweV JLSLFCU. 878 pounds of sand used from the tank in the first during the time interval 1 < t < 3 1 3 2. Definite Integral 1. The Definite Integral. A Flash movie illustrating the evaluation of a definite integral using the definition. Definite integration results in a value. A tutorial on the definition of definite integrals, properties of definite integrals, relationship between definite integrals and areas and the use of technology to evaluate definite integrals using the definition. Education Focus Hub. A definite integral retains both lower limit and the upper limit on the integrals and it is known as a definite integral because, at the completion of the problem, we get a number which is a definite answer. Talk about finding a definite integral over a function that is not continuous. Some of the worksheets for this concept are 06, 3 cchhaaptteerr 1133 2 1 definite integrals, Work definite integrals, Evaluating definite integrals, November 18 2014 work 19 integration by, Integration by substitution date period, Math 229 work, Integration by substitution. Step 1: Set up the integral. Area Bound by a Curve. I just started learning about nets and I found that the main motivation for defining nets was the need for a unified definition of the definite integral real-analysis definite-integrals riemann-sum nets. APPLICATIONS OF THE DEFINITE INTEGRAL. About This Quiz & Worksheet. closed interval a, b, then the area of the. Displaying top 8 worksheets found for - Indefinite Integrals Substitution. ©2005 BE Shapiro Page 3 This document may not be reproduced, posted or published without permission. Chapter 4: Numerical Integration. Many useful functions are naturally described by integration of known functions. Here's a simple but handy rule for doing this that looks complicated but is really very easy. These properties can be used to change the form of a complicated integral into a simpler function. Section 5-7 : Computing Definite Integrals. Articles worksheets. Mean Value Theorem for integrals; average value. 344 2 32 2 32 dx xx 2 34 2 2 1 1 3 44 5 57 5. Definite integrals calculator. homework and exam 1 practice questions MOVIES Trig Substitution Examples. By that, I mean any of the following that come immediately from basic differentiation rules. Use left-endpoint to estimate the area under the curve of 𝑓(𝑥) = ' from [2, 5]. Use the worksheets. In this lesson, we will learn U-Substitution, also known as integration by substitution or simply u-sub for short. Find the shaded area as a definite integral. 66 16 Definite Integrals p. Definite Integral Worksheet. If an integral has upper and lower limits, it is called a Definite Integral. The definite integral as the area of a region. Integration is the process of finding the definite or indefinite integral of a function. Determine $$\int e^x \cos x dx. R sin10 xcosxdx 7. Then evaluate each integral (except for the 4th type of course). 2 Def Int Num Int Name Date Period Worksheet 4. R 3t2(t3 +4)5 dt 3. One should always assume an integral is easy until good evidence suggests otherwise. Comparing rates worksheet. Online definite integrals calculator. 1 Indefinite Integrals Calculus. Here we're getting a formula for calculating definite integrals. Sometimes this is a simple problem, since it will be apparent that the function you wish to integrate is a derivative in some straightforward way. 72-74 (Worksheet) 19 Review 20 TEST UNIT 7 7. Suppose that we have a function f whose integral is another function F :. 1) ò-12x-5dx 2) ò-12 x4 dx 3) ò 10x4dx 4) ò 2 (-2x - 5). Definite Integrals Calculator. In physics, the area under a velocity vs. doc, 28 KB. Definite Integral Worksheet. Hint: Some Of These Problems Require A Simple Antiderivative, Some Require The Use Of Substitution, And Some Require Integration-by-parts. The student will be given a definite integral and be asked to substitute a variable in, which should make the integral easier to evaluate. The definite integral, the limit of a Riemann sum, can be interpreted as the area under a curve. The Greek letters of summation become Roman letters in the limit. Write Numeric Answers (definite Integrals) In Exact Form (no Decimals). The course states the importance of mathematics. • I use Worksheet 1 after students first encounter the definite integral as signed area. Z (2v5=4 +6v1=4 +3v 4)dv 10. 28B MVT Integrals 4 EX 2 Find the values of c that satisfy the MVT for integrals on [0,1]. Worksheet: Definite Integral Properties and Estima ting Definite Integrals 1. A definite integral in is equal to the signed area between the curve and the x-axis. Designed for all levels of learners, from beginning to advanced. Use the programs on your calculator to find the value of the sum accurate to 3 decimal places. Author: Created by phildb. Integration-using partial fractions. Thus, if all of the areas within the interval will. 2\u2014Definite. The result of finding an indefinite integral is usually a function plus a constant of integration. Then, plug in your upper and lower. You will then be told whether the answer is correct or not. Evaluate the following definite integrals without a calculator. List of Definite Integrals - Free download as PDF File (. Make the substitution to obtain an integral in u 5. (Exercises for Sections 5. The worksheets are offered in developmentally. Score: Sheet 2. This is given as. Follow these steps: • Click in a blank space and type &. 12 2f(x) Given f _ f (x)dr = —6 and j 4, find the values of each of the following definite integrals. Calculus math curriculum is a study of integration and differentiation with limits and continuity. a Express f() in the form xa(x + b)2 + c, stating the values of the constants a, b and c. This website is dedicated to provide free math worksheets, word problems, teaching tips, learning resources and other math activities. Unlike the Inde nite Integral, which is a function, the De nite Integral is a numerical value. (As you can guess, there are also. If you'd like to explore the graph shown in the video (including taking a look at what's inside the "visual" folder), click here. Simplifying Exponents of Polynomials Worksheet Substitution Worksheet Simplifying Exponents of Variables Worksheet Algebra Worksheets List. The problems found in this quiz will challenge you to demonstrate your. This page is prepared by expert faculty member of entrancei , we have carefully selected all important formula and equations of chapter Definite Integral and uploaded the pdf of formula sheet for class 12th maths chapter Definite Integral. R (√ x−1)2 √ x dx 9. 1 Triple Integrals in Clindrical or Spherical Coordinates. 6 continued worksheet 5. every instant, it would become a definite integral. using Leibnitz rule, 17. x a x b f, x f a, b, x 1 2 1 −2 −3 −4 f(x) = 2x y. chcc(e — 4— x 2 from x = —2 to x = 2. 62 -63 ( Worksheet ) 14 Riemann Sums p. Worksheet: Definite Integrals; Check your answers here: Integral Calculator; 5: 24 Sep 2020 (Thu) Notes: Integration by Substitution; Worksheet: Integration by. Worksheets are 201 nya 05, Work definite integrals, 06, Evaluating definite integrals Once you find your worksheet, click on pop-out icon or print icon to worksheet to print or download. About This Quiz & Worksheet. Integrals are used to measure the area between the x-axis and the curve in problem over a particular interval. For example, to calculate the area under the graph of on the interval, one would first take the integral as follows and evaluate at the end points: Volume of a solid See also: Solid of revolution, Volume by cross sections and Multiple integral. Step 1: Set up the integral. This one-page worksheet contains approximately six multi-step problems. Follow these steps: • Click in a blank space and type &. Then evaluate each integral (except for the 4th type of course). This calculus video tutorial explains how to calculate the definite integral of function. For a given function over a given x -interval, the limit of the Riemann sum as the number of rectangles approaches infinity is called a definite integral. Class 12 Maths formulas Definite Integral Formulas. JEE Main Mathematics Definite and Indefinite Integrals. Definite Integrals Worksheet. If rounding, be sure to include at least 5 significant figures. Comparing rates worksheet. 2 answers File Uploaded 01/8/21, 10:04 WS 6. Z x 1 x 2 dx 13. Free PDF download of RD Sharma Solutions for Class 12 Maths Chapter 20 - Definite Integrals solved by Expert Mathematics Teachers on Vedantu. Rewrite the given integral using the properties of integrals. u-substitution or change of variables in definite and indefinite integrals. 4—Integration by u-Substitution and Pattern Recognition 13. It also explains the difference between definite integrals and indefinite integra. Lets try to find the area under a function for a given interval. out of the integral. Definite Integral 2. ©2005 BE Shapiro Page 3 This document may not be reproduced, posted or published without permission. The velocity graph v (t) of a braking 80 car is shown. AP Calculus AB Ch 7: Area And Volume Worksheet Page | 1 7. For the function below, a is the lower limit and b is the upper limit of the interval. Definite Integral Directions: Use the integers 1-9, without repetition, to fill in the empty spaces so that the equation with the definite integral is true. The resolution is to perform a technique called changing the limits. If Ÿ 30 100 f HxL „x =A and Ÿ 50 100 f HxL „x =B, then Ÿ 30 50 f HxL „x = (A) A + B (B) A - B (C) 0 (D) B - A (E) 20 2. In our previous lesson, Fundamental Theorem of Calculus, we explored the properties of Integration, how to evaluate a definite integral (FTC #1), and also how to take a derivative of an integral (FTC #2). Fun! 3) Finally - do you need an entire lesson for teaching Riemann Sums and Definite Integrals? Try this from Jean Adams!. 4—Integration by u-Substitution and Pattern Recognition 13. Integral calculus and particularly the definite integral is the mathematics of such curves. As the name suggests, while indefinite integral refers to the evaluation of indefinite area, in definite integration. Author: Pratima Nayak. ] The Definite Integral of a Continuous Function = Net "Area" Under a Curve [6 min. Use definite integrals to transfer all the numbers in a recipe into a calculus problem. But because this is a definite integral, you still need to express the limits of integration in terms of u rather than x. In this antiderivative worksheet, students compute antiderivatives, as well as definite and indefinite integrals. The worksheets are offered in developmentally. Math Calculus Worksheet Chap 4: Integration Section: Name: Mr. Definite integral could be represented as the signed area in the XY-plane bounded by the function graph as shown on the image below. Definite integrals. Note : The indefinite integral is f ( x ) dx a function of x, where as b definite integral f ( xisdx ) a number. Ch 7 Applications of Definite Integrals AP Questions - due date April 23-24, 2019 see links below: Sec 7. 2 Def Int Num Int Name Date Period Worksheet 4. Therefore the improper integral converges if and only if the improper integrals are convergent. Question: MAT137 – Worksheet #2 Evaluate The Following Integrals. While most people nowadays use the words antidifferentiation and integration interchangeably, according to Wikipedia , antidifferentiation is the process we use when we are asked to evaluate an indefinite integral; having to add a. 4) IBP-Definite Integral Example. As an example, here’s how you would evaluate the definite integral of sin(x)2 from 0 to n/2. Question: MAT137 – Worksheet #2 Evaluate The Following Integrals. Levels of Difficulty : Elementary Intermediate Advanced. Worksheet 14: Even More Area and Definite Integral Exercises [PDF] 15. 90 ATAR student!. ( 2 3) 3 200. 12 2f(x) Given f _ f (x)dr = —6 and j 4, find the values of each of the following definite integrals. Microsoft Excel Worksheet. Then we apply the formula: Work out the first bit. Step 1: Set up the integral. Mixed Integration Worksheet Part I: For each integral decide which of the following is needed: 1) substitution, 2) algebra or a trig identity, 3) nothing needed, or 4) can’t be done by the techniques in Calculus I. Definite Integrals Worksheet. The Definite Integral. Lesson Worksheet: Definite Integrals as Limits of Riemann Sums Mathematics • Higher Education In this worksheet, we will practice interpreting a definite integral as the limit of a Riemann sum when the size of the partitions tends to zero. The resolution is to perform a technique called changing the limits. The result of this research was producing bilingual scientific-worksheet of indefinite integral materials that oriented for learning achievement improvement. Check the formula sheet of integration. Integration Worksheets 1 Solve the following integrals: 1 2 3 4 5 6 7 8 2 Calculate the following integrals: 1 2 3 4 5 6 7 8 3 Solve the following exponential. C: Represent the limiting case of the Riemann Sum as a definite integral. Definite article the. Find the new limits of integration. Interchanging and splitting limits, 20. Substitution in Definite Integrals Worksheets This Calculus - Definite Integration Worksheet will produce problems that involve using substitution in definite integrals to make them easier to evaluate. area, volume, net displacement, and net change. Articles: worksheets exercises, handouts to print. Free Calculus worksheets created with Infinite Calculus. Then evaluate each integral (except for the 4th type of course). Express the limit as a definite integral on the given interval slader. “A definite integral is an integral with lower and upper limits”. Definite Integrals Calculator. Compute the following using the properties of de nite integrals: (a) Z 2 1 g(x)dx (b) Z 2 0 [2f(x) 3g(x)]dx (c) Z 1 1 g(x)dx (d) Z 2 1 f(x)dx+ Z 0 2 g(x)dx (e) Z 2 0 f(x)dx+ Z 1 2 g(x. Mixed Integration Worksheet Part I: For each integral decide which of the following is needed: 1) substitution, 2) algebra or a trig identity, 3) nothing needed, or 4) can’t be done by the techniques in Calculus I. The fundamental theorem of calculus. According to the Fundamental Theorem of Calculus, ∫baF(x). 1 Integral as Net Change Worksheet : Motion on the Line. Problem solving videos included. S v (DOLL — 2 + sinc on 0 < < with n = 4. ) (b) Use the result of part (a) above, and the Fundamental Theorem of Calculus, to again. First we need more notation. Given = 10 and J 2 g(x)dx = —2 , find the values of each ofthe following definite integrals, if possible, by reuriting the given integral using the properties of integrals. Evaluate each of the following integrals. Write Numeric Answers (definite Integrals) In Exact Form (no Decimals). ] A Note on the Definite Integral of a Discontinuous Function [6 min. Definite Integral Calculator computes definite integral of a function over an interval using numerical integration. 11/15--Definite Integrals Worksheet note: on the solutions I did 28 instead of 29. homework and exam 1 practice questions MOVIES Trig Substitution Examples. After you have chosen the answer, click on the button Check Answers. Check the formula sheet of integration. The practicality. 1 Definite integrals involving trigonometric functions We begin by briefly discussing integrals of the form Z 2π 0 F(sinat,cosbt)dt. In physics, the area under a velocity vs. Articles worksheets. There are currently 400 rabbits living on island. 4—Integration by u-Substitution and Pattern Recognition 13. 2 5 5 5 5 x x x dx x x 9 9 31 22 4 4 1 2 2 20 40 3. R sinx (cosx)5 dx 8. A graph of each function is shown as a visual guide. The Definite Integral. The output of a definite integral is anumber, which expresses thenet areaor signed areabetween the curvey Thedxin the integral is thedifferential. It is intended to help students anticipate the formula for the derivative of a function defined as an integral; that is, the Second Fundamental Theorem of Calculus. So, to evaluate a definite integral the first thing that we’re going to do is evaluate the indefinite integral for the function. 2 pdf template or form online. Example 10. John Wilson. To find an exact area, you need to use a definite integral.$$ From the table of integrals we read: $\int \cos x = \sin x + C$. ) When calculating an inde nite integral, it is. So put the limit on the function. But because this is a definite integral, you still need to express the limits of integration in terms of u rather than x. 2 Def Int Num Int Name Date Period Worksheet 4. Score: Sheet 2. Free interactive exercises to practice online or download as pdf to print. The practicality. Integral Worksheets. Alexandrina 2013-03-12 14:29:15. Do not use your calculator! b) is your answer to part (a) an overestimate or an underestimate? Justify your answer. 2 Worksheet 1. notebook 1 April 12, 2018 ∫(2esin t + 2)dt ≈ 21. time graph represents displacement, so the definite integral of velocity gives displacement. Showing top 8 worksheets in the category - Definite Integrals. By using a definite integral find the area of the region bounded by the given curves : By using a definite integral find the area of the region bounded by the given curves : By using a definite integral find the volume of the solid obtained by rotating the region bounded by the given curves around the x-axis :. Clear, easy to follow, step-by-step worked solutions to all worksheets below are available in the Online Study Pack. From your other forums, it seems you haven't encountered integrals, but rather only primitives. The figure given below illustrates clearly the difference between definite and indefinite integration: Some of the important properties of definite integrals are listed below. Calculus - definite integrals. Question: MAT137 – Worksheet #2 Evaluate The Following Integrals. Free Table of Integrals to print on a single sheet side and side. A biologist uses a model which predicts the population will increase 2t+5rabbitsperyearwheret represents the number of years from today. In physics, the area under a velocity vs. 5) a) 3 + 9 – 5 = 7 cm in the positive direction b) 3 + 5 + 9 = 17 cm c) at approximately B because that is where the slope is greatest. Use an integral (or integrals) to compute the area of the triangle in the xy-plane which has vertices (0;0), (2;3), and ( 1;6). This video has a couple of examples of calculating relatively simple definite integrals. the Riemann Sum as a definite integral. If x is restricted to lie on the real line, the definite integral is known as a Riemann integral (which is the usual definition encountered in elementary textbooks). In definite integral the limits of the integral will be given. Learning Objectives. Ch 7 Applications of Definite Integrals AP Questions - due date April 23-24, 2019 see links below: Sec 7. *Volume as a Definite Integral Concepts Worksheet: Students make connections between graphs and analytical notation. area, volume, net displacement, and net change. Class 12 Maths formulas Definite Integral Formulas. quad_vec(f, a, b[, epsabs, epsrel, norm, …]) Adaptive integration of a vector-valued function. For example, to calculate the area under the graph of on the interval, one would first take the integral as follows and evaluate at the end points: Volume of a solid See also: Solid of revolution, Volume by cross sections and Multiple integral. Evaluate a definite integral in an instant, or plot an integral with varying bounds. Go back to Worksheet 5. The figure given below illustrates clearly the difference between definite and indefinite integration: Some of the important properties of definite integrals are listed below. Express the limit as a definite integral on the given interval slader. So put the limit on the function. Previous Years Examination Questions 1 Mark Questions 4 Mark Questions 6 Mark Questions. Definite integral could be represented as the signed area in the XY-plane bounded by the function graph as shown on the image below. Write Numeric Answers (definite Integrals) In Exact Form (no Decimals). 1see Simmons pp. Question 5: Can a definite integral be negative? Answer: A definite integral can be negative because integrals measure the area between the x. Day 117: I can set up definite integrals, integrate, and evaluate to find a volume of revolution about an axis adjacent to the bounded region. The fundamental theorem of calculus. The definition of indefinite integrals is that: let f. $$In some cases, integration by parts must be repeated. “A definite integral is an integral with lower and upper limits”. CHAPTER 5 WORKSHEET INTEGRALS Name Seat # Date Trapezoidal Rule 1. Definite Integrals Worksheet. In a hurry? Browse our pre-made printable worksheets library with a variety of activities and quizzes for all K-12 levels. In this Section we introduce definite integrals, so called because. divergent if the limit does not exist. The trickiest thing is probably to know what to use as the $$u$$ (the inside function); this is typically an expression that you are raising to a power, taking a trig function of, and so on, when it’s not just an “$$x$$”. Compute the definite integral 1 2 3 x3 −2 dx. ( x) − 3 x 5 d x. A biologist uses a model which predicts the population will increase 2t+5rabbitsperyearwheret represents the number of years from today. Tap for more steps. The definite integral as the area of a region. ©2005 BE Shapiro Page 3 This document may not be reproduced, posted or published without permission. Write Numeric Answers (definite Integrals) In Exact Form (no Decimals). R sinx (cosx)5 dx 8. The Definite Integral. 62-63 (Worksheet) 14 Riemann Sums p. Consider a function of 2 variables z=f(x,y). Find where. Question: MAT137 – Worksheet #2 Evaluate The Following Integrals. Concept explanation. net 13 Riemann Sums p. Write an expression for the area under this curve between a and b. Free definite integral calculator - solve definite integrals with all the steps. AP Calculus Riemann Sum to Integral Worksheet Convert each limit of a Riemann sum to a definite integral, and evaluate. The interval [a, b] is called the range of integration, and the values a and b are known as the lower and upper limits respectively. Definite integrals - sine function For any of the following questions there is exactly one correct answer. 2\u2014Definite. Write Numeric Answers (definite Integrals) In Exact Form (no Decimals). is constant with respect to. (1) Our method is easily adaptable for integrals over a different range, for example between 0 and π or between ±π. 6 worksheet 5. 62-63 (Worksheet) 14 Riemann Sums p. Evaluate and interpret. the left side, the intervals on which f(x) is negative give a negative value to the integral, and these “negative” areas lower the overall value of the integral; on the right the integrand has been changed so that it is always positive, which makes the integral larger. Consider the integral Z 4 0 16 2t dt. Free-Response Definite Integrals: You will not commonly be asked to evaluate common definite integrals on the free-response, but rather you will be asked. Introduction to Integration - Calculus math review. Compute the following using the properties of de nite integrals: (a) Z 2 1 g(x)dx (b) Z 2 0 [2f(x) 3g(x)]dx (c) Z 1 1 g(x)dx (d) Z 2 1 f(x)dx+ Z 0 2 g(x)dx (e) Z 2 0 f(x)dx+ Z 1 2 g(x. While most people nowadays use the words antidifferentiation and integration interchangeably, according to Wikipedia , antidifferentiation is the process we use when we are asked to evaluate an indefinite integral; having to add a. MATH 229 Worksheet Integrals using substitution Integrate 1. 67 17 Definite Integrals p. An integral of the form intf(z)dz, (1) i. Definite Integral Definition. Type in any integral to get the solution, free steps and graph This website uses cookies to ensure you get the best experience. Given the definite integral ³ 8 0 x2dx, a) use the Trapezoidal Rule with four equal subintervals to approximate its value. L6 Applications of the Derivative and Definite Integral WORKSHEET KEY. Hint: Some Of These Problems Require A Simple Antiderivative, Some Require The Use Of Substitution, And Some Require Integration-by-parts. In this definite integral worksheet, students use substitution to simplify integrals. 66 16 Definite Integrals p. The calculator will evaluate the definite (i. Definite integrals on adjacent intervals. As the name suggests, while indefinite integral refers to the evaluation of indefinite area, in definite integration. worksheets for just 19. Definite Integrals. The copyright holder makes no representation about the accuracy, correctness, or. ANSWER KEY INTEGRALS a) = 13 4 -3 4 4. Worksheet # 24: De nite Integrals and The Fundamental Theorem of Calculus 1. When the definite integral exists (in the sense of either the Riemann integral or the more advanced Lebesgue integral ), this ambiguity is resolved as both the proper and improper. MATH 3B Worksheet: Riemann sums and de nite integrals Name: Perm#: 1. Make the substitution to obtain an integral in u 5. Worksheet 12: Riemann Sums, Integrals, and the Fundamental Theorem of Calculus [PDF] 13. Now use these values of u as your new limits of integration:. 59 -61 ( Worksheet ) 12 QUIZ 2 13 Riemann Sums p. Here we're getting a formula for calculating definite integrals. The worksheet is an assortment of 4 intriguing pursuits that will enhance your kid's knowledge and abilities. Example 1: Find the definite integral value of the function, f(x) = 3x 2 – 2x + 1 from ‘x’ ranging from 0 to 2. 62-63 (Worksheet) 14 Riemann Sums p. Integration Problems. Important Questions for Class 12 Maths Class 12 Maths NCERT Solutions Home Page. The trickiest thing is probably to know what to use as the $$u$$ (the inside function); this is typically an expression that you are raising to a power, taking a trig function of, and so on, when it’s not just an “$$x$$”. Question: MAT137 – Worksheet #2 Evaluate The Following Integrals. Free definite integral calculator - solve definite integrals with all the steps. Free math problem solver answers your algebra, geometry, trigonometry, calculus, and statistics homework questions with step-by-step explanations, just like a math tutor. Remember that F(x) is a primitive of f(t), and we already know how to find a lot of primitives! The last step is to specify the value of the constant C. using Leibnitz rule, 17. Whenever we are calculating area in a given interval, we are using definite integration. Write Numeric Answers (definite Integrals) In Exact Form (no Decimals). On this worksheet you will use substitution, as well as the other integration rules, to evaluate the the given de nite and inde nite integrals. Z (2x 5)(3x+1)dx 15. Integral I: The integrand is discontinuous at x= 0, and the integral is therefore given as the sum of two improper integrals: Z 1 1 dx x 2 = Z 0 1 dx x + Z 1 0 dx x2: The the second integral on the right hand side is R 1 0 1 xp for p= 2 1, and so is divergent (the rst one is too). Worksheet 14: Even More Area and Definite Integral Exercises [PDF] 15. Class 12 Maths formulas Definite Integral Formulas. Take your time with this section, there's a lot to digest, but it's worth your time. The PDF file has two. by Fernando Daniel. A biologist uses a model which predicts the population will increase 2t+5rabbitsperyearwheret represents the number of years from today. Browse other questions tagged integration definite-integrals or ask your own question. In this worksheet, we will practice using properties of definite integration, such as the order of integration limits, zero-width limits, sums, and differences. Question: MAT137 – Worksheet #2 Evaluate The Following Integrals. Hint: Some Of These Problems Require A Simple Antiderivative, Some Require The Use Of Substitution, And Some Require Integration-by-parts. The definite integral; Riemann sums, area, and properties of the definite integral. Motion revisited. Microsoft Excel Worksheet. Integrating quadratic denominator. (Exercises for Sections 5. Experiments with C++ SIMD instruction to approximate definite integrals by saving CPU time. Write Numeric Answers (definite Integrals) In Exact Form (no Decimals). 25 3 4 3 12 4 tt t t dt 1. Learning Objectives. Jump to navigation Jump to search. List of definite integrals. Interchanging and splitting limits, 20. The Definite Integral. Z b a f(x) dx = Z a b f(x) dx x = b a n = a b n 2. Worksheet Definite Integrals. You will then be told whether the answer is correct or not. ] Finding Definite Integrals [10. For Indefinite integral is one of the types of integration. Definite integrals of functions which may have negative values over the interval of integration; The definite integral as a limit of a sum -riemann. Thus, if all of the areas within the interval will. The following is a quiz to review integral formulas and do simple substitutions. Integration Worksheet - Basic, Trig, Definite Integral Notes Definite Integrals Notes Definite Integrals Notes filled in. The copyright holder makes no representation about the accuracy, correctness, or. a) tdt 1 ∫4 b) xdx 4 ∫4+xdx 4 ∫1 2) Evaluate the following definite integrals using areas. 66 16 Definite Integrals p.$$ From the table of integrals we read: $\int \cos x = \sin x + C$. The ability to carry out integration by substitution is a skill that develops with practice and experience. pdf), Text File (. This page is prepared by expert faculty member of entrancei , we have carefully selected all important formula and equations of chapter Definite Integral and uploaded the pdf of formula sheet for class 12th maths chapter Definite Integral. Answer: Yes, definite integrals can be negative. Concept #1: Fundamental Theorem. The Definite Integral. 67 17 Definite Integrals p. Here we're getting a formula for calculating definite integrals. 2\u2014Definite. the Riemann Sum as a definite integral. ) When calculating an inde nite integral, it is. The AP Calculus Problem Book Publication history: First edition, 2002 Second edition, 2003 Third edition, 2004 Third edition Revised and Corrected, 2005. Definite Integrals Worksheet. Find%the%areaunder%the%curve,% ,%above%the%x:axis,%and%between%x%=1%and%x%=2. %%Thatis,%in%full% “definite%integral”%form,%we%need%to%evaluate% € 1. Get Free Access See Review. Evaluate the definite integral using integration by parts with Way 2. pdf View Download. The formal definition of a definite integral is stated in terms of the limit of a Riemann sum. Approximate area video demonstrating how the area under a curve can be approximated using a summation of rectangles. Microsoft Excel Worksheet. Evaluate the definite integral using integration by parts with Way 2. 1A – Antiderivatives and Indefinite Integration Objectives: 1. You might like to read Introduction to Integration first! A Definite Integral has start and end values: in other words there is an interval [a, b]. Learning Objectives. 8 worksheet 5. Worksheet 4. Each function, f(x), is defined as a piecewise function, so you will need to separate the integral into two integrals, according to the domain covered by each piece. homework and exam 1 practice questions MOVIES Trig Substitution Examples. More Definite Integrals worksheet & answers Evaluating Definite Integrals worksheet & answers Definite Integrals Formulas Похожие запросы для definite integral worksheet with answers. If an integral has upper and lower limits, it is called a Definite Integral. 11: Integrals Worksheet See all chapters. We want to focus on the definite integral of a polynomial function. Definite integrals u-sub 32. pdf - Calculus Maximus WS 4. All Chapters; Ch. Discovery worksheet on determinging convergence of improper integrals by comparison; Practice worksheet on choosing which technique of integration to use (includes solutions) Applications of integration: Practice worksheet on finding areas of regions and volumes of solids; A variety of applications of integration (getting the big picture of. C: Represent the limiting case of the Riemann Sum as a definite integral. Includes solution sheet. Created Date: 11/5/2012 10:54:12 PM. x a x b f, x f a, b, x 1 2 1 −2 −3 −4 f(x) = 2x y. Register free for online tutoring session to clear your doubts. INTRODUCTION. It depends on how many discontinuities there are and how bad they are. chcc(e — 4— x 2 from x = —2 to x = 2. 34) Write these notes on your paper: The Definite Integral as an Area: a) When f (x) ≥ 0 and a < b: The area under the graph of f and above x-axis between a and b = ∫ b f(x) dx a b) When f (x) Is Not Positive: When f (x) is positive for some x values and negative for others, and a < b: ∫ b f(x) dx is the sum of areas above the x-axis, counted positively, and areas below the a x-axis, counted negatively. But because this is a definite integral, you still need to express the limits of integration in terms of u rather than x. The De nite Integral, as has been stated already, has wide-ranging. Q1: Suppose 𝑓 has absolute minimum value 𝑚 and absolute maximum value 𝑀. Worksheet #1. Quantifiers worksheets and online activities. Hint: Some Of These Problems Require A Simple Antiderivative, Some Require The Use Of Substitution, And Some Require Integration-by-parts. Substitute u back to be left with an expression in terms of x Steps. net 13 Riemann Sums p. Our downloadable and printable Calculus Worksheets cover a variety of calculus topics including limits, derivatives, integrals, and more. Integration Worksheet - Substitution Method Solutions (a)Let u= 4x 5 (b)Then du= 4 dxor 1 4 du= dx (c)Now substitute Z p 4x 5 dx = Z u 1 4 du = Z 1 4 u1=2 du 1 4 u3=2 2 3 +C = 1. If the interval is infinite the definite integral is called an improper integral and defined by using appropriate limiting procedures. b State the coordinates of the turning point of the curve y = f(x). The input variables, weight and wvar, are used to weight the integrand by a select list of functions. Definite Integrals with u-Substitution – Classwork When you integrate more complicated expressions, you use u-substitution, as we did with indefinite integration. It begins by introducing the concept of the antiderivative and basic techniques for indefinite integration. Evaluate the definite integral using integration by parts with Way 2. worksheets for just \$19. region bounded by the graph of f, the x-axis, and the vertical lines x a and x b is given. Applying definite integrals. Live worksheets > English > English as a Second Language (ESL) > Quantifiers. C: Represent the limiting case of the Riemann Sum as a definite integral. Determine indefinite integrals of the form $$\int f(ax +b) \ dx$$ Determine $$f(x)$$ given $$f'(x)$$ and $$f(a) = b$$ an initial condition in a range of practical and abstract applications including coordinate geometry, business and science. You would need an overloaded integral function: double integral(double(*f)(double x), double(*g)(double x, double y), double a, double b, int n) {. If the limit is finite we say the integral converges, while if the limit is infinite or does not exist, we say the integral. When you approximate the area under a curve, the tops of the rectangles form a saw tooth shape that doesn’t fit perfectly along the smooth curving function. Right Riemann Sum. Hence, Z x7 dx = 1 8 x 8 + C. Learn properties of definite integrals along with its proof and video content given at the end. Definite Integral Notation Leibniz introduced the notation for the definite integral. Question: MAT137 – Worksheet #2 Evaluate The Following Integrals. They find the values of the upper and lower points and interpret points found on a graph. By using the formula for integration by parts we have $$\int x \sin x dx =x \cdot( – \cos x) + \int \cos x. In definite integral the limits of the integral will be given. If you'd like to explore the graph shown in the video (including taking a look at what's inside the "visual" folder), click here. Definite Integral Worksheet. is the area of the region in the xy-plane bounded by the graph of f, the x-axis, and the lines x = a and x = b, such that area above the x-axis adds to the total, and that below the x-axis subtracts from the total. 3: The Definite Integral Worksheet. A definition for derivative, definite integral, and indefinite integral (antiderivative) is necessary in understanding the fundamental theorem of calculus. Type in any integral to get the solution, free steps and Definite Integral Calculator. Calculus math curriculum is a study of integration and differentiation with limits and continuity. 173 pounds of sand added to the tank in the first 4 hours 0 4 2 ∫(5sin t +3√t)dt ≈ 14. Use indefinite integral notation for antiderivatives. Metric units worksheet. Performance tests for different methods to compute definite integrals. Quantifiers worksheets and online activities. In calculus of a single variable the definite integral for f(x)>=0 is the area under the curve f(x) from x=a to x=b. Use definite integrals to transfer all the numbers in a recipe into a calculus problem. Definite Integrals Worksheet. In a definite integral, there is an infinitive number of boxes Approximations - Over summation- bigger than actual area - Under summation- smaller than actual area - Trapezoidal summation- divide into trapezoids 1/2 (b1-b2)h Area as a Limit - Infinite number of boxes - First, find anti-derivative (integral). Whereas, a definite integral represents a number and identifies the area under the curve for a specified region. Author: Pratima Nayak. CHAPTER 5 WORKSHEET INTEGRALS Name Seat # Date Trapezoidal Rule 1. Integrals are of two types, one being Definite integral and the other being Indefinite integral. Z 4 z7 7 z4 +z dz 7. 2 )plemental Instruction Iowa State University 10 Evaluate the definite integrals usin f _1 3x2 2 dX Leader: Course: Instructor: Date: Kim Math 165 Johnston 11/6/12 and 11/7/12 2X + dx (a (£1) the d 3T1-(e +3 when -- applur g/ T) 61m. The validity obtained content, language, and illustration from expert judgment. Substitution for Definite Integrals Mean Value Theorem for Integrals. Free indefinite integral calculator - solve indefinite integrals with all the steps. Education Focus Hub. OTHER EXAMPLE Find the definite integral of the following. Fun! 3) Finally - do you need an entire lesson for teaching Riemann Sums and Definite Integrals? Try this from Jean Adams!. View, download and print Definite Integrals & Numeric Integration - Worksheet 4. All the immediate integrals. Thats the perimeter of the function y, from b to a. 1 Triple Integrals in Clindrical or Spherical Coordinates. ( 2 3) 3 200. Z (2v5=4 +6v1=4 +3v 4)dv 10. Question: MAT137 – Worksheet #2 Evaluate The Following Integrals. The mean value theorem for integrals state that somewhere "between" the inscribed. Alexandrina 2013-03-12 14:29:15. You might like to read Introduction to Integration first! A Definite Integral has start and end values: in other words there is an interval [a, b]. Definite Integrals Worksheet. 7 worksheet 5. Compute a definite integral. Z x(2x+3)dx 14. The definite integral is then developed by way of Riemann sums and the Fundamental Theorem of Calculus. For definite integrals, use numeric approximations. Calculus - definite integrals. Numerical Integration in your calculator; HW: Definite Integrals as Area Worksheet; Quiz 2 (Sections 5. Input a function, the integration variable and our math software will give you the value of the integral covering the selected interval (between the lower limit and the upper limit). This website is dedicated to provide free math worksheets, word problems, teaching tips, learning resources and other math activities. ∫(xdx3 +1) 23( ) 4. The same principle we use for calculating the definite integral. English Practice Downloadable PDF Grammar and Vocabulary Worksheets. a) Set up a definite integral to find the area of the region. MATH 229 Worksheet Integrals using substitution Integrate 1. Z (2t3 t2 +3t 7)dt 5. 1 Techniques of integration Name: _____ 1 Evaluate the following definite integrals: Same question, asked differently; Determine the area between the x-axis, the line !=1 and the line !=3 for the fucntion: 2 Evaluate the following definite integrals: 3 Evaluate the following definite integrals: %6sin2! ,!! " # %6sin2! ,!! " #. It is represented as;.$$ In some cases, integration by parts must be repeated. Module function, 16. On October 27, 2020, the Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), and the U. 7 worksheet 5. Printable exercises. 2 Definite Integrals & Numeric Integration KEY. Power of Implicit Differentiation. For this reason you should carry out all of the practice exercises. Definite Integral Notation Leibniz introduced the notation for the definite integral. Worksheet 14: Even More Area and Definite Integral Exercises [PDF] 15. e I dx a e ∞ −∞ =<< ∫ + Consider the contour integral over the path shown in the figure: 12 3 4. 1 Definite integrals involving trigonometric functions We begin by briefly discussing integrals of the form Z 2π 0 F(sinat,cosbt)dt. quad_vec(f, a, b[, epsabs, epsrel, norm, …]) Adaptive integration of a vector-valued function. Z a a f(x) dx = 0 x = a a n = 0 3. the meaning of this integral using correct units. If we make it equal to "a" in the previous equation we get:. Evaluate and interpret. Integration-using partial fractions. In this worksheet, we will practice using properties of definite integration, such as the order of integration limits, zero-width limits, sums, and differences. Articles worksheets. Solomon Press C2 INTEGRATION Worksheet B 1 f(x) ≡ 3 + 4x − x2. 2 Areas, Distances and the Definite Integral & 1. Definite integrals give a result (a number that represents the area) as opposed to indefinite integrals, which are represented by formulas. Worksheet: Part 2 Segment 1: Area Bounded by a Curve; Definite Integral Power Rule 1. Introduction to Integration - Calculus math review. A tutorial on the definition of definite integrals, properties of definite integrals, relationship between definite integrals and areas and the use of technology to evaluate definite integrals using the definition. 2 Introduction When you were first introduced to integration as the reverse of differentiation, the integrals you dealt with were indefinite integrals. Integration Full Chapter Explained - Integration Class 12 - Everything you need. A definite integral in is equal to the signed area between the curve and the x-axis. Education focus Hub's mission is to provide free and better educational help to the students. But because this is a definite integral, you still need to express the limits of integration in terms of u rather than x. 333 3 3 3 3 3 x dx x x x 4 32 1 5 5 5 5 75 4. We could guess, but how could we figure out the exact area? Below, using a few clever ideas, we actually define such an area and show that by using what is called the definite integral we can. The student will be given a definite integral and be asked to substitute a variable in, which should make the integral easier to evaluate. 2 12) What is the exact area of the region between y x and the x-axis , over the interval [0, 1]?. Each function, f(x), is defined as a piecewise function, so you will need to separate the integral into two integrals, according to the domain covered by each piece. As an example, here’s how you would evaluate the definite integral of sin(x)2 from 0 to n/2. ( x) − 3 x 5 d x. The symbol is read as “the integral from a to b of f of x dee x,” or sometimes as “the integral from. The figure given below illustrates clearly the difference between definite and indefinite integration: Some of the important properties of definite integrals are listed below. Worksheet Definite Integrals. Hint: Some Of These Problems Require A Simple Antiderivative, Some Require The Use Of Substitution, And Some Require Integration-by-parts. Hence, Z x7 dx = 1 8 x 8 + C. 2 Definite Integrals & Numeric Integration KEY. 1A – Antiderivatives and Indefinite Integration Objectives: 1. If the integral is definite then the table can be used to find the primitive and then you can evaluate it at the limits of integration. AP Calculus CHAPTER 5 WORKSHEET 2Ttz—. The Definite Integral. With respect to x: yxyx 22,8 -7 A couple of notes about this process (using the Absolute Value Trick) In this problem, we are able to see what the curves look like. Notice that we can check this result by di erentiating: F(x) = 1 8 x 8 + C F0(x) = x7 (The derivative of the constant C is just zero. For example, to calculate the area under the graph of on the interval, one would first take the integral as follows and evaluate at the end points: Volume of a solid See also: Solid of revolution, Volume by cross sections and Multiple integral. R (x+1)sin(x2 +2x+3)dx 13. Let: x = b a n x k = a+ k x x k be any point in [x k 1;x k] The de nite integral of f(x) from x = a to x = b is Z b a f(x) dx = lim n!1 Xn k=1 f(x k) x Properties of Integrals: 1. S v (DOLL — 2 + sinc on 0 < < with n = 4. Math Calculus Worksheet Chap 4: Integration Section: Name: Mr. Evaluate and interpret. every instant, it would become a definite integral. Free PDF download of RD Sharma Solutions for Class 12 Maths Chapter 20 - Definite Integrals solved by Expert Mathematics Teachers on Vedantu. Free indefinite integral calculator - solve indefinite integrals with all the steps. 11/15--Definite Integrals Worksheet note: on the solutions I did 28 instead of 29. Express the limit as a definite integral on the given interval slader. 3) arc length lecture (7. This should explain the similarity in the notations for the indefinite and definite integrals. Approximate area video demonstrating how the area under a curve can be approximated using a summation of rectangles. Alexandrina 2013-03-12 14:29:15. Write Numeric Answers (definite Integrals) In Exact Form (no Decimals). About This Quiz & Worksheet. x 1 2 3 4 y The shaded region is bounded by the curve, the x-axis and the lines x = 1 and x = 4. Integration is the inverse of differentiation and is often called antidifferentiation. Missing addend worksheets. Basic Integration Formulas. 1 Indefinite Integrals Calculus. For this reason you should carry out all of the practice exercises. Download free printable worksheets for CBSE Class 12 Indefinite & Definite Integrals with important topic wise questions, students must practice the NCERT Class 12 Indefinite & Definite Integrals worksheets, question banks, workbooks and exercises with solutions which will help them in revision of important concepts Class 12 Indefinite & Definite Integrals. Problems featuring functions defined by integrals have occurred frequently on recent AP Calculus. 34) Write these notes on your paper: The Definite Integral as an Area: a) When f (x) ≥ 0 and a < b: The area under the graph of f and above x-axis between a and b = ∫ b f(x) dx a b) When f (x) Is Not Positive: When f (x) is positive for some x values and negative for others, and a < b: ∫ b f(x) dx is the sum of areas above the x-axis, counted positively, and areas below the a x-axis, counted negatively. 2 Areas, Distances and the Definite Integral & 1. Displaying top 8 worksheets found for - Indefinite Integrals Substitution.
|
2021-04-23 03:23:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8652153015136719, "perplexity": 727.3998221833294}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039626288.96/warc/CC-MAIN-20210423011010-20210423041010-00234.warc.gz"}
|
http://makeyourowntextminingtoolkit.blogspot.co.uk/2017/02/
|
## Tuesday, 21 February 2017
### Dimension Reduction with SVD ... Let's Try It!
In the last two posts, here and here, we wanted to see how we could:
• Visualise lots of documents .. each with many dimensions (words) .. on a 2 dimensional plot .. so that we can see them clearly and usefully on a computer or on a book page.
• Extract out the underlying topics (e.g. vehicle, cooking) ... especially when the documents don't actually mention these more general topic names specifically.
We found a tool called Singular Valued Decomposition which should help. Let's try it it on real data.
### The Mixed Corpus
We previously created the mixed corpus, made of 13 documents from each of the Recipes, Iraq, Macbeth and Clinton data sets. We previously used it to see if our method of grouping similar documents worked.
We'll keep our code in a notebook on GitHub at:
We'll apply our stop-word filter, and create the relevance index as usual. We create the SVD decomposition of the word-document matrix $\mathbf{A}$, which saves a hdf5 file with the three matrices $\mathbf{U}$, $\mathbf{\Sigma}$, and $\mathbf{V}^T$
tmt.svd.calculate_singular_value_decomposition(cr.content_directory)
We now want to get a feel for how many useful dimensions there are. One way of doing this is looking at the eigenvalues .. the diagonal elements of the $\mathbf{\Sigma}$ matrix. We can see this by plotting them as a bat chart.
# get SVD eigenvalues
eigenvalues = tmt.svd.get_svd_eigenvalues(cr.content_directory)
# visualise the SVD eigenvalues as
tmt.visualisation.plot_bar_chart(eigenvalues)
The eigenvalues look like the following:
We can see there are two eigenvalues that stick out well above the others. That means the general nature of the mixed data set can be broadly reconstructed with just these two eigenvalues. If we wanted to be more detailed, we can see the next 3 eigenvalues also stick out as a bit larger than the rest which seem to fall away to a very small number.
We can explore these top 5 eigenvalues when we look at extracting topics below.
First let's see what the transformed document-view $\mathbf{\Sigma} \cdot \mathbf{V}^T$ looks like .. using only the top 2 eigenvalues in $\mathbf{\Sigma}$ to take a 2-dimensional slice. Hopefully similar documents are placed near each other.
# get document-view projection onto 2 dimensions
document_view = tmt.svd.get_document_view(cr.content_directory)
# plot documents in reduced dimension space with a 2-d scatter
tmt.visualisation.plot_scatter_chart(document_view)
Here's what the 2-d plot of the document-view looks like:
Well, we can see some clusters .. the dots aren't scattered everywhere in a random manner. But can we see 4 clusters, as we hope? Well, we'll have to see where the original documents from the RecipesIraqMacbeth and Clinton data sets ended up on that chat. We can do this because we kept the document-view matrix ordered with document names associated with the pandas data frame columns.
Here's that plot separated out into the documents that we know they are:
That's better! We can clearly see that there is indeed a Recipes cluster and a Clinton cluster. Yu can also see the documents are spread along lines which are perpendicular to each other .. in other worse, these two sets of documents are as different as could be, albeit being squished into a 2-d space! Cool!
The Macbeth and Iraq clusters are much closer to zero, they are distinct clusters, but they aren't as distant from each other as the recipes and Clinton documents are. We saw before that the Iraq and Macbeth documents merge most easily in terms of similarity, so this shouldn't surprise us.
Maybe there's a lesson there .. Macbeth and Iraq .. both about corruption, death and intrigue?
Now, let's try to extract some topics.
For topics, we're looking at the word-view $\mathbf{U} \cdot \mathbf{\Sigma}$. The columns of these matrix are the topics, made up of linear-combinations of the words from the original vocabulary. Because we've truncated the singular matrix $\mathbf{\Sigma}$, this word-view will have zero-value columns except for the left-most n where n is the number of retained eigenvalues in $\mathbf{\Sigma}$.
The way we code this is to take each column from the left-most n that aren't zero. For each one, we re-order it so that the values are in descending order .. this will give us the most significant word at the top of that column. We can then truncate these columns to only retain the most contributing words. We also remove the sign +/- of the values because a value of -0.8 is more contributing than a value of +0.001. Its the value of the elements that matters, it shows how much of the word contributes to the topic.
Let's try it:
# get top n topics, n is usually the same as key dimensions identified by the eigenvalue bar chart above
number_of_topics = 4
# how many words in each topic (the most significant)
topic_length = 10
topics_list = tmt.svd.get_topics(cr.content_directory, number_of_topics, topic_length)
That means we're picking out only the top 4 topics .. and each of these topics will be limited to the 10 most significant words. Printing the returned list of topics gives:
topic # 0
sauce 0.036254
butter 0.031721
broth 0.029299
boiled 0.028786
flour 0.028568
little 0.027809
water 0.027283
rice 0.025682
quantity 0.024787
salt 0.018478
Name: 0, dtype: float64
topic # 1
subject 0.042459
benghazi 0.033981
state 0.032291
f201504841 0.032254
redactions 0.031469
unclassified 0.031251
sensitive 0.028776
waiver 0.027892
05132015 0.027635
department 0.025765
Name: 1, dtype: float64
topic # 2
vegetables 0.036591
sauce 0.030400
rice 0.023624
soup 0.022900
fish 0.019670
cabbage 0.017470
boiled 0.016740
greens 0.015799
flour 0.015604
cooking 0.014433
Name: 2, dtype: float64
topic # 3
rice 0.041190
vegetables 0.037766
saffron 0.018390
cabbage 0.017364
marrow 0.017055
greens 0.016380
cooked 0.012974
beef 0.012837
soup 0.012059
them 0.011880
Name: 3, dtype: float64
We can see quite clearly the top 2 topics are distinct .. and about two very different themes. One is clearly about cooking, and the other about the Clinton emails, specifically about Libya.
That's great! We've extracted two topics.
What about the next two topics? These are also about cooking but these are less influential topics. How do we see this? If we plot the sum of the absolute values of the topic columns we can see the following:
The top 2 columns (x-axis) have the largest sums .. the next two drop significantly.
Great! That seems to work .. but lets try another, less contrived dataset.
### Recipes Corpus
Let's try the recipes corpus. We know this isn't a mixed corpus containing very different themes. Let' see what happens anyway .. it'll be useful to see how a mono-themed corpus yields to SVD.
The bar chart of eigenvalues shows only one value leading above the rest:
That probably suggests that there is only one main topic or theme of the data set. Indeed we know this to be true.
Let's see what the document-view plotted in 2-dimensions looks like:
There's a cluster ... not two or more distinct and distance clusters. Again we expected this.
The list if topics isn't that distinct, and the sum of the elements of the topic columns shows just one dominant theme:
Well - even if that didn't reveal great insights, we can see what a mono-themed dataset looks like when SVD is applied.
Let's try another, bigger corpus.
### Iraq Inquiry Report
The Iraq Inquiry Report is, at one level, all about one topic - the Iraq war and the circumstances that led to it, but it is a big enough data set that should contain several sub-topics. Let's see what happens...
So there's clearly one eigenvalue much bigger than the rest, but there are maybe four others that also stick out as significant. This could mean there are 4 or 5 topics. Let's continue ...
The document-view plotted in 2-dimensions shows one cluster that is very clearly distant and distinct from the rest. What are these? Let's see:
document_view.T[document_view.T[0] > 0.01]
01
the-report-of-the-iraq-inquiry_section_annex-4.txt0.0398910.001821
If we look closer at this document .. we see it is an annex, and in fact a set of maps. So that is indeed different from the rest of the documents which are more narrative.
Let's zoom into that central grouping a little more:
That's not quite as enlightening .. there still seems to be one cluster with a couple of off-shoots. Let look at these. The documents with large x-coordinates are...
document_view.T[document_view.T[0] > 0.001]
01
the-report-of-the-iraq-inquiry_section_annex-2.txt0.001225-0.003456
the-report-of-the-iraq-inquiry_section_annex-3.txt0.001312-0.003445
the-report-of-the-iraq-inquiry_section_annex-4.txt0.0398910.001821
If we look closer at these annexes .. (we saw 2 above) we find that they are indeed different from the main data set .. they are a glossary of terms (annex-2), a list of names and posts (annex-3) and the set of maps (annex-4) we saw above.
What about those docs dangling downwards ...
document_view.T[document_view.T[1] < -0.006]
01
the-report-of-the-iraq-inquiry_section-111.txt0.000499-0.007369
the-report-of-the-iraq-inquiry_section-112.txt0.000415-0.010617
Again, looking at these two documents, sections 111 and 112, we find that they are both about de‑Ba’athification, which the plot tells us must be sufficiently different and unique compared to the rest of the documents.
Let's look at some of the topics, taking the top 10, and ten words in each:
topic # 0
multinational 0.019513
map 0.018873
mndn 0.009999
maps 0.009649
division 0.007975
dissolved 0.005887
provinces 0.005885
southeast 0.005879
mnfnw 0.005734
mndnc 0.005734
Name: 0, dtype: float64
topic # 1
debaathification 0.010073
resolution 0.003564
basra 0.002889
weapons 0.002380
inspectors 0.002251
destruction 0.002238
cpa 0.002217
wmd 0.002115
biological 0.002111
Name: 1, dtype: float64
topic # 2
debaathification 0.016325
baath 0.002761
resolution 0.002180
postinvasion 0.002080
no1 0.002078
bremer 0.002075
weapons 0.001960
destruction 0.001869
baathists 0.001798
biological 0.001708
Name: 2, dtype: float64
topic # 3
telegrams 0.008248
quotes 0.004428
documents 0.003594
quote 0.003523
bold 0.002878
spellings 0.002869
redactions 0.002845
egram 0.002822
transcripts 0.002773
navigate 0.002693
Name: 3, dtype: float64
topic # 4
inquests 0.004623
families 0.004157
bereaved 0.003888
coroners 0.002850
mental 0.002770
boi 0.002714
reservists 0.002711
investigations 0.002630
telic 0.002620
inquest 0.002611
Name: 4, dtype: float64
topic # 5
basra 0.003605
inquests 0.003233
bereaved 0.002699
debaathification 0.002648
families 0.002642
butler 0.002636
destruction 0.002585
weapons 0.002520
dfid 0.002233
witness 0.002220
Name: 5, dtype: float64
topic # 6
witness 0.009471
director 0.001858
20012003 0.001808
representative 0.001532
lieutenant 0.001475
deputy 0.001458
resolution 0.001426
20002003 0.001356
commander 0.001354
butler 0.001294
Name: 6, dtype: float64
topic # 7
resolution 0.003268
snatch 0.003229
vehicle 0.002454
istar 0.002397
vehicles 0.002338
hutton 0.002313
butler 0.002294
requirement 0.001761
mobility 0.001725
destruction 0.001704
Name: 7, dtype: float64
topic # 8
treaty 0.004037
britain 0.003420
witness 0.003097
feisal 0.002870
ottoman 0.002433
nuri 0.002356
angloiraqi 0.002282
mesopotamia 0.002100
rashid 0.001794
resolution 0.001672
Name: 8, dtype: float64
topic # 9
veterans 0.004128
mental 0.003520
medical 0.002977
inquests 0.002574
health 0.002349
inquest 0.002167
reservists 0.001946
coroners 0.001898
resolution 0.001734
witness 0.001710
Name: 9, dtype: float64
This is much better than I would have expected .. each of these topics is meaningful .. the SVD seems to have pulled them out despite the lack of clear cluster on the 2-d plot:
• topic 0 is about maps, regions, provinces,
• topics 1, 2 are about de'baathification and post-invasion policy and events
• topic 3 is clearly about messages, telegrams, spelling, transcripts, quotes
• topic 4 is about inquests, families, bereaved, coroners
• topic 5 is also about inquests but seems to be focussed on basra and the Butler report
• not fully clear what topic 6 is about
• topic 8 is about the more historical topics of treaties and the ottoman and British empires
Given how well this is going .. why stop at 10 topics .. an experiment with 20 topics still yield great results with topics:
• topic 11 about treasury, funding, costs
• topic 18 about policing, basra, iraqiisaton and reform
• etc..
This is really really impressive! I certainly didn't expect such a treasure trove of topic insights to emerge!
### Conclusion
• SVD is great at extracting topics - we've seen the amazing results popping out of the Iraq Report corpus.
• SVD is ok at visualising in 2-d a much higher dimensional dataset .. but it isn't amazing as only the projection onto 2 axes (topics) can be visualised. This loses too much information if other topics are significant.
• The bar chart plot of eigenvalues from the $\mathbf{\Sigma}$ matrix is a really good way of determining how many significant topics there are in a data set.
Next time we'll try the much bigger Clinton emails .. and also have look at applying the similarity graph plotting to the reconstructed SVD matrix (with reduced singular values).
## Thursday, 9 February 2017
### So Many Dimensions! .. And How To Reduce Them - Part 2/2 (SVD Demystified)
• How do we visualise lots of documents .. each with many dimensions (words) .. on a 2 dimensional plot .. so that I can see them clearly and usefully on my computer or on a book page?
• How do we extract out the underlying topics (e.g. vehicle, cooking) ... especially when the documents don't actually mention these more general topic names specifically?
With a deliberately constructed example, we saw how reducing the dimensions can achieve both of these tasks. What we didn't see is how we could reduce dimensions automatically.
A popular way to do dimension reduction (ugh, what a yucky phrase) .. is by using a mathematical process called Singular Value Decomposition (another horrible phrase).
SVD is not well explained in my opinion, so here I'll try to gently introduce the idea by slowly building up from basics that we're all familiar with. Let' see if I can make it a bit clearer ...
### What Are We Trying To Do .. the View From Above
Before we dive into details .. we should really make clear and keep in mind what we're trying to do.
We're trying to take a matrix and turn it into a smaller one. That's it.
Let's explain that ... we have matrices, just a grid of numbers, which represent the word counts of words in documents. Here's one we saw earlier, which is a 6 by 6 matrix. The word wheel appears 4 times in doc1 and doesn't appear in doc6.
We want to turn it into a smaller matrix which also somehow keeps the most essential information. We saw this too, when we reduced that 6 by 6 matrix into a smaller 2 by 6 matrix.
So mathematically we need to do something like the following diagram shows:
As we go through some of the maths and algebra .. we should keep that above image in our minds .. that's what we're ultimately trying to do, .. reduce dimensions.
### Don't Throw Away The Important Bits
Looking at that diagram above we might naively suggest throwing away all the rightmost and bottommost elements, and just keeping the top left. Well, it would certainly reduce the size of the matrix.
But this is the important bit - somehow we have to throw away only the least useful bits of information.
That's really key .. and the reason we have to go through some mathematical jiggery-pokery. If we didn't care about keeping the most useful information in the smaller reduced matrix, we could just throw away bits of the bigger matrix without too much thought.
So how do we identify the most useful bits? If we can do this, we can chuck away the least useful bits with ease. This is the reason we need a bit of maths.
### What's Important?
How do we work out which bits are important? Who says they're important? Perhaps we might disagree on what's important and what's not?
All valid questions .. so let's agree on a definition otherwise we'll get nowhere:
The important bits of a set of data are the ones that can best reconstruct the whole data set.
That means if we threw away the important bits, and looked at the remaining bits, we would not be able to imagine or reconstruct anything like the original set of data. It won't be perfect, because we have lost information by throwing something away, but hopefully the bit we chucked away wasn't crucial to the shape or nature of the original data set.
You may realise that this is a bit like lossy compression - when we squeeze images or music down into a smaller set of data .. a smaller file .. we're throwing away bits which we hope don't distort the reconstructed image or music too much.
### Identifying The Important Bits
How do we actually identify the important bits? In mathematics, sometimes it's easier to work backwards from a desired result, rather than starting from nothing and hoping to get do a desired result.
So imagine we already have a mathematical operation that transforms our smaller matrix into the big one. When we apply a transformation, we usually do this by multiplying by a matrix. Why not adding or some other arithmetic operation? Simply because multiplying matrices gives us the possibility of stretching and rotating ... but adding/subtracting only shifts data along the axes which is not so useful if we have data that is normalised or centred around an origin.
The following shows this supposed transformation by multiplying matrices visually. That yellow matrix is the magic transformation. It applies to the smaller dimensions-reduced matrix to give us the original bigger matrix.
Could this work? Well, one rule that must apply is the one we know from school maths - that the rows and columns of the multiplied matrices must be be compatible. You'll remember that when we multiply matrices:
• the answer has the same number of rows as the first matrix, and the same number of columns as the second.
• and that the number of columns of the first matrix must equal the number of rows of the second.
The following illustrates these rules. If the rules weren't true the matrices can't actually be multiplied.
Oh dear .. that means we can't have a matrix multiplied by our smaller dimension-reduced one that results in the original bigger matrix. Why? Because from the multiplication rules above, the columns of the reduced matrix must equal the rows of the original matrix .. which wouldn't be a reduction.
Ok - so we actually do need the right hand side of that equation to comply with the matrix multiplication rules. There are lots of ways you could do that, but a simple one - simple is always good - just adds another matrix on the right. Have a look at the following to see how and why you can multiply three matrices (rightmost first) and end up with an answer that is the size of the big original.
You can see that to recreate a matrix of size A columns and B rows, the yellow matrix needs to have B=D rows and the green matrix needs to have A=G columns. It doesn't mater what the size of the middle matrix is .. and we want it to be smaller!
This is good progress - we now have a mathematical way to relate a bigger matrix to a smaller one.
Even better - you can see how that middle pink matrix could be as large as the original one ... and could shrink ... and we'd still have the answer of the right size. That's what we want to do .. to do that shrinking of the matrix .. and still reconstructing the original.
Phew! What a lot of maths effort!
Now we need to know what those matrices on the right need to contain ...
### Singular Value Decomposition
It turns out that it is indeed possible to break a matrix down into three multiplied components just as we wanted to do above. Those three components are the yellow, pink and green matrices on the right hand side of the earlier diagram.
This breaking down into multiplied components is just like factorisation that we might have seen at school. For example, 24 can be factorised into 2, 3 and 4. That is $24 = 2 \cdot 3 \cdot 4$.
There may be many ways to factorise a matrix .. just like there may be many ways to factorise normal numbers ... but the special factorisation of a matrix A that we're interested in is called the Singular Value Decomposition, or just SVD. It is often written as follows:
$$\mathbf{A} = \mathbf{U} \cdot \mathbf{\Sigma} \cdot \mathbf{V}^T$$
Let's go through that expression step-by-step:
• The $\mathbf{A}$ is the matrix we want to break down into 3 multiplied pieces.
• Those 3 pieces are $\mathbf{U}$, $\mathbf{\Sigma}$, and $\mathbf{V}^T$
• The middle piece, $\mathbf{\Sigma}$, is the piece that we can change the size of, and still have a matrix $\mathbf{A}$ of the right size. This is the one we're interested in for our overall aim of reducing dimensions. Lots of people also call this $\mathbf{S}$ instead of $\mathbf{\Sigma}$.
• The $\mathbf{U}$ is an interesting matrix but we can delay talking about it until the next post. We will see it used later in this post too.
• Similarly, the $\mathbf{V}^T$ is also an interesting matrix, which we'll delay talking about until the next post. That $T$ superscript means the matrix is transposed, that is, flipped along the diagonal that runs from top left to bottom right (so the top tight element becomes the bottom left element). Why? It's just convention that came from mathematicians finding it easier to work with.
To keep this discussion clear, we won't look at why and how the SVD works ... we'll do that in the next post. For now, we'll assume it works.
In fact, SVD is so popular and powerful, it is provided by many software toolkits, including python's numpy.linalg.svd libraries.
### Reducing Dimensions
That middle element $\mathbf{\Sigma}$ is the one we'll use to reduce the dimensions of $\mathbf{A}$. We've already seen above that if we change the size of $\mathbf{\Sigma}$ we don't affect the size of $\mathbf{A}$.
But how do we know we're not going to throw away important information? We already said that we only want to throw away bits of $\mathbf{\Sigma}$ that contribute less to $\mathbf{A}$?
Again - a nice feature of SVD is that the middle element, called the matrix of singular values is diagonal. That is, all the elements are zero except those on the diagonal. So that makes throwing bits away easy, because each diagonal element only affects a row or a column of the other matrices, $\mathbf{U}$ and $\mathbf{V}^T$.
Even better - the elements of that singular value matrix are ordered by size. The biggest values are at the top left, and the smallest at the bottom right. That is really really helpful. The largest values are the ones that contribute most to the content of $\mathbf{A}$. So if we want to reduce dimensions, we want to keep these large components.
Let's say all that last bit again, but in a different way .. because it is important enough to repeat! Have a look at the following. It's just a simple statement showing how two matrices are multiplied together to give another one.
$$\begin{pmatrix} 18 & 3 \\ \end{pmatrix} = \begin{pmatrix} 2 & 3 \\ \end{pmatrix} \cdot \begin{pmatrix} 9 & 0 \\ 0 & 1 \\ \end{pmatrix}$$
You can see that the 9 in the diagonal matrix contributes to the left hand side more than the 1 does. If we wanted to chop off that 1, we'd have the following:
$$\begin{pmatrix} 18 & 0 \\ \end{pmatrix} = \begin{pmatrix} 2 & 3 \\ \end{pmatrix} \cdot \begin{pmatrix} 9 & 0 \\ 0 & 0 \\ \end{pmatrix}$$
The original matrix is changed bit. But not as much as if we had removed the 9 and kept the 1:
$$\begin{pmatrix} 0 & 3 \\ \end{pmatrix} = \begin{pmatrix} 2 & 3 \\ \end{pmatrix} \cdot \begin{pmatrix} 0 & 0 \\ 0 & 1 \\ \end{pmatrix}$$
It might be easier to see what's happening with pictures instead of numbers ... the following shows the effect on an image of a face as the singular values of the diagonal matrix are turned to zero progressively. You can see again, that the largest (top left) value maintains the broadest structure, whilst the smaller ones provide less significant information. Incidentally, this is not a bad way to compress images .. and there's a further good example here, and another here.
That shows how a diagonal matrix with the diagonal elements ordered by size, can be used to remove dimensions in a way that removes the least information first, and keeps the most important information. (There are proper rigorous proofs of this, but here we just want to give an intuition).
Enough theory .. let's try it ...
### SVD Practical Example 1 - Check Factors Rebuild Original Matrix
Let's see the SVD in action on a simple matrix first.
This python notebook on github shows how the following simple matrix is decomposed into 3 elements. The original matrix is $\mathbf{A}$:
A =
[[-1 1]
[ 1 1]]
Python numpy's function for SVD numpy.linalg.svd() uses slightly different terminology. It factors $\mathbf{A}$ into $\mathbf{u}$, $numpy.diag(s)$, and $\mathbf{v}$. Compared to the usual notation, $\mathbf{U} = \mathbf{u}$, $\mathbf{S} = numpy.diag(s)$, and $\mathbf{V}^T = \mathbf{v}$.
Applying this python SVD function to $\mathbf{A}$ gives us ...
U =
[[-0.70710678 0.70710678]
[ 0.70710678 0.70710678]]
S =
[[ 1.41421356 0. ]
[ 0. 1.41421356]]
V^T =
[[ 1. 0.]
[ 0. 1.]]
We can see a few things already:
• The $\mathbf{U}$ has rows that are unit-length vectors.
• The $\mathbf{V}^T$ has columns that are unit-length vectors.
• The $\mathbf{S}$ is indeed diagonal.
Now that we have broken $\mathbf{A}$ into $\mathbf{U} \cdot \mathbf{S} \cdot \mathbf{V}^T$, let's see if we can reconstruct $\mathbf{A}$ from these 3 factors.
Look again at the python notebook, and you'll see that multiplying $\mathbf{U}$, $\mathbf{S}$, and $\mathbf{V}^T$gives us back the original $\mathbf{A}$.
A2 =
[[-1. 1.]
[ 1. 1.]]
That's great! It is reassuring to see that actually confirmed.
Now let's look at SVD applied to the simple document set about cooking and vehicles we used before.
### SVD Practical Example 2 - Reduce Simple Document Set
Another python notebook on github illustrates SVD applied to that simple set of documents we talked about above.
The word-document matrix is
doc1doc2doc3doc4doc5doc6
wheel0.500.33330.250.00000.00000.0
seat0.250.33330.000.00000.00000.0
engine0.250.33330.750.00000.00000.0
slice0.000.00000.000.50000.50000.6
oven0.000.00000.000.33330.16670.0
boil0.000.00000.000.16670.33330.4
This is the original set of data $\mathbf{A}$ that we'll try to decompose into 3 pieces with SVD.
Applying the python numpy SVD function we get those 3 pieces:
U =
[[ 0. -0.57 -0.52 0. -0.64 0. ]
[ 0. -0.29 -0.6 0. 0.75 0. ]
[ 0. -0.77 0.61 0. 0.2 0. ]
[-0.84 0. 0. 0. 0. -0.54]
[-0.25 0. 0. -0.89 0. 0.38]
[-0.48 0. 0. 0.45 0. 0.75]]
S =
[[ 1.1 0. 0. 0. 0. 0. ]
[ 0. 1.06 0. 0. 0. 0. ]
[ 0. 0. 0.45 0. 0. 0. ]
[ 0. 0. 0. 0.29 0. 0. ]
[ 0. 0. 0. 0. 0.13 0. ]
[ 0. 0. 0. 0. 0. 0.05]]
V^T =
[[ 0. 0. 0. -0.53 -0.56 -0.63]
[-0.52 -0.51 -0.68 0. 0. 0. ]
[-0.57 -0.38 0.72 0. 0. 0. ]
[ 0. 0. 0. -0.77 0.01 0.64]
[-0.63 0.77 -0.1 0. 0. 0. ]
[-0. -0. -0. -0.35 0.83 -0.44]]
We make the same observations about these 3 matrices:
• The $\mathbf{U}$ has rows that are unit-length vectors.
• The $\mathbf{V}^T$ has columns that are unit-length vectors.
• The $\mathbf{S}$ is indeed diagonal, and this time we can see the values ordered by size, the largest being 1.1 and the smallest 0.05.
Now let's do the important bit - reducing dimensions. We previously wanted to reduce this 6 dimensional data into a more easily viewable 2 dimensions. We do this by only keeping 2 dimensions of $\mathbf{S}$ and calling it $\mathbf{S}_{reduced}$.
S2 =
[[ 1.1 0. 0. 0. 0. 0. ]
[ 0. 1.06 0. 0. 0. 0. ]
[ 0. 0. 0. 0. 0. 0. ]
[ 0. 0. 0. 0. 0. 0. ]
[ 0. 0. 0. 0. 0. 0. ]
[ 0. 0. 0. 0. 0. 0. ]]
You can see that we keep only the values 1.1 and 1.06 and zero everything else.
What do we do with this new singular matrix? Well, we use it to create new views of the documents and the words. How? Look at the following digram showing what happens when we multiply $\mathbf{S}$ with one of the other two matrices.
You can see that:
• Calculating $\mathbf{S}_{reduced} \cdot \mathbf{V}^T$ is combining a simple scaling factor (the diagonal elements of $\mathbf{S}$) with the column vectors of $\mathbf{V}^T$. That means we still have a vertical vector, which in the original $\mathbf{A}$ is the document view. So this calculated vector is a new document view.
• Calculating $\mathbf{U} \cdot \mathbf{S}_{reduced}$ is combining a simple scaling factor (the diagonal elements of $\mathbf{S}$) with the row vectors of $\mathbf{U}$. Again, this means we still have a row vector, which in the original $\mathbf{A}$ is the word view. So this calculated vector is a new word view.
For a more mathematical explanation see the update at the bottom of this blog.
Let's look at the new view of the documents, which is $\mathbf{S}_{reduced} \cdot \mathbf{V}^T$. This turns out to be:
S_reduced_VT =
[[ 0. 0. 0. -0.58 -0.62 -0.7 ]
[-0.55 -0.54 -0.72 0. 0. 0. ]
[ 0. 0. 0. 0. 0. 0. ]
[ 0. 0. 0. 0. 0. 0. ]
[ 0. 0. 0. 0. 0. 0. ]
[ 0. 0. 0. 0. 0. 0. ]]
You can see this is really a 2-dimensional set of data ... each column, which represents a document, has only 2 features now. That's a direct result of zeroing all but the top2 elements of the diagonal $\mathbf{S}$.
Let's plot this to see what it looks like:
We see those 6 documents now squished into a 2-dimensional plane (the above graph), and we can see 2 clusters of documents. Those are the two natural groups of documents about vehicles and documents about cooking.
Nice!
Let's now look at the reduced view of the words, which is $\mathbf{U} \cdot \mathbf{S}_{reduced}$. This turns out to be:
012345
wheel0.00-0.600.00.00.00.0
seat0.00-0.300.00.00.00.0
engine0.00-0.810.00.00.00.0
slice-0.930.000.00.00.00.0
oven-0.270.000.00.00.00.0
boil-0.530.000.00.00.00.0
Remember you can follow the code at the notebook on github.
Anyway - what does that view tell us? Well, we can see there are 2 topics:
• The first topic is mostly influenced by the word slice, and then a bit by boil, and not much by oven. That topic is ... cooking. You can see that slice has a contributing factor of -0.93, compared to say oven which as a smaller -0.27.
• The other topic is mostly made of up the word engine, then wheel and some seat. That topic is ... vehicles.
What we have just done is exact those topics out without having to know what they were beforehand.
Nice!
### Try SVD on a Slightly Bigger (3 Topic) Document Set
Let's see if this SVD method works for a slightly bigger set of documents. Let's try it. We'll take the above small set of documents and add a 3 more representing a new topic, home, made of words like door, kitchen and roof.
You can follow the code in a notebook on githhub.
Here's the word document matrix:
doc1doc2doc3doc4doc5doc6doc7doc8doc9
wheel0.500.33330.250.00000.00000.00.00.000.00
seat0.250.33330.000.00000.00000.00.00.250.00
engine0.250.33330.750.00000.00000.00.00.000.00
slice0.000.00000.000.50000.50000.60.00.000.00
oven0.000.00000.000.33330.16670.00.50.000.00
boil0.000.00000.000.16670.33330.40.00.000.00
door0.000.00000.000.00000.00000.00.00.250.25
kitchen0.000.00000.000.00000.00000.00.50.250.25
roof0.000.00000.000.00000.00000.00.00.250.50
We take he SVD but use the top 3 element of the singular matrix, not 2. Why? Well, we could use 2 if all we wanted to do was plot the new document view on a 2d plot. But we want to spot 3 topics so we need 3 elements of the singular matrix to transform the words into topics. In real life, with unknown data, we would explore the data and views with different cuts of S to see whether meaningful topics emerged.
The 2d plot of documents clearly shows 3 clusters .. which is what we expected. Great!
And the topic view is as follows:
012345678
wheel-0.010.6-0.04000000
seat-0.020.330.11000000
engine-0.020.81-0.09000000
slice-0.91-0.03-0.15000000
oven-0.3700.29000000
boil-0.52-0.02-0.1000000
door-0.020.030.27000000
kitchen-0.120.040.57000000
roof-0.030.040.41000000
The above shows that the 2 new axes are linear combinations of existing words:
• topic 1 is mostly slice and some boil
• topic 2 is mostly engine and some wheel
• topic 3 is mostly kitchen and some roof
### Conclusion
We've seen the broad intuition behind reducing dimensions by using a special method of factorising matrices which allows us to separate out the key contributing elements and zeroing the ones we think are less informative.
We've seen practically how to:
• visualise high dimensional data in much lower dimensions .. 6-dimensional documents on to a 2-d plot.
• extract out the key topics in terms of the words they contain, without knowing what they are beforehand.
In this post we've said that the SVD can do it's magic but not really explained why or how .. that's a more mathematical discussion which we'll do in the next / future post.
### Update
I wasn't 100% convinced by the many books and guides that say that $\mathbf{S}_{reduced} \cdot \mathbf{V}^T$ is a document view in the new, dimension-reduced space. Similarly, the explanations for why $\mathbf{U} \cdot \mathbf{S}_{reduced}$ is a new word view in the dimension-reduced space weren't quite believable.
So I kept looking for a good explanation. The best I could find is the following from one of my textbooks:
At the bottom of that section it says:
"we can see that ... $A'^TA' = VD'^TD'V^T$ .. and thus the representation
of documents in the low-dimensional LSI space is given by the rows of the $VD^T$ matrix"
How is this?
Let's start with what they call $A'^TA'$. That's our original matrix $\mathbf{A}$ but the version that's reconstructed from the reduced $\mathbf{S}$. So we can say
$$\mathbf{A'} = \mathbf{U} \cdot \mathbf{\Sigma'} \cdot \mathbf{V}^T$$
Notice the symbol ' used to remind us we're talking about the reduced dimension versions of these.
Fine .. but what's the obsession with $A'^TA'$? We'll let's go back to our original $\mathbf{A}$. What is $\mathbf{A^T} \cdot \mathbf{A}$? Well, it's a matrix where each element is arrived at by combining two vectors, one from $\mathbf{A^T}$ and one from $\mathbf{A}$. The way matrices are multiplied, this is effectively combining document vectors ... vectors of each document, containing word-counts. The highest values will be those pairs which have high values for word-counts for each document vector... in other words, $\mathbf{A^T} \cdot \mathbf{A}$ is a document-document correlation matrix. It shows which documents are most similar to each other.
To see this even more clearly, that first part of the multiplication is $\mathbf{A^T}$ .. that transpose makes sure that when we do the matrix multiplication, it is the document vector (vertical in $\mathbf{A}$) that is used.
So,
$$\mathbf{A^T} \cdot \mathbf{A}$$
is a document-document correlation matrix.
You can see that if we swapped rows for columns when multiplying those matrices, we'd get a word-word correlation matrix:
$$\mathbf{A} \cdot \mathbf{A^T}$$
where the order is reverse.
Phew ... what's that got to do with the question? Well, if we expand out the algebra for what $\mathbf{A}$ is .. $\mathbf{A} = \mathbf{U} \cdot \mathbf{\Sigma} \cdot \mathbf{V}^T$ .. we can say ..
$$\mathbf{A} = \mathbf{U} \cdot \mathbf{\Sigma} \cdot \mathbf{V}^T$$
$$\mathbf{A^T} \cdot \mathbf{A} = (\mathbf{V} \cdot \mathbf{\Sigma}^T \cdot \mathbf{U}^T) \cdot (\mathbf{U} \cdot \mathbf{\Sigma} \cdot \mathbf{V}^T)$$
.. simplifying because $\mathbf{U}^T \cdot \mathbf{U} = 1$ by definition ... and also $\mathbf{\Sigma}^T = \mathbf{\Sigma}$ because it's diagonal ...
$$\mathbf{A^T} \cdot \mathbf{A} = (\mathbf{V} \cdot \mathbf{\Sigma}) \cdot (\mathbf{\Sigma} \cdot \mathbf{V}^T)$$
Now from this we can see that the right hand side is a document-document correlation matrix, with two similar bits, one which is the transpose of the other. So we can relate $\mathbf{A}$ with $\mathbf{\Sigma} \cdot \mathbf{V}^T$ in some way.
Or in the new low-dimensional space, $\mathbf{A'}$ with $\mathbf{\Sigma'} \cdot \mathbf{V}^T$.
That's why we can say that
$$\mathbf{\Sigma'} \cdot \mathbf{V}^T$$
is a document-view like matrix. I think ... I'm convinced ... are you?
### Update 2
I am convinced now .. I've found many more papers that explain the justification for why $\mathbf{\Sigma'} \cdot \mathbf{V}^T$ is a document-view like matrix.
(click to enlarge)
Notice how that top paper makes clear that $\mathbf{\Sigma'} \cdot \mathbf{V}^T$ is not the same as $\mathbf{A}$ or $\mathbf{A}^T$ .. just that the correlations work out the same:
$$\mathbf{A^T} \cdot \mathbf{A} = (\mathbf{V} \cdot \mathbf{\Sigma}) \cdot (\mathbf{\Sigma} \cdot \mathbf{V}^T)$$
|
2018-02-24 17:33:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7415056228637695, "perplexity": 2918.9910896837328}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815918.89/warc/CC-MAIN-20180224172043-20180224192043-00386.warc.gz"}
|
https://aakashdigitalsrv1.meritnation.com/ask-answer/expert-answers/coordinate-geometry/ch134/page:7
|
Subject: Maths, asked on 4/2/18
## Q 39 39. Find the points on the x-axis which are at a distance of $2\sqrt{5}$ from the point (7, 4). How many such points are there ?
Subject: Maths, asked on 4/2/18
## Show that the point (3,2),(0,5) , (-3,2) & (0,-1) are the vertices of a square
Subject: Maths, asked on 1/2/18
## 16. If (3K-1, K-2) , (K,K-7) and (K-1, -K-2 ) are collinear . Find K
Subject: Maths, asked on 31/1/18
## Solve Question no. 4 using distance formula Q4. Find the value of m if the points (5 , 1) (–2, –3) and (8 , 2m ) are collinear.
Subject: Maths, asked on 31/1/18
## Question no. 3 Q3. Find a point which is equidistant from the points A (– 5, 4) and B (– 1, 6). How many such points are there ?
Subject: Maths, asked on 30/1/18
## 23rd Q23. Draw the graph of the equation : x + 3y = 15 and 2x – 3y = – 6 Determine the coordinates of the of the verikes triangle formed by these lines and y – axis. Also shade the triangular region.
Subject: Maths, asked on 29/1/18
## AOBC is a rectangle whose three vertices are A(0,3) O(0,0) B(5,0). Find coordinate of C through Distance formula.
Subject: Maths, asked on 28/1/18
## Find the coordinates of the point P on AB such that PA/PB = 3/7, where A (-3,1) and B (2,5).
Subject: Maths, asked on 28/1/18
## Question no 17 & 18
Subject: Maths, asked on 28/1/18
## Pls answer this The points A(4, -2), B(7, 2), C(0, 9) and D(-3, 5) form a parallelogram. Find the length of the altitude of the parallelogram on the base AB.
Subject: Maths, asked on 26/1/18
## Q.16
Subject: Maths, asked on 26/1/18
## Q.30. Find the equation of the straight line bisecting the segment joining the points (5, 3) and (4, 4) and making an angel $45°$ with positive direction of x-axis.
Subject: Maths, asked on 26/1/18
## Q.29. If are real roots of equation ${x}^{3}-3p{x}^{2}+3qx-1=0$, then find the centroid of a triangle whose vertices are .
Subject: Maths, asked on 26/1/18
## Q.13. If the line (3x - 8y + 5) + $\lambda$(5x - 3y + 10) = 0 is parallel to x-axis, then find the value of $\lambda$?
Subject: Maths, asked on 26/1/18
## Q.5. Find the area enclosed by $2\left|x\right|+3\left|y\right|\le 6$.
What are you looking for?
|
2021-05-17 08:55:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.609412670135498, "perplexity": 2079.8228191867775}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992159.64/warc/CC-MAIN-20210517084550-20210517114550-00541.warc.gz"}
|
https://www.physicsforums.com/threads/how-do-i-prove-an-either-or-inequality.903739/
|
# How do I prove an either/or inequality?
1. Feb 12, 2017
### Eclair_de_XII
1. The problem statement, all variables and given/known data
"Given: $a,b,c∈ℤ$,
Prove: If $2a+3b≥12m+1$, then $a≥3m+1$ or $b≥2m+1$."
2. Relevant equations
$P:a≥3m+1$
$Q:b≥2m+1$
$R:2a+3b≥12m+1$
3. The attempt at a solution
Goal: $~(P∨Q)≅(~P)∧(~Q)⇒~R$
Assume that $a<3m+1$ and $b<2m+1$. Then $2a+3b<2(3m+1)+3(2m+1)=12m+5$. But this doesn't necessarily imply that $2a+3b>12m+1$. Can someone help me connect the dots?
2. Feb 12, 2017
### FactChecker
The problem doesn't ask you to prove that. Your proof by contradiction is complete as it is.
CORRECTION: I should have said proof by contrapositive.
CORRECTION 2: I completely missed the point that the OP proves <12m+5 but it needs <12m+1
Last edited: Feb 13, 2017
3. Feb 12, 2017
### Q.B.
Hi,
Your statement must be wrong since $a=5,b=1$ and $m=1$ clearly violate it.
4. Feb 12, 2017
### FactChecker
Note. Tildas are difficult to do in LaTeX. I don't know how to do them. Here is a readable version of your Goal.
5. Feb 12, 2017
### Eclair_de_XII
I was trying to do proof by contra-positive, but I ended up with an inequality in my original post that says nothing about my original statement. I figured that it was only conditionally true. So I ended up with a statement just saying that the statement is false when at least one of the inequalities is false; then I gave a counter-example.
6. Feb 13, 2017
### Ray Vickson
Since $a, b, m$ are all integers, you can re-write the three inequalities in the original question by first removing the "+1" on all three right-hand-sides and replacing "≥" by ">". That works because both sides are integers. It is worth doing---try it and see.
7. Feb 13, 2017
### FactChecker
If a < 3m+1 and you are restricted to integers, then you know that a ≤ A = 3m. Likewise b < 2m+1 implies b ≤ B = 2m. Compare 2a+3b with A+B and 12m+1.
CORRECTION: Should have said "Compare 2a+3b with 2A+3B and 12m+1."
Last edited: Feb 14, 2017
8. Feb 13, 2017
### Eclair_de_XII
Well, that simplified my argument by a whole lot more. Thanks, too, @FactChecker.
|
2017-12-15 22:45:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6174337267875671, "perplexity": 1205.1037978855488}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948579567.73/warc/CC-MAIN-20171215211734-20171215233734-00375.warc.gz"}
|
https://gmatclub.com/forum/an-investment-compounds-annually-at-an-interest-rate-of-34-1-what-is-308125.html
|
GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 16 Oct 2019, 22:41
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# An investment compounds annually at an interest rate of 34.1% What is
Author Message
TAGS:
### Hide Tags
Math Expert
Joined: 02 Sep 2009
Posts: 58391
An investment compounds annually at an interest rate of 34.1% What is [#permalink]
### Show Tags
14 Oct 2019, 21:42
1
1
00:00
Difficulty:
35% (medium)
Question Stats:
61% (01:48) correct 39% (01:40) wrong based on 39 sessions
### HideShow timer Statistics
Competition Mode Question
An investment compounds annually at an interest rate of 34.1% What is the smallest investment period by which time the investment will more than triple in value?
A. 3
B. 4
C. 6
D. 9
E. 12
_________________
Director
Joined: 20 Jul 2017
Posts: 907
Location: India
Concentration: Entrepreneurship, Marketing
WE: Education (Education)
Re: An investment compounds annually at an interest rate of 34.1% What is [#permalink]
### Show Tags
14 Oct 2019, 22:43
1
Let the initial amount = p
After 'n' years, Amount = p(1 + 34.1/100)^n = p(1.341)^n
When n = 2, Amount = 1.75p approx
When n = 4, Amount = 3.06p approx
--> n = 4
IMO Option B
Senior Manager
Joined: 18 May 2019
Posts: 357
Re: An investment compounds annually at an interest rate of 34.1% What is [#permalink]
### Show Tags
14 Oct 2019, 23:03
1
For simplicity of calculation, I would approximate 34.1% to 34%.
After year 1, The total amount will be 1.34 times the invested amount
After year 2, the total amount will be 1.34*1.34 = 1.7956 approximately 1.80 times the invested amount
After year 3, the total amount will be 1.34*1.80 = 2.412, approximately 2.4 times the invested amount
After year 4, the total amount will be 1.34*2.4 = 3.216, and this is more than 3 times the invested amount.
Therefore the amount has to be invested for at least 4 years at the compound annual interest rate of 34.1% in order to just earn more than three times the invested amount.
PS. I would be looking forward to shorter alternatives, as this approach is not optimal in my view.
Manager
Joined: 01 Mar 2019
Posts: 155
Location: India
Concentration: Strategy, Social Entrepreneurship
Schools: Ross '22, ISB '20, NUS '20
GPA: 4
Re: An investment compounds annually at an interest rate of 34.1% What is [#permalink]
### Show Tags
14 Oct 2019, 23:09
1
Initial Investment be x
Total amount = x(1.34)^y
Where y is time of investment
Above value have to become 3x
= (1.34*1.34*1.34*1.34) = 3.22x
So y have to be 4
OA:B
Posted from my mobile device
_________________
Appreciate any KUDOS given
GMAT Club Legend
Joined: 18 Aug 2017
Posts: 5009
Location: India
Concentration: Sustainability, Marketing
GPA: 4
WE: Marketing (Energy and Utilities)
An investment compounds annually at an interest rate of 34.1% What is [#permalink]
### Show Tags
Updated on: 15 Oct 2019, 21:23
CI=p*(1+r/100)'t
so R=34.1%
P=1 target P =4
will happen
IMO B
An investment compounds annually at an interest rate of 34.1% What is the smallest investment period by which time the investment will more than triple in value?
A. 3
B. 4
C. 6
D. 9
E. 12
Originally posted by Archit3110 on 15 Oct 2019, 00:57.
Last edited by Archit3110 on 15 Oct 2019, 21:23, edited 1 time in total.
Director
Joined: 24 Nov 2016
Posts: 589
Location: United States
Re: An investment compounds annually at an interest rate of 34.1% What is [#permalink]
### Show Tags
15 Oct 2019, 04:48
1
Quote:
An investment compounds annually at an interest rate of 34.1% What is the smallest investment period by which time the investment will more than triple in value?
A. 3
B. 4
C. 6
D. 9
E. 12
34.1% is close to 33.3% = 1/3
$$(1+1/3)^n≥3…(4/3)^n≥3$$
$$n=3…(4/3)^3=64/27<3$$
$$n=4…(4/3)^4=256/81=aprox.260/80=aprox.26/8>3$$
Manager
Joined: 10 Apr 2018
Posts: 223
Location: India
Concentration: Entrepreneurship, Strategy
GMAT 1: 680 Q48 V34
GPA: 3.3
Re: An investment compounds annually at an interest rate of 34.1% What is [#permalink]
### Show Tags
15 Oct 2019, 06:26
1
Quote:
An investment compounds annually at an interest rate of 34.1% What is the smallest investment period by which time the investment will more than triple in value?
A. 3
B. 4
C. 6
D. 9
E. 12
Let the principal amount be p and time period of investment be t.
So, 3p=p$$(1+\frac{34.1}{100})^t$$
=> 3=$$(\frac{134.1}{100})^t$$
=>3=$$(1.341)^t$$
For t=3, $$(1.341)^t$$=2.41 and for t=4, $$(1.341)^t$$=3.23
Thus, smallest investment period is 4 years.
Therefore, the correct answer is option B.
Manager
Joined: 22 Feb 2018
Posts: 111
Re: An investment compounds annually at an interest rate of 34.1% What is [#permalink]
### Show Tags
15 Oct 2019, 08:40
1
Imo. B
Initial investment = p, final amount (triple) = 3p, r = 34.1 % and n=?
3p = p (1+34.1/100)n
3 = (1.341)n, as p cancels out from the equation
Let's workout from answer choices to get the answer in quickest way,
Let's try, n=3, 1.3(41)*1.3 = 1.69, 1.69*1.3 = 1.6 * 1.3 = 2.xy, So, A is not the answer
Let's try, n=4, 1.34*1.34 = ~ 1.8, 1.8*1.8 = 3.xy, So, B can be answer.
Rest options are greater than 4, hence, it can satisfy the equation. But, we need smallest investment period to triple the investment. Hence, n = 4.
A. 3
B. 4
C. 6
D. 9
E. 12
Senior Manager
Joined: 07 Mar 2019
Posts: 321
Location: India
GMAT 1: 580 Q43 V27
WE: Sales (Energy and Utilities)
Re: An investment compounds annually at an interest rate of 34.1% What is [#permalink]
### Show Tags
15 Oct 2019, 11:11
1
An investment compounds annually at an interest rate of 34.1% What is the smallest investment period by which time the investment will more than triple in value?
A. 3
B. 4
C. 6
D. 9
E. 12
Here Rule of 72 can be applied. The rule states that number '72' if divided by number of interest rate(34.1) would give a number of period in which the investment amount doubles. So to triple the investment amount we simply multiple the resultant number by 3/2. Hence
$$\frac{72}{34.1} * \frac{3}{2}$$ = 3.16
Since our investment period is annual, the smallest investment period is 4.
Alternatively,
Let investment = 100
Rate r = 34.1%
Initial amount = 100
Amount after 1st year = 134.1
Amount after 2nd year = 179.8
Amount after 3rd year = 241.1
Amount after 4th year = 323.3 (more than triple)
P.S.: In finance rule of 72 is not accurate so other measures are taken to make it as accurate as possible
_________________
Ephemeral Epiphany..!
GMATPREP1 590(Q48,V23) March 6, 2019
GMATPREP2 610(Q44,V29) June 10, 2019
GMATPREPSoft1 680(Q48,V35) June 26, 2019
Manager
Joined: 17 Jan 2019
Posts: 92
Re: An investment compounds annually at an interest rate of 34.1% What is [#permalink]
### Show Tags
15 Oct 2019, 19:17
1
2 times: 134.1*1.341 approximately 180
3 times: 180*1.341 is approximately 240
Therefore 4 times to get more than 300%
B
Re: An investment compounds annually at an interest rate of 34.1% What is [#permalink] 15 Oct 2019, 19:17
Display posts from previous: Sort by
|
2019-10-17 05:41:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6241158843040466, "perplexity": 6603.836269931596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672723.50/warc/CC-MAIN-20191017045957-20191017073457-00445.warc.gz"}
|
https://brilliant.org/problems/see-if-you-can-look-at-this-problem-from-another/
|
# See if you can look at this problem from another perspective.....
Algebra Level 1
$^6\sqrt{4096} = ?$
×
|
2020-08-12 09:55:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9377158880233765, "perplexity": 1014.1499024865275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738888.13/warc/CC-MAIN-20200812083025-20200812113025-00198.warc.gz"}
|
https://www.semanticscholar.org/paper/Competing-phases-in-spin-ladders-with-ring-exchange-Metavitsiadis-Eggert/ec0ac8c2e5f85da71c84cc185b563c742f9959b6
|
# Competing phases in spin ladders with ring exchange and frustration
@article{Metavitsiadis2017CompetingPI,
title={Competing phases in spin ladders with ring exchange and frustration},
journal={Physical Review B},
year={2017},
volume={95},
pages={144415}
}
• Published 30 November 2016
• Physics
• Physical Review B
The ground state properties of spin-1/2 ladders are studied, emphasizing the role of frustration and ring exchange coupling. We present a unified field theory for ladders with general coupling constants and geometry. Rich phase diagrams can be deduced by using a renormalization group calculation for ladders with in--chain next nearest neighbor interactions and plaquette ring exchange coupling. In addition to established phases such as Haldane, rung singlet, and dimerized phases, we also observe…
10 Citations
## Figures from this paper
Transition to disordered phase and spin dynamics in the two-dimensional ferrimagnetic model
• L. S. Lima
• Physics
Journal of Magnetism and Magnetic Materials
• 2018
Abstract The effect of transition to disordered phase on AC conductivity in the two-dimensional frustrated ferrimagnetic model in the square lattice at T = 0 is investigated. The spin conductivity, σ
Quantum phase transition of the Bose-Hubbard model with anisotropic hopping on a cubic lattice
• Physics
• 2020
In a quantum many-body system, dimensionality plays a critical role in determining the type of quantum phase transition. To study the quantum system during dimensional crossover, we studied the
Multiparticle Interactions for Ultracold Atoms in Optical Tweezers: Cyclic Ring-Exchange Terms.
• Medicine, Physics
Physical review letters
• 2020
A protocol showing how SU(N)-invariant multibody interactions can be implemented in optical tweezer arrays is proposed, and how the proposed protocol can be utilized to implement the strongly frustrated J-Q model, a candidate for hosting a deconfined quantum critical point.
Ground-state phase diagram of the frustrated spin- 12 two-leg honeycomb ladder
• Physics
Physical Review B
• 2018
We investigate a spin-$1/2$ two-leg honeycomb ladder with frustrating next-nearest-neighbor (NNN) coupling along the legs, which is equivalent to two $J_1$-$J_2$ spin chains coupled with $J_\perp$ at
Enhancement of magnetization plateaus in low-dimensional spin systems
• Physics
Physical Review B
• 2020
We study the low-energy properties and, in particular, the magnetization process of a spin-1/2 Heisenberg J₁−J₂ sawtooth and frustrated chain (also known as a zigzag ladder) with a spatially
The specific heat and magnetic properties of a spin-1/2 ladder including butterfly-shaped molecular cages
The specific heat, structural characterization, and magnetic property studies of a new spin configuration of butterfly-shaped molecular cages are reported. The model introduced here is an infinite
The specific heat and magnetic properties of two species of spin-1/2 ladders with butterfly-shaped unit blocks.
• Physics, Medicine
Journal of physics. Condensed matter : an Institute of Physics journal
• 2019
It is proved that the reentrance points can be satisfactorily considered as magnetization plateau witnesses even in the high temperatures and that the fluctuations of the specific heat with respect to the magnetic field are in highly accordance with the magnetization plateaus and magnetization jumps.
Intrinsic jump character of first-order quantum phase transitions
• Physics
Physical Review B
• 2019
We find that the first-order quantum phase transitions~(QPTs) are characterized by intrinsic jumps of relevant operators while the continuous ones are not. Based on such an observation, we propose a
Entanglement of Hard‐Core Bosons on the Honeycomb Lattice
• Physics
physica status solidi (b)
• 2019
The entanglement of hard-core bosons in square and honeycomb lattices with nearest-neighbor interactions is estimated by means of quantum Monte Carlo simulations and spin-wave analysis. The
Thermal properties of rung-disordered two-leg quantum spin ladders: Quantum Monte Carlo study.
• Physics, Medicine
Physical review. E
• 2020
A two-leg quenched random bond disordered antiferromagnetic spin-1/2 Heisenberg ladder system is investigated by means of stochastic series expansion quantum Monte Carlo method and has a special temperature point at which the specific heat takes the same value regardless of the strength of the disorder.
## References
SHOWING 1-10 OF 72 REFERENCES
Quantum dimer phases in a frustrated spin ladder
• Physics
• 2006
The phase diagram of a frustrated S = 1/2 antiferromagnetic spin ladder with additional nextnearest neighbor exchanges, both diagonal and inchain, is studied by a weak-coupling effective field theory
Quantum phase diagram of the frustrated spin ladder with next-nearest-neighbor interactions
• Physics
• 2012
Using the density matrix renormalization group technique, we investigate the quantum phase diagram of a ferromagnetic frustrated two-leg spin-1/2 ladder with diagonal and in-chain
Quantum phase transitions in multileg spin ladders with ring exchange
• Physics
• 2013
Four-spin exchange interaction has been raising intriguing questions regarding the exotic phase transitions it induces in two-dimensional quantum spin systems. In this context, we investigate the
Phase Diagram of the Heisenberg Spin Ladder with Ring Exchange
• Physics
• 2004
We investigate the phase diagram of a generalized spin-½ quantum antiferromagnet on a ladder with rung, leg, diagonal, and ring-exchange interactions. We consider the exactly soluble models
Search for quantum dimer phases and transitions in a frustrated spin ladder
• Physics
• 2006
A two-leg spin-1/2 ladder with diagonal interactions is investigated numerically. We focus our attention on the possibility of columnar dimer phase, which was recently predicted based on a
Elementary excitations in the gapped phase of a frustrated S = 1/2 spin ladder: from spinons to the Haldane triplet
• Chemistry, Physics
• 1998
We use the variational matrix-product ansatz to study elementary excitations in the ladder with additional diagonal coupling, equivalent to a single chain with alternating exchange and
Perturbation theories for the S = 1 2 spin ladder with a four-spin ring exchange
• Physics
• 2002
The isotropic S = ½ antiferromagnetic spin ladder with additional four-spin ring exchange is studied per-turbatively in the strong coupling regime with the help of cluster expansion technique, and by
Dimerized phase in the cross-coupled antiferromagnetic spin ladder
• Physics
• 2012
We revisit the phase diagram of the frustrated s=1/2 spin ladder with antiferromagnetic rung and diagonal couplings. In particular, we reexamine the evidence for the columnar dimer phase, which has
Topological order, dimerization, and spinon deconfinement in frustrated spin ladders
• Physics
• 2008
We consider topological order and dimer order in several frustrated spin ladder models, which are related to higher dimensional models of current interest; we also address the occurrence of
Phase diagram of the frustrated spin ladder
• Physics
• 2010
We re-visit the phase diagram of the frustrated spin-1/2 ladder with two competing inter-chain antiferromagnetic exchanges, rung coupling J_\perp and diagonal coupling J_\times. We suggest, based on
|
2022-01-24 22:35:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5264438986778259, "perplexity": 4358.363185320355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304686.15/warc/CC-MAIN-20220124220008-20220125010008-00617.warc.gz"}
|
https://kerodon.net/tag/01ER
|
Kerodon
$\Newextarrow{\xRightarrow}{5,5}{0x21D2}$ $\newcommand\empty{}$
4.4.1 Isofibrations of $\infty$-Categories
Let us begin by reviewing a bit of classical category theory.
Definition 4.4.1.1. Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a functor between categories. We say that $F$ is an isofibration if it satisfies the following condition:
$(\ast )$
For every object $C \in \operatorname{\mathcal{C}}$ and every isomorphism $u: D \rightarrow F(C)$ in the category $\operatorname{\mathcal{D}}$, there exists an isomorphism $\overline{u}: \overline{D} \rightarrow C$ in the category $\operatorname{\mathcal{C}}$ satisfying $F(\overline{u}) = u$.
Example 4.4.1.2. Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a functor between categories. If $F$ is a fibration in groupoids (or an opfibration in groupoids), then $F$ is an isofibration. For a more general statement, see Example 4.4.1.10.
The notion of isofibration is self-dual:
Proposition 4.4.1.3. Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a functor between categories. Then $F$ is an isofibration if and only if the opposite functor $F^{\operatorname{op}}: \operatorname{\mathcal{C}}^{\operatorname{op}} \rightarrow \operatorname{\mathcal{D}}^{\operatorname{op}}$ is an isofibration.
Proof. Assume that $F$ is an isofibration; we will show that $F^{\operatorname{op}}$ is also an isofibration (the reverse implication follows by the same argument). Fix an object $C \in \operatorname{\mathcal{C}}$ and an isomorphism $u: F(C) \rightarrow D$ in the category $\operatorname{\mathcal{D}}$. Since $F$ is an isofibration, the inverse isomorphism $u^{-1}: D \rightarrow F(C)$ can be lifted to an isomorphism $v: \overline{D} \rightarrow C$ in the category $\operatorname{\mathcal{C}}$. Then $v^{-1}: C \rightarrow \overline{D}$ satisfies $F( v^{-1} ) = u$. $\square$
We now introduce an $\infty$-categorical counterpart of Definition 4.4.1.1.
Definition 4.4.1.4. Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a functor between $\infty$-categories. We say that $F$ is an isofibration if it is an inner fibration (Definition 4.1.1.1) which satisfies the following additional condition:
$(\ast )$
For every object $C \in \operatorname{\mathcal{C}}$ and every isomorphism $u: D \rightarrow F(C)$ in the category $\operatorname{\mathcal{D}}$, there exists an isomorphism $\overline{u}: \overline{D} \rightarrow C$ in the category $\operatorname{\mathcal{C}}$ satisfying $F(\overline{u}) = u$.
Example 4.4.1.5. Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a functor between ordinary categories. Then $F$ is an isofibration (in the sense of Definition 4.4.1.1) if and only if the induced map of simplicial sets $\operatorname{N}_{\bullet }(F): \operatorname{N}_{\bullet }(\operatorname{\mathcal{C}}) \rightarrow \operatorname{N}_{\bullet }(\operatorname{\mathcal{D}})$ is an isofibration of $\infty$-categories. This follows from the observation that $\operatorname{N}_{\bullet }(F)$ is automatically an inner fibration (see Proposition 4.1.1.10).
Example 4.4.1.6. Let $\operatorname{\mathcal{C}}$ be an $\infty$-category and let $\operatorname{\mathcal{D}}$ be an ordinary category. By virtue of Proposition 4.1.1.10, every functor $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{N}_{\bullet }(\operatorname{\mathcal{D}})$ is automatically an inner fibration. If every isomorphism in $\operatorname{\mathcal{D}}$ is an identity morphism, then $F$ is also an isofibration. In particular, every functor of $\infty$-categories $\operatorname{\mathcal{C}}\rightarrow \Delta ^ n$ is automatically an isofibration.
Proposition 4.4.1.7. Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be an inner fibration between $\infty$-categories. Then $F$ is an isofibration of $\infty$-categories (in the sense of Definition 4.4.1.4) if and only if the induced functor of homotopy categories $f: \mathrm{h} \mathit{\operatorname{\mathcal{C}}} \rightarrow \mathrm{h} \mathit{\operatorname{\mathcal{D}}}$ is an isofibration of ordinary categories (in the sense of Definition 4.4.1.1).
Proof. Assume first that $F$ is an isofibration and let $C \in \operatorname{\mathcal{C}}$ be an object, and let $[u]: D \rightarrow F(C)$ be an isomorphism in the homotopy category $\mathrm{h} \mathit{\operatorname{\mathcal{D}}}$, given by the homotopy class of some morphism $u: D \rightarrow F(C)$ in the $\infty$-category $\operatorname{\mathcal{D}}$. Then $u$ is an isomorphism, so our assumption that $F$ is an isofibration guarantees that we can lift $u$ to an isomorphism $\overline{u}: \overline{D} \rightarrow C$ in the $\infty$-category $\operatorname{\mathcal{C}}$. The homotopy class $[ \overline{u} ]$ is then an isomorphism in the homotopy category $\mathrm{h} \mathit{\operatorname{\mathcal{C}}}$ satisfying $f( [ \overline{u}] ) = [u]$. Allowing $C$ and $[u]$ to vary, we conclude that $f$ is an isofibration of ordinary categories.
Now suppose that $f$ is an isofibration, let $C \in \operatorname{\mathcal{C}}$ be an object, and let $u: D \rightarrow F(C)$ be an isomorphism in the $\infty$-category $\operatorname{\mathcal{D}}$. Then the homotopy class $[u]: D \rightarrow F(C)$ is an isomorphism in the homotopy category $\mathrm{h} \mathit{\operatorname{\mathcal{D}}}$. Invoking our assumption that $f$ is an isofibration, we conclude that there exists an isomorphism $[v]: \overline{D} \rightarrow C$ in the homotopy category $\mathrm{h} \mathit{\operatorname{\mathcal{C}}}$ satisfying $f( [v] ) = [u]$. Then $[v]$ can be realized as the homotopy class of some morphism $v: \overline{D} \rightarrow C$ in the $\infty$-category $\operatorname{\mathcal{C}}$, which is automatically an isomorphism. The equation $f([v]) = [u]$ guarantees that there exists a homotopy from $F(v)$ to $u$ in the $\infty$-category $\operatorname{\mathcal{D}}$, given by a $2$-simplex $\sigma :$
$\xymatrix@R =50pt@C=50pt{ & F(C) \ar [dr]^{ \operatorname{id}_{ F(C) }} & \\ D \ar [ur]^{ F(v) } \ar [rr]^{ u } & & F(C). }$
Since $F$ is an inner fibration, it has the right lifting property with respect to the inclusion $\Lambda ^{2}_{1} \hookrightarrow \Delta ^2$. We can therefore lift $\sigma$ to a $2$-simplex $\overline{\sigma }:$
$\xymatrix@R =50pt@C=50pt{ & C \ar [dr]^{ \operatorname{id}_{C }} & \\ \overline{D} \ar [ur]^{ v } \ar [rr]^{ \overline{u} } & & C. }$
in the $\infty$-category $\operatorname{\mathcal{C}}$. Since $v$ and $\operatorname{id}_{C}$ are isomorphisms, it follows that $\overline{u}$ is an isomorphism (Remark 1.3.6.3). Allowing $C$ and $u$ to vary, we conclude that $F$ is an isofibration of $\infty$-categories. $\square$
Corollary 4.4.1.8. Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a functor between $\infty$-categories. Then $F$ is an isofibration if and only if the opposite functor $F^{\operatorname{op}}: \operatorname{\mathcal{C}}^{\operatorname{op}} \rightarrow \operatorname{\mathcal{D}}^{\operatorname{op}}$ is an isofibration.
Remark 4.4.1.9. Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ and $G: \operatorname{\mathcal{D}}\rightarrow \operatorname{\mathcal{E}}$ be isofibrations of $\infty$-categories. Then the composition $G \circ F$ is also an isofibration of $\infty$-categories (for a more general statement, see Remark 4.5.5.13).
Example 4.4.1.10. Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a right fibration between $\infty$-categories. Then $F$ is an inner fibration (Remark 4.2.1.4), and any isomorphism $u: D \rightarrow F(C)$ can be lifted to a morphism $\overline{u}: \overline{D} \rightarrow C$ in $\operatorname{\mathcal{C}}$, which is automatically an isomorphism by virtue of Proposition 4.4.2.11. It follows that $F$ is an isofibration. Similarly, any left fibration of $\infty$-categories is an isofibration. For a more general statement, see Corollary 5.7.7.5.
Example 4.4.1.11 (Replete Subcategories). Let $\operatorname{\mathcal{C}}$ be an $\infty$-category and let $\operatorname{\mathcal{C}}' \subseteq \operatorname{\mathcal{C}}$ be a subcategory (Definition 4.1.2.2). The following conditions are equivalent:
$(1)$
The inclusion functor $\operatorname{\mathcal{C}}' \hookrightarrow \operatorname{\mathcal{C}}$ is an isofibration.
$(2)$
If $u: X \rightarrow Y$ is an isomorphism in $\operatorname{\mathcal{C}}$ and the object $Y$ belongs to the subcategory $\operatorname{\mathcal{C}}'$, then the isomorphism $u$ also belongs to the subcategory $\operatorname{\mathcal{C}}'$ (and, in particular, the object $X$ also belongs to $\operatorname{\mathcal{C}}'$).
$(3)$
If $u: X \rightarrow Y$ is an isomorphism in $\operatorname{\mathcal{C}}$ and the object $X$ belongs to the subcategory $\operatorname{\mathcal{C}}'$, then the isomorphism $u$ also belongs to the subcategory $\operatorname{\mathcal{C}}'$ (and, in particular, the object $Y$ also belongs to $\operatorname{\mathcal{C}}'$).
If these conditions are satisfied, then we say that the subcategory $\operatorname{\mathcal{C}}' \subseteq \operatorname{\mathcal{C}}$ is replete.
Exercise 4.4.1.12. Let $X$ be a Kan complex, and let $Y \subseteq X$ be a simplicial subset. Show that $Y$ is a summand of $X$ (Definition 1.1.6.1) if and only if it is a replete full subcategory of $X$.
Example 4.4.1.13. Let $\operatorname{\mathcal{C}}$ be an $\infty$-category, and let $\operatorname{Isom}(\operatorname{\mathcal{C}})$ denote the full subcategory of $\operatorname{Fun}(\Delta ^1, \operatorname{\mathcal{C}})$ spanned by the isomorphisms in $\operatorname{\mathcal{C}}$. Then the subcategory $\operatorname{Isom}(\operatorname{\mathcal{C}}) \subseteq \operatorname{Fun}(\Delta ^1, \operatorname{\mathcal{C}})$ is replete. Unwinding the definitions, this amounts to the observation that for every commutative diagram
$\xymatrix@R =50pt@C=50pt{ X \ar [r]^-{u} \ar [d]^{v} & Y \ar [d]^{v'} \\ X' \ar [r]^-{u'} & Y' }$
in the $\infty$-category $\operatorname{\mathcal{C}}$ where $u$, $v$, and $v'$ are isomorphisms, the morphism $u'$ is also an isomorphism. This follows immediately from the two-out-of-three property of Remark 1.3.6.3.
|
2022-06-30 19:50:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9962347745895386, "perplexity": 110.28623897077273}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103877410.46/warc/CC-MAIN-20220630183616-20220630213616-00615.warc.gz"}
|
https://physics.stackexchange.com/questions/468139/has-the-osmium-spheres-gravity-example-ever-been-done
|
# Has the “osmium spheres” gravity example ever been done?
Calculating the motion of two dense spheres (in a frictionless vacuum) due to gravitational interaction is a classic physics problem that has come up numerous times.
Has any real-world space mission ever performed a demonstration of small body gravity? Set up an effectively weightless environment in high vacuum, nudge a pair of spheres very gently, and film their slow co-orbit with a time lapse camera.
• Search terms: Cavendish, Eötvös. – rob Mar 23 '19 at 1:13
• @rob thanks, I have clarified my question. – Foo Bar Mar 23 '19 at 13:09
• What Rob meant was that yes it has been done, and to high accuracy. Not exactly as you are thinking. A dumbbell is suspended from a fine wire that is easy to twist. Two weights are put near the ends of the dumbbell. The force of gravity is weak, but enough to twist the wire. – mmesser314 Mar 23 '19 at 14:11
• – rob Mar 24 '19 at 4:19
• Here's a simple Cavendish-like experiment, using lead rather than osmium, so it's much cheaper. ;) Bending Spacetime in the Basement. – PM 2Ring Mar 24 '19 at 4:44
Let's say that we have two spheres of density $$\rho$$, each of radius $$r$$, a distance $$2r$$ from each other. (Technically they'd have to separated by a distance $$2r + \epsilon$$, so that they weren't actually touching; but we'll ignore this.) The mass of each sphere is $$\frac{4 \pi}{3} \rho r^3$$, so the acceleration of each due to the other is $$a_\text{sphere} = \frac{G (4 \pi/3) \rho r^3 }{ (2r)^2} = \frac{\pi}{3} G \rho r.$$
But now imagine that you're at the central point between the two spheres (crammed into that $$\epsilon$$-wide gap). You're orbiting along with the spheres. Since the spheres' centers of mass are not at the same location as you, you'll see them accelerate relative to you; this is the tidal acceleration I mentioned above. The exact magnitude & direction of this tidal acceleration differ depending on the positions of you, the masses, and the Earth; but in general, its order of magnitude is $$a_\text{tidal} = \frac{G M_E r}{d^3}$$ where $$d$$ is the distance between you and the center of the Earth.
To cleanly observe the orbit the way you want to, you need this relative acceleration due to the Earth to be much smaller than the effects the spheres have on each other. In other words, their accelerations must satisfy $$a_\text{sphere} > a_\text{tidal}.$$ After some algebra, this implies that $$d > \sqrt[3]{M_E/\rho}$$ If $$\rho$$ is the density of osmium, this implies that $$d \gg$$ 6430 km or so. But the radius of the Earth is about 6370 km. In other words, to do this experiment, the distance between you and the center of the Earth must be much greater than the radius of the Earth.
This means that to do this experiment, a simple satellite in low Earth orbit wouldn't suffice; you'd need to launch into a relatively high orbit instead, since you want $$a_\text{tidal}$$ to be quite small compared to $$a_\text{sphere}$$. A geosynchronous orbit would be better; at that distance from the Earth, the effects of the tidal forces are only about 0.4% of the effects of the spheres on each other. But given the expense of launching to geosynchronous orbit, I suspect that scientists have decided that they have better things to do with their money.
|
2021-01-25 04:57:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 15, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8565946817398071, "perplexity": 440.53803586626447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703564029.59/warc/CC-MAIN-20210125030118-20210125060118-00447.warc.gz"}
|
https://math.wikia.org/wiki/Open
|
## FANDOM
1,165 Pages
An open subset of a metric space is a set that contains only interior points.
In the text below, $(X, d)$ will always refer to a metric space.
## Definition: neighbourhood of a point
Let $x \in X$. By a neighbourhood of $x$ of radius $0 < r \in \mathbf R$ we mean the set $N_r(x) = \{ y \in X | d(x, y) < r \}$.
## Definition: interior point
Let $E$ be a nonempty subset of $X$. A point $x \in E$ is said to be an interior point of $E$ if and only if there exists a neighbourhood $N$ of $x$ such that $N \subset E$.
## Definition: open set
A nonempty subset $E$ of $X$ is said to be open if and only if every point of $E$ is an interior point of $E$.
The property of being open is related to the property of being closed by the following theorem.
## Theorem: relation between open and closed sets
A subset $E$ of $X$ is open if and only if its complement $E^c$ is a closed subset of $X$.
Proof. First suppose that $E^c$ is closed. We want to show that this implies that $E$ is open. Choose $x \in E$. Then $x$ is not a limit point of $E^c$ (if it was, then $x$ would be an element of $E^c$, by definition of being closed, which is absurd, since $x \in E$), and hence there exists a neighbourhood $N$ of $x$ such that $N \cap E^c$ is empty. But then $N \subset E$ so that $x$ is an interior point of $E$. Hence $E$ is open.
Now suppose that $E$ is open. We want to show that this implies that $E^c$ is closed. Let $x$ be a limit point of $E^c$. If no such $x$ exists, then $E^c$ contains all its limit points, and the proof is complete. If not, then every neighbourhood $N$ of $x$ is such that $N \cap E^c$ is not empty. But then $x$ is not an interior point of $E$. Since $E$ is open, we must have $x \in E^c$. But then $E^c$ is closed, and the proof is complete.
QED.
## References
• Rudin, Walter: Principles of Mathematical Analysis, 3rd edition, McGraw Hill, 1976.
Community content is available under CC-BY-SA unless otherwise noted.
|
2019-07-22 01:26:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.953811526298523, "perplexity": 46.973346001093084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527458.86/warc/CC-MAIN-20190722010436-20190722032436-00203.warc.gz"}
|
https://ijfs.usb.ac.ir/article_3331.html
|
# REDUNDANCY OF MULTISET TOPOLOGICAL SPACES
Document Type : Research Paper
Author
Department of Mathematics, Faculty of Science, South Valley University, Qena 83523, Egypt
Abstract
In this paper, we show the redundancies of multiset topological spaces. It is proved that $(P^\star(U),\sqsubseteq)$ and $(Ds(\varphi(U)),\subseteq)$ are isomorphic. It follows that multiset topological spaces are superfluous and unnecessary in the theoretical view point.
Keywords
#### References
[1] K. T. Atanassov, Intuitionistic fuzzy sets, Fuzzy Sets and Systems, 20(1) (1986), 87{96.
[2] W. D. Blizard, Multiset theory, Notre Dame J. Formal Logic, 30(1) (1988), 36{66.
[3] S. A. El-Sheikh, R. A. K. Omar and M. Raafat,
-operation in m-topological space, Gen.
Math. Notes, 7(1) (2015), 40{54.
[4] S. A. El-Sheikh, R. A. K. Omar and M. Raafat, Separation axioms on multiset topological
space, Journal of New Theory, 7 (2015), 11{21.
[5] K. P. Girish and J. S. Jacob, On multiset topologies, Theory and Applications of Mathematics
and Computer Science, 2(1) (2012), 37{52.
[6] K. P. Girish and S. J. John, Transactions on rough sets XIV, Springer Berlin Heidelberg,
Berlin, Heidelberg, (2011), Ch. Rough Multiset and Its Multiset Topology, 62{80.
[7] K. P. Girish and S. J. John, Multiset topologies induced by multiset relations, Information
Sciences, 188 (2012), 298{313.
[8] A. Kandil, O. Tantawy, S. El-Sheikh and A. Zakaria, Multiset proximity spaces, Journal of
Egyptian Mathematical Society, 24(4) (2016), 562-567.
[9] P. M. Mahalakshmi and P. Thangavelu, m-connectedness in m-topology, International Journal
of Pure and Applied Mathematics, 106(8) (2016), 21-25.
[10] J. Mahanta and D. Das, Boundary and exterior of a multiset topology, ArXiv e-prints,
arXiv:1501.07193.
[11] D. Molodtsov, Soft set theory- rst results, Computers and Mathematics with Applications,
37 (45) (1999), 19{31.
[12] F. G. Shi and B. Pang, Redundancy of fuzzy soft topological spaces, Journal of Intelligent and
Fuzzy Systems, 27 (4) (2014), 1757{1760.
[13] F. G. Shi and B. Pang, A note on soft topological spaces, Iranian Journal of Fuzzy Systems,
12 (5) (2015), 149{155.
[14] R. R. Yager, On the theory of bags, International Journal of General Systems, 13 (1986),
23{37.
[15] L. A. Zadeh, Fuzzy sets, Inform. Control, 8 (1965), 338{353.
|
2023-03-22 09:03:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7485098838806152, "perplexity": 10932.217519610176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943809.22/warc/CC-MAIN-20230322082826-20230322112826-00286.warc.gz"}
|
http://mathhelpforum.com/statistics/226386-evaluating-binomial-coefficiants.html
|
Thread: Evaluating binomial coefficiants
1. Evaluating binomial coefficiants
Hi, I have three binomial coefficient problems that I need to prove.
a.) Is (93 Choose 30) greater than, equal to, or less than (93 Choose 31)
b.) Is (93 Choose 30) greater than, equal to, or less than (93 Choose 63)
c.) let a= (99 Choose 4) and let b = (100 choose 95) in terms of a and b.
I know a.) is less then b.) is equal to, and c.) is a+b simply by plugging them into a calculator but is there away to show these are correct without computations? My professor wasn't concerned with the actual values, more so he was looking for the reasoning behind why these are correct without actually computing them. Thank you
2. Re: Evaluating binomial coefficiants
Originally Posted by crownvicman
Hi, I have three binomial coefficient problems that I need to prove.
a.) Is (93 Choose 30) greater than, equal to, or less than (93 Choose 31)
b.) Is (93 Choose 30) greater than, equal to, or less than (93 Choose 63)
c.) let a= (99 Choose 4) and let b = (100 choose 95) in terms of a and b.
I know a.) is less then b.) is equal to, and c.) is a+b simply by plugging them into a calculator but is there away to show these are correct without computations? My professor wasn't concerned with the actual values, more so he was looking for the reasoning behind why these are correct without actually computing them. Thank you
Choose(n,k) = Choose(n,n-k)
Choose(n,k) is maximum at (n/2) if n is even or at (n-1)/2 and (n+1)/2 if odd
Choose(n,k) is monotonic increasing to this maximum and monotonic decreasing after it
so, for a given n, the farther away k is from the midpoint the smaller it is
That answers (a) and (b) w/o computation.
I don't quite get (c). Are you saying you want to express Choose(100,95) in terms of Choose(99,4) ?
3. Re: Evaluating binomial coefficiants
Thank you very much. Sorry I mis-wrote question c.) it's supposed to be a= (99 choose 4) and let b= (99 choose 5). The question was to express (100 choose 95) in terms of a and b, which I'm still not quite sure how to explain that
4. Re: Evaluating binomial coefficiants
Originally Posted by crownvicman
Thank you very much. Sorry I mis-wrote question c.) it's supposed to be a= (99 choose 4) and let b= (99 choose 5). The question was to express (100 choose 95) in terms of a and b, which I'm still not quite sure how to explain that
$\left(\begin {array}{c}n \\ k \end{array}\right)=\left(\begin {array}{c}n-1 \\ k-1 \end{array}\right) + \left(\begin {array}{c}n-1 \\ k \end{array}\right)$
5. Re: Evaluating binomial coefficiants
Originally Posted by crownvicman
Hi, I have three binomial coefficient problems that I need to prove.
a.) Is (93 Choose 30) greater than, equal to, or less than (93 Choose 31)
b.) Is (93 Choose 30) greater than, equal to, or less than (93 Choose 63)
c.) let a= (99 Choose 4) and let b = (100 choose 95) in terms of a and b.
I know a.) is less then b.) is equal to, and c.) is a+b simply by plugging them into a calculator but is there away to show these are correct without computations? My professor wasn't concerned with the actual values, more so he was looking for the reasoning behind why these are correct without actually computing them. Thank you
For a) it would help to remember that \displaystyle \begin{align*} {n\choose{k}} = \frac{n!}{k! \left( n - k \right) !} \end{align*}, which means \displaystyle \begin{align*} {93\choose{30}} = \frac{93!}{30! \cdot 63!} \end{align*} and \displaystyle \begin{align*} {93\choose{31}} = \frac{93!}{31! \cdot 62! } \end{align*}. Which of these is greater?
You can do the same for b).
|
2016-10-26 17:19:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9180813431739807, "perplexity": 812.9319218191001}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720967.29/warc/CC-MAIN-20161020183840-00412-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://www.quizover.com/course/section/relationship-to-the-design-parameter-q-by-openstax
|
# 0.7 Appendix a (Page 2/4)
Page 2 / 4
$L=\frac{\alpha {f}_{s}}{\delta f}$
where $L,\phantom{\rule{4pt}{0ex}}{f}_{s}$ , and $\delta f$ are as just defined, and α is given by
$\alpha =0.22+0.0366·SBR$
The validity of these simplified formulas depends on a number of assumptions, detailed in [link] , but all of them are sufficiently satisfied in this case to permit accuracy in the estimation of L within 5% or so.
Examination of [link] shows that $\delta f$ , the filter transition band, can be no larger than $\Delta f-B$ , the difference between the channel spacing and the bandwidth of each channel. Recalling also that $N·\Delta f={f}_{s}$ , we find that
$L=N\alpha \frac{\Delta f}{\delta f}=N\alpha \left\{\frac{1}{1-\frac{B}{\Delta f}}\right\}.$
Thus, to first order, the pulse response duration of the required filter is proportional to the number of channels N and is hyperbolic in the percentage bandwidth , the ratio of the channel bandwidth B to the channel spacing $\Delta f$ . The effect of the proportionality to α will be examined shortly.
## Relationship to the design parameter Q
The development presented in the section Derivation of the equations for a Basic FDM-TDM Transmux defined the integer variable Q as the ratio of L and N . It was pointed out there without proof that in fact Q was an important design parameter, not just the artifact of two others. This can now be seen by combining the relationship $L\equiv QN$ with [link] to produce an expression for Q :
$Q=\alpha \left\{\frac{\Delta f}{\delta f}\right\}=\alpha \left\{\frac{1}{1-\frac{B}{\Delta f}}\right\}$
Since N depends strictly on the number of channels into which the input band is divided, Q contains all of the information about the impact of the desired filter characteristics.
## Continuation of the telegraphy demodulation example
Consider again the example of demodulating R.35 FDM FSK VFT canals discussed in the section Example: Using an FDM-TDM Transmux to Demodulate R.35 Telegraphy Signals . In that section, we determined that the following parameters would be appropriate: ${f}_{s}=3840$ Hz, $N=64$ , and $\Delta f=60$ Hz. To determine Q , and hence the rate of computation needed for the data weighting segment of the transmultiplexer, we need to specify B and $SBR$ , the degree of stopband suppression required.
Generally speaking, the filters in an FSK demodulator need to have unity gain at the mark or space frequency and zero gain at the space or mark frequency, respectively. A computer simulation used to verify the design of the demodulator showed that suppression of 50 dB was more than enough to provide the needed performance. At first glance it might appear that the transition band $\delta f$ can be allowed to equal the tone spacing $\Delta f=60$ Hz, making the percentage bandwidth equal to zero. Actual FSK VFT systems, however, sometimes experience bulk frequency shifts of several Hertz. In order to maintain full performance in the presence of such frequency offsets, the tuner filters need to be designed with a passband bandwidth of 15 Hz or so. Using $SBR=50$ dB in [link] , we find with [link] that the required value of Q for this application is about 2.71. The actual value chosen for this application was 3, producing a pulse response duration of $L=QN=192$ , with the remaining degrees of freedom in the filter design used to widen the filter still more, allowing for even more frequency offset.
Do somebody tell me a best nano engineering book for beginners?
what is fullerene does it is used to make bukky balls
are you nano engineer ?
s.
what is the Synthesis, properties,and applications of carbon nano chemistry
so some one know about replacing silicon atom with phosphorous in semiconductors device?
Yeah, it is a pain to say the least. You basically have to heat the substarte up to around 1000 degrees celcius then pass phosphene gas over top of it, which is explosive and toxic by the way, under very low pressure.
Harper
how to fabricate graphene ink ?
for screen printed electrodes ?
SUYASH
What is lattice structure?
of graphene you mean?
Ebrahim
or in general
Ebrahim
in general
s.
Graphene has a hexagonal structure
tahir
On having this app for quite a bit time, Haven't realised there's a chat room in it.
Cied
what is biological synthesis of nanoparticles
what's the easiest and fastest way to the synthesize AgNP?
China
Cied
types of nano material
I start with an easy one. carbon nanotubes woven into a long filament like a string
Porter
many many of nanotubes
Porter
what is the k.e before it land
Yasmin
what is the function of carbon nanotubes?
Cesar
I'm interested in nanotube
Uday
what is nanomaterials and their applications of sensors.
what is nano technology
what is system testing?
preparation of nanomaterial
Yes, Nanotechnology has a very fast field of applications and their is always something new to do with it...
what is system testing
what is the application of nanotechnology?
Stotaw
In this morden time nanotechnology used in many field . 1-Electronics-manufacturad IC ,RAM,MRAM,solar panel etc 2-Helth and Medical-Nanomedicine,Drug Dilivery for cancer treatment etc 3- Atomobile -MEMS, Coating on car etc. and may other field for details you can check at Google
Azam
anybody can imagine what will be happen after 100 years from now in nano tech world
Prasenjit
after 100 year this will be not nanotechnology maybe this technology name will be change . maybe aftet 100 year . we work on electron lable practically about its properties and behaviour by the different instruments
Azam
name doesn't matter , whatever it will be change... I'm taking about effect on circumstances of the microscopic world
Prasenjit
how hard could it be to apply nanotechnology against viral infections such HIV or Ebola?
Damian
silver nanoparticles could handle the job?
Damian
not now but maybe in future only AgNP maybe any other nanomaterials
Azam
Hello
Uday
I'm interested in Nanotube
Uday
this technology will not going on for the long time , so I'm thinking about femtotechnology 10^-15
Prasenjit
can nanotechnology change the direction of the face of the world
At high concentrations (>0.01 M), the relation between absorptivity coefficient and absorbance is no longer linear. This is due to the electrostatic interactions between the quantum dots in close proximity. If the concentration of the solution is high, another effect that is seen is the scattering of light from the large number of quantum dots. This assumption only works at low concentrations of the analyte. Presence of stray light.
how did you get the value of 2000N.What calculations are needed to arrive at it
Privacy Information Security Software Version 1.1a
Good
Got questions? Join the online conversation and get instant answers!
|
2018-09-23 23:34:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 19, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7389063835144043, "perplexity": 1606.6390581717826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159938.71/warc/CC-MAIN-20180923232129-20180924012529-00404.warc.gz"}
|
https://math.stackexchange.com/questions/2962452/algebraic-construction-of-a-state-space-system
|
# Algebraic construction of a state space system
How could a state space system be automatically constructed based on the knowledge of underlying circuit?
E.g. -
$$\dot{v_c}=\frac{-v_c}{CR}+\frac{V_{dc}}{CR}$$
That is, the representation is easy to figure out by hand. But what would be the computer-based, algorithmic way?
• You may want to start here. – Fabio Somenzi Oct 19 '18 at 19:04
• Did you heard about the software Spice Simulator? – Cesareo Oct 19 '18 at 19:38
• @Cesareo Of course I have - use it almost daily. But I am looking to build a state-space, switched SPICE. – SunnyBoyNY Oct 19 '18 at 20:57
|
2019-04-20 10:12:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42721715569496155, "perplexity": 1242.3453234074657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578529606.64/warc/CC-MAIN-20190420100901-20190420122901-00461.warc.gz"}
|
https://tex.stackexchange.com/questions/408230/tufte-book-full-width-caption-above-full-width-table
|
# Tufte-book: Full-width caption above full-width table
It seems that all captions in Tufte-book are placed in the margin.
I have a full-width table where I can place the caption above the table using
\begin{table*}[b]
but how do I specify the caption to have the same width as the table?
Complete MWE:
\documentclass[b5paper,11pt,nobib,symmetric,justified,notoc]{tufte-book}
\begin{document}
\begin{table*}[b]
\caption{Foo-bar Foo-bar Foo-bar Foo-bar Foo-bar Foo-bar}
\begin{tabular}{llllllllll}
Foo & bar & Foo & bar & Foo & bar & Foo & bar & Foo & bar
\end{tabular}
\end{table*}
\end{document}
• Welcome. Please add a minimal working example (mwe). – Bobyandbob Dec 30 '17 at 21:55
I will say that the best solution is switch to another document class, but in case that this is not a valid option ...
\documentclass[b5paper,11pt,nobib,symmetric,justified,notoc]{tufte-book}
\usepackage{caption}
\usepackage{booktabs}
\begin{document}
\begin{fullwidth}
\centering
\begin{tabular}{llllllllll}\toprule
Foo & bar & Foo & bar & Foo & bar & Foo & bar & Foo & bar \\
\bottomrule\\[-2ex]
\multicolumn{10}{c}{\captionof{table}{
Foo-bar Foo-bar Foo-bar Foo-bar Foo-bar Foo-bar}}
\end{tabular}
\end{fullwidth}
\end{document}
|
2019-08-19 13:46:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.979960560798645, "perplexity": 6271.549305418071}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314752.21/warc/CC-MAIN-20190819134354-20190819160354-00394.warc.gz"}
|
http://haroldsoh.com/2011/07/
|
# Monthly Archives: July 2011
Just testing some $\LaTeX$. Some of the world’s most favourite equations:
$\nabla \cdot \textbf{D} = \rho \\ \nabla \cdot \textbf{B} = 0 \\ \nabla \times \textbf{E} = \frac{\partial \textbf{B}}{\partial t} \\ \nabla \times \textbf{H} = \frac{\partial \mathbf{D}}{\partial t} + \mathbf{J}$
$\mathbf{F} = m\mathbf{a}$
$a^2 + b^2 = c^2$
$H\psi = E\psi$
$E = mc^2$
$S = k \ln W$
$1+1 =2$
$\delta S = 0$
$p = \frac{h}{\lambda}$
|
2014-10-23 19:11:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 10, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5936880111694336, "perplexity": 13602.306895259962}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413558067467.26/warc/CC-MAIN-20141017150107-00220-ip-10-16-133-185.ec2.internal.warc.gz"}
|
http://blog.vinux.in/
|
# Useful Softwares for Research
I took a session on ‘Useful software for research’ for the new batch of research scholars. I hope sharing the slides useful to other research students. The software suggestions were sorted based on my experience and colleagues’ experiences. The presentation is prepared in svg (jessyink template) format, and it is best viewed in Firefox/chrome browser.
Let me know if you have any comments or suggestions. I will include those for future sessions.
# Data visualisation talk: Presentation using reports package
Why did I use html5 for my today’s talk? My last presentation was prepared using html5. This time I wanted some innovation while making the slides. I prepared first few slides in Jessyink. Then I got to know that my friend (Trinker) developed an R package, reports, for creating an amazing presentation including beamer and reveal. js template. This is cool!!! Create a presentation in R for R talk. I could write everything in R markdown and convert into beautiful html5 slides.
You could find my presentation just below this blog. If any of you wanted to recreate the same, here is the R Markdown file Rmd. I did bit modification in the style file and some tags. Otherwise, you could generate the same. Ensure you have all required packages (check Rmd file) before running the file. The options required for slide creation very well described in the help (reports package).
Short version of slide creation is as follows:
1. Install reports package (install.packages(“reports”) or take it from github).
2. Call the presentation(“your_folder”) function. This creates Rnw and Rmd files inside your_folder/PRESENTATION.
3. Add contents to Rmd file. Then click knit HTML if you are using RStudio or call knit2html. This would create the html file.
4. Finally, run reveal. j’s () function. This would create the html5 presentation slides inside your_folder/PRESENTATION/reveal.js(). You could customize the path or theme or transition… I have used the theme as beige and transition as a cube.
This process wouldn’t be smoother for an R beginner who uses Windows. One should make sure all dependencies are set. Github page provides more help resources, and it is helpful to complete successful creation of your presentation.
Apart from reports one can try slidify package for creating html5 slides. This got multiple html5 presentation templates including shower, io2012, etc.
One thing I am a bit unhappy is that Emacs+ess doesn’t have a Rmd mode. I am sure someone is working on that. Now it is time for my research work.
Note:Use firefox or chrome browser to view my presentation. left, right, up*, and down* arrows for transition.
PS: ggplot2 is liked by almost all. As per the feedback, I should spend more time on ggplot2.
# Fun with R graphics: A raptor and a cake
R graphics are something I always explore or experiment with. Base graphics, ggplot2 are my favorites. On other graphics packages, I haven’t used plotrix much. I am currently exploring grid package.
## Raptor
First one is using ggplot2. (You could also do this using base graphics).
It would be more fun if we can do animation on Raptors. You can find an animated version of Raptor here. Rcode.
## Cake
Here is the easy one. This one is a cake using plotrix package.
I have already discussed this graph in my favorite forum talks tats. I dedicated both the graphs to my friends in the forum. You can find more fun graphs in this forum.
# R Graphics with ggplot2
ggplot2 is one of the most elegant R packages for data analysis and visualization. Recently I gave a tutorial on ggplot2 package. You could find my ggplot2 notes here (click the image below).
You could find my presentation below. The presentation made in html5. Use your left/right key to go through the presentation.
# A heuristic enhancement on an optimisation algorithm
Directly or indirectly we heavily use optimization in our life. Resource or utility optimization is something we hear every day. Here I will discuss a standard optimization problem in statistics, maximum likelihood estimate. This blog is a small episode of my recent work. I used R language to implement the optimization exercise.
There are packages available for optimization in R. The mostly used are: optim() and optimize() are in stats package. A dedicated function mle() available in stats4 package. This function is very useful most of the mean part modelling (eg: glm). What we need for this optimization is to prepare a function for the (log) likelihood and gradient (or hessian). We can also specify the algorithm (methods) as any of the following: “Nelder-Mead”, “BFGS”, “CG”, “L-BFGS-B”, “SANN”, and “Brent”. This optimization output gives the necessary information for the model estimation.
I was working on an extension of bivariate garch model. It includes more than 20 parameters. Let me focus pure garch model, garch(1,1), as a prototype to explain the algorithm. Here the model equation is in the variance part. Also, the parameters have constraint to ensure the positive variance. The model equation is given below.
$x_t=z_t\sqrt{h_t};z_t\sim\mathcal{N}(0,1)$
$h_t = \omega + \alpha x_{t-1}^2 + \beta h_{t-1}; \omega >0 , \alpha,\beta \geq 0$
The optimization function may work for the garch (1,1) estimation. But as the number of parameters increases, there is not much use with optimal functions in R. The main reason for failing the algorithm is the nonconvex surface with sensitive boundaries. So the solution is to write the codes for newton raphson algorithm or BHHH or other modified algorithms.
I wrote BHHH algorithm for my model. I was not able solve it fully, it was not giving the optimum. The reason is the direction for each iteration when it reaches sensitive area is not properly scaled. ie. either it overshoot in some dimension or there is no significant improvement in some direction. So to get appropriate multiplier on optimal direction fails here.
Now I will discuss about the heuristic part of the modification in the algorithm. In this heuristic, certain number of dimensions are randomly selected and perturbed. Remaining dimensions are left unchanged. This way we can overcome the scaling issue. This given me optimal solution but it is not optimised in the execution perspective. It take slightly more time ( I compared few simple models with my heuristic and existing garch estimation).
This approach looks very silly for regression type problems. But in garch like process tweaking like this is very much required. I am still looking for the better way. Anyway the blogging helped me to revise my approach again. Please let me know if you have any suggestions.
# Why Emacs is important to me? : ESS and org-mode
I cannot believe that I lived without emacs! Now I use emacs more than any application. The usage of emacs is going to increase as the days go. A non-emacs user may think that why an editor should get a credit like “emacs is an integral part of my life”. Emacs users may find nothing unusual about my statement. Well! I will brief my emacs story. I will start with my programming life.
### My programming life
My programming life is summarized as below. Even though I learned multiple languages I have included the languages where I spend more than 30 days(or more than 200 hours) roughly.
• 2000-2001: Basic and FORTRAN ->Windows days
• 2002: C and C++ ->Windows days
• 2003: VC++ and OpenGL ->Windows days
• 2004: C and C++ ->Here onwards Linux is my favorite desktop OS
• 2005-2006: JAVA, VB and VFoxpro* (I don’t think anybody uses this one)
• 2007-2009: SAS and Shell Script
• 2010-2011: R
• 2012 : R, Python and elisp
Out of this I consider 2002, 2003 and 2012 are my best years of programming. I should consider my first few years were my golden days of programming. Those days I didn’t have to do anything (No responsibility) other than coding and some mathematics. At that time I never understood the importance of skill of programming. I treated programming as one subject of mathematics. I did programming mainly for fun. I thank my best friend Jais for my consistent interest in coding. We had lots of intellectual discussion and debates. We spent lots of hours to create graphical applications, screen savers, hacking, etc. Now, after a long break, I am back to programming lifestyle.
When I worked for a software company (2005-2006) I did more cut-copy-paste and trial & error methods. This way I never learnt anything deep. Money and movies where other distractions. Weekends were mainly window shopping and parties. The main reason why I didn’t do any real programming was I didn’t get a good friend who is crazy enough on geek stuff.
### What motivated to learn Emacs
I always like simple applications. When I say simple it means quick execution, transparent functionality and customisable. I really like VI/vim editor and it was my favorite editor till 2011 (I don’t consider the Windows era). I tried emacs in 2004 and I didn’t like it. It was a nightmare at that time because to remember hell number of commands. Again, I tried emacs in 2010-2011. This time the motivation was that fact that ’emacs topped as the favorite editor for most of the stat tech guys’. But I didn’t learn anything other than C-x C-f and C-x C-c. Now what happened? I was searching a good R editor. I got to know about ESS and ESS is the first reason to learn emacs. So I started using emacs only for R coding. Then I started exploring shortcuts. I found it is very easy to customize. I have copied few short cuts and started customizing my own. Fortunately stackoverflow is very helpful on this. Some of the customisations are
• auto complete
• execution short cuts
• folding mode
First two features are very common ins standard editor but, not the third one. I have seen folding mode in Rstudio. After experimenting on customisation I started learning emacs lisp. This made me think one editor for all programming.
The main reason to learn emacs is Org-mode. I guess this should be one of the key addons to emacs. Like ggplot2 in R. The advantage of org-mode is integrating emacs with:
|
2015-10-05 01:20:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 1, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3449092209339142, "perplexity": 2951.5517022836657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736676547.12/warc/CC-MAIN-20151001215756-00097-ip-10-137-6-227.ec2.internal.warc.gz"}
|
http://drhuang.com/science/mathematics/math%20word/math/c/c551.htm
|
## Confluent Hypergeometric Function of the First Kind
The confluent hypergeometric function is a degenerate form the Hypergeometric Function which arises as a solution the Confluent Hypergeometric Differential Equation. It is commonly denoted , , or , and is also known as Kummer's Function of the first kind. An alternate form of the solution to the Confluent Hypergeometric Differential Equation is known as the Whittaker Function.
The confluent hypergeometric function has a Hypergeometric Series given by
(1)
where and are Pochhammer Symbols. If and are Integers, , and either or , then the series yields a Polynomial with a finite number of terms. If is an Integer , then is undefined. The confluent hypergeometric function also has an integral representation
(2)
(Abramowitz and Stegun 1972, p. 505).
Bessel Functions, the Error Function, the incomplete Gamma Function, Hermite Polynomial, Laguerre Polynomial, as well as other are all special cases of this function (Abramowitz and Stegun 1972, p. 509).
(3)
where is the Confluent Hypergeometric Function and , , , ....
See also Confluent Hypergeometric Differential Equation, Confluent Hypergeometric Function of the Second Kind, Confluent Hypergeometric Limit Function, Generalized Hypergeometric Function, Heine Hypergeometric Series, Hypergeometric Function, Hypergeometric Series, Kummer's Formulas, Weber-Sonine Formula, Whittaker Function
References
Abramowitz, M. and Stegun, C. A. (Eds.). Confluent Hypergeometric Functions.'' Ch. 13 in Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing. New York: Dover, pp. 503-515, 1972.
Arfken, G. Confluent Hypergeometric Functions.'' §13.6 in Mathematical Methods for Physicists, 3rd ed. Orlando, FL: Academic Press, pp. 753-758, 1985.
Iyanaga, S. and Kawada, Y. (Eds.). Hypergeometric Function of Confluent Type.'' Appendix A, Table 19.I in Encyclopedic Dictionary of Mathematics. Cambridge, MA: MIT Press, p. 1469, 1980.
Morse, P. M. and Feshbach, H. Methods of Theoretical Physics, Part I. New York: McGraw-Hill, pp. 551-554 and 604-605, 1953.
Slater, L. J. Confluent Hypergeometric Functions. Cambridge, England: Cambridge University Press, 1960.
Spanier, J. and Oldham, K. B. The Kummer Function .'' Ch. 47 in An Atlas of Functions. Washington, DC: Hemisphere, pp. 459-469, 1987.
|
2023-03-30 10:45:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.936486005783081, "perplexity": 951.1780448536068}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949181.44/warc/CC-MAIN-20230330101355-20230330131355-00184.warc.gz"}
|
http://www.insbwafer.com/2018/01/
|
## Highlights
InSb magnetotransport measurements reveal 3 regimes in carrier density/mobility.
Nomarski imaging shows surface structure matching electronic mean free path lengths.
Transport lifetime modelling shows surface features are limiting scattering rate.
Potential for mobility improvement with buffer redesign.
## Abstract
We report magnetotransport measurements of InSb/Al1−xInxSb quantum well structures at low temperature (3 K), with evidence for 3 characteristic regimes of electron carrier density and mobility. We observe characteristic surface structure using differential interference contrast DIC (Nomarski) optical imaging, and through use of image analysis techniques, we are able to extract a representative average grain feature size for this surface structure. From this we deduce a limiting low temperature scattering mechanism not previously incorporated in transport lifetime modelling of this system, with this improved model giving strong agreement with standard low temperature Hall measurements. We have demonstrated that the mobility in such a material is critically limited by quality from the buffer layer growth, as opposed to fundamental material scattering mechanisms. This suggests that the material has immense potential for mobility improvement over that reported to date.
## Keywords
Magnetotransport,
Nomarski microscopy,
Electron scattering,
## 1. Introduction
Indium antimonide (InSb) exhibits the lowest reported electron effective mass (${m}^{*}=$ 0.014 ${m}_{e}$) and highest reported room-temperature electron mobility (${\mu }_{e}=$78,000 cm2 V−1 s−1) of any compound semiconductor. These properties make InSb particularly suited to many electronic applications including low power high frequency electronics and quantum device realisation. There has been a recent resurgence of interest in the development of high quality InSb material following the report of two-dimensional electron gas (2DEG) channel mobilities in excess of 200,000 cm2 V−1 s−1 at $T=$ 1.8 K , and the possibility of Majorana fermion observation in InSb nanowires .
Furthermore, the strong spin-orbit interaction and extremely large Landé $g$-factor ($g\approx$ −50) exhibited in InSb has gained attention for potential exploitation in spintronics and quantum information control .
Previous studies of carrier transport in InSb 2DEGs have considered standard scattering mechanisms using the relaxation time approximation and described the mobility variation over a wide range of temperature (typically 3–293 K). Whilst there has been good agreement, parameters used have tended to be extreme values to enable acceptable fits to data. It is believed that a major scattering mechanism associated with material quality has not been considered previously, with this having a major effect on the mobility behaviour. Considering this additional structural scattering allows for far more reasonable values for standard scattering mechanisms, showing that there is immense potential for mobility improvement in this material.
## 2. Growth and sample details
The InSb quantum well heterostructures studied were grown by solid source molecular beam epitaxy (MBE) on semi-insulating GaAs substrates (and so are therefore lattice mismatched with regard to the substrate). In growth order, the epitaxy comprises an aluminium antimonide (AlSb) accommodation layer, a 3 µm Al0.1In0.9Sb strain-relieving barrier layer (to allow for mismatch relaxation), a 30 nm InSb quantum well layer and a 50 nm Al0.15In0.85Sb top barrier layer. Tellurium (Te) δ-doping is introduced into the top barrier, 25 nm above the InSb quantum well (QW). Deliberate doping of the lower barrier is avoided in order to prevent any impurity donor atoms being carried forward on the growth plane which could significantly compromise the transport lifetime of carriers in the quantum well. Hall bar devices with an aspect ratio of 5:1 (nominally 200 µm$×$40 µm) were fabricated using standard photolithography, mesa wet etching and metal evaporation techniques. Relatively low temperature fabrication processes were used ($\le$100 °C) to prevent excess metal penetration and tellurium dopant migration in the samples. The fabricated devices were mounted into non-magnetic ceramic leadless chip carriers and contact between the package and devices made via gold fine wire wedge bonding.
## 3. Experimental determination of mobility and sheet carrier density
All samples were measured over a range of temperature between 2.8 K and 300 K using a closed cycle pulse tube cryostat. The sample, when placed in the cryostat, was situated between the poles of a 0.6 T electromagnet. Electrical measurements of the samples were performed using a combination of a Keithley 6221 Current Source Meter and a Keithley 2401 Nanovoltmeter, using a pseudo-AC technique to remove any voltage drifts due to heating. To determine the mobility and carrier density, a constant current of either 1 μA or 2 μA was applied and the resulting transverse and longitudinal voltages recorded. No heating effects were observed with use of different current values. The magnetic field was then swept from 0 T to 0.6 T and from 0 T to $-\phantom{\rule{1em}{0ex}}$0.6 T, and this was repeated as a function of temperature.
Fig.1.(a) shows the measured longitudinal (${R}_{\mathit{xx}}$) and transverse Hall resistance (${R}_{\mathit{xy}}$) for a typical sample at temperatures of 3 K, 50 K and 150 K. Also shown are 2 carrier fits to the data , with the fits matching well to the data. The fitting gives a 2D sheet carrier density (${n}_{2\mathit{D}}$) of $~\phantom{\rule{1em}{0ex}}$3$×$1011 cm−2 and mobility $~\phantom{\rule{1em}{0ex}}$200,000 cm2 V−1 s−1, with ${n}_{2D}$ increasing slightly over the temperature range studied.
Fig. 1(a) Typical longitudinal (${R}_{\mathit{xx}}$) and Hall (${R}_{\mathit{xy}}$) resistance vs. $B$-field for 3 K, 50 K and 150 K (symbols, every 25th data point plotted for clarity) with 2 carrier fits shown as solid lines (b) Measured mobility (two carrier fit) vs. 2D sheet carrier density (${n}_{2D}$) from 3 K Hall measurements. A series of samples with increasing δ-doping levels is shown by the larger, filed symbols, showing at first an increasing mobility with carrier density. This then plateaus before then decreasing. Smaller, unfilled symbols indicate samples from other growth batches. Dashed lines are contours of constant conductance.
Fig.1.(b) shows the extracted mobilities and carrier densities for a range of different samples at 3 K. Samples with increasing δ-doping (filled symbols) broadly define three regimes in the data. Initially, in region 1, an increasing mobility is observed for an increase in carrier concentration ${n}_{2D}$ (from 1$×$1011 cm−2 to 3$×$1011 cm−2). This is believed to be due to single subband filling, calculated from Schrödinger-Poisson (S.P.) modelling , giving rise to increased Thomas-Fermi screening. The mobility, $\mu$, then begins to plateau at $~$ 250,000 cm2 V−1 s−1 for a narrow range of carrier densities (region 2), before decreasing beyond 4$×$1011 cm−2 (region 3). In region 3, Schrödinger-Poisson modelling shows there is the possibility of multiple subband occupancy and additional intersubband scattering.
## 4. Nomarski image analysis
To help examine the limiting factors affecting the higher mobility samples, the surface morphology was considered using optical differential interference contrast DIC (Nomarski) imaging. To analyse the material surface multiple microscope images were taken of various samples in a spread of positions on the surface. Images were taken at an optical magnification of $×$50. A raw Nomarski image of a standard wafer with ${n}_{2D}~$ 3$×$1011 cm−2 and $\mu ~$ 200,000 cm2 V−1 s−1 is shown in Fig.2.
Fig. 2Left: Optical Nomarski image of wafer surface, magnification $×\phantom{\rule{1em}{0ex}}$50, for a sample with ${n}_{2D}\phantom{\rule{1em}{0ex}}~$3×1011 cm−2 and mobility $~$200,000 cm2 V−1 s−1. The surface is clearly textured, with features of varying sizes. Right: Comparison of mean feature diameter (and error) (open green circles) with weighted mean and one standard deviation (green line and shading) as a function of mobility, and measured mean free path (red squares) with linear regression fitted (dashed line). Filled squares represent samples from material measured in this paper, open squares represent historical samples taken from references (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.).
The Nomarski image shows clear surface roughness, present similarly on all wafers imaged in this study. The roughness consists of approximately circular features with clear boundaries separating features. Due to the proximity of the 2DEG to the surface, it is reasonable to assume that this surface roughness, and in particular the boundaries, have a severe impact on the electron transport in the quantum well active layer.
Multiple grey scale images were taken of each wafer, and for each image, a small square sub-image was extracted using a window typically 200 pixels by 200 pixels of the original image. This ensured each sub-image contained many features. The sub-image was kernel smoothed to remove noise from the image and a 2D polynomial was fitted to remove any background trend (blush) in the image. To calculate the number of features, a 2D gradient was calculated by first determining gradients in the $x$ and $y$ directions, and depending on illumination, calculating the sum of or difference in these gradients. Any region above a set threshold was labelled as a definable feature and subsequently counted, giving the total number of features for the sub image. This analysis was repeated for multiple windows on each image (typically 40 sub images), with results combined to give a mean feature count, the error given by the standard deviation. This process was repeated for all images of each wafer, and corresponding feature counts averaged with errors combined in quadrature.
To turn the feature count into an average feature size, it was assumed that the features filled all space, and therefore the average area of each feature was given by the area of the sub-image divided by the mean feature count. A comparison of the mean feature diameter calculated using this Nomarski imaging analysis to the mean free paths, $\lambda$, deduced from a basic Drude transport model, is shown in Fig.2.The mean free paths are calculated using mobilities from standard 3 K Hall measurements of samples shown in Fig.1,, as well as from historical samples from references.
There is a clear trend in measured mean free path with mobility, reaching a peak mean free path of $~$ 2.5 µm. Fig.1.shows the calculated feature diameter is approximately constant across all samples measured, with a mean value of $~$2.43±0.13 µm. This is in excellent agreement with the maximum mean free path deduced from mobility measurement, and is strongly suggestive that at low temperatures, when phonon effects are reduced, an electron traveling in the quantum well may travel ballistically through a feature until it reaches a boundary, causing a scattering event and ultimately limiting the transport lifetime. It is speculated therefore that the surface features are the low temperature limiting scattering mechanism in determining high electron mobilities in the QW. Further advancement in mobility and mean free path without significant buffer redesign will be impossible.
To investigate the effect of the surface features on electron transport, a transport lifetime model was implemented following closely that performed by Orr et al. in their work on similar InSb QW devices . The aim of this was to emphasise the effect of feature sizes acting as a limiting scattering mechanism in the low temperature, high mobility regime. This model includes several of the dominant scattering mechanisms present in III-V heterostructures, including ionised impurities (remote ionised scattering from dopant ions and background $p$-type impurity scattering), phonons (both optical and acoustic phonons) and interface roughness. Also included is the effect of band non-parabolicity in the form of a modified effective mass, deduced from a six-band Kane model. The equations and parameters used for this transport model are taken from reference.
Included in the transport model is the effect of dopant dragging in the top cap. The scattering rate due to remote ionised impurities is now an extended sum of scattering from each individual 2D sheet of dopant charge, added inversely using Mathieson's rule. The charge density in each sheet follows an exponential drop off, with the total amount of charge being the sum of all sheets. This results in a slight reduction of scattering compared to a perfect single $\delta$-plane with the same amount of dopant. In this system, it is assumed that the dopant atoms are fully ionized, however not all of the electrons are donated to the well, with a proportion instead transferring to the surface. This proportion is taken as another exponential drop off, steeper than that of the dopant distribution. This ratio of dopant donating electrons to the well vs. total dopant atoms is necessary to achieve correct Schrödinger-Poisson models (where ratios of $~$5 to $~$6 were needed to achieve the correct sheet carrier densities) and so this ratio is included the transport model studied here. To match the modelled mobility to the measured mobility, physical parameters for each sample were considered. These included sheet carrier density at 3 K, various interface rms roughness values and background concentrations, as well as the ratio of dopant activated into the well. It was found that for reasonable values of background $p$-type impurities ($~$1–10$×$1015 cm−3) and rms roughness 0–5 monolayers), activation ratios significantly greater than those predicted by S.P. modelling were required. To achieve more reasonable ratios and to explain the measured mobility, another scattering mechanism is needed.
The model can be adapted to include an extra scattering rate term corresponding to the dimensions of the physical surface features observed in Nomarski imaging. If these features are taken as the ballistic mean free path length (${l}_{e}$), with inelastic scattering occurring at the boundaries, then their corresponding scattering rate is given by:
where ${n}_{2D}$ is the 2DEG sheet carrier density in the QW.
A typical graph of the modelled temperature dependant mobility, including a non-parabolic effective mass, is shown in Fig.3.alongside data for a sample with a $3$ K mobility of approximately 200,000 cm2 V−1 s−1. The model matches the data well in the low temperature regime where the dominant scattering mechanism is associated with the surface features. The proposed scattering mechanism is only observed at low temperatures whereas high temperature effects are dominated by phonon scattering.
Fig. 3Transport model mobility (lines) and measured data (symbols) vs. temperature including scattering due to 2.4 µm features as well as standard scattering terms (using typical values). The effect of dopant dragging on remote ionised impurity scattering is also included.
Performing this analysis for all of the measured samples using a typical feature size of 2.4 µm and a reasonable background dopant concentration of
$p=$ 5$×$1015 cm−3, the modelled and measured mobilities are in close agreement at low temperatures. The required ratios of total dopant to carriers in the well are generally around 0–10 for the higher doped, high mobility samples, matching those required for the S.P. modelling. This strongly suggests that scattering due to surface feature boundaries is indeed a limiting scattering mechanism in these QW structures. For the more lightly doped, lower mobility samples, the ratios required are slightly higher, indicating that physical feature scattering is not the limiting factor here, in agreement with the analysis in the Nomarski imaging section. Instead, in this regime, the remote ionised impurity scattering is dominating, with the subband filling and associated Thomas-Fermi screening leading to increased mobility with carrier density.
## 6. Conclusions
We have studied InSb/AlInSb QW 2DEG material and have shown evidence for three characteristic regimes in the measured low temperature Hall mobility behaviour with carrier density. We have demonstrated these regimes can be described by consideration of a combination of band filling from Schrödinger-Poisson modelling and physical surface structure scattering. Using image analysis techniques, representative feature sizes were extracted from Nomarski images of the material surfaces, with these shown to be a transport limiting scattering centre equal to the largest low temperature mean free paths measured. Through use of a conventional transport model, with updated terms for scattering due to a distribution of dopant, and for scattering due to physical features, we have shown it is possible to match predicted carrier densities and mobilities to those measured using low temperature Hall measurements. This work shows that with correct buffer redesign there is a clear potential for significant improvement in the mobility of such material, leading to many possible future implementations for high mobility materials.
## Acknowledgements
The work was supported by the UK Engineering and Physical Sciences Research Council[grant numbers EP/L012995/1 and EP/M507842]. Data supporting this research is openly available from the Cardiff University Research Portal at http://dx.doi.org/10.17035/d.2017.0031836175.
Source:ScienceDirect
|
2021-07-29 02:05:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 50, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6212502121925354, "perplexity": 1729.9264976838049}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153814.37/warc/CC-MAIN-20210729011903-20210729041903-00211.warc.gz"}
|
https://www.semanticscholar.org/paper/The-3.4%CE%BCm-absorption-in-Titan%E2%80%99s-stratosphere%3A-of-Cours-Cordier/1dde97aacf397e31a72fe56bd9958db89b52afba
|
# The 3.4μm absorption in Titan’s stratosphere: Contribution of ethane, propane, butane and complex hydrogenated organics
@article{Cours2020The3A,
title={The 3.4$\mu$m absorption in Titan’s stratosphere: Contribution of ethane, propane, butane and complex hydrogenated organics},
author={T. Cours and D. Cordier and B. Seignovert and L. Maltagliati and L. Biennier},
journal={Icarus},
year={2020},
volume={339},
pages={113571}
}
Abstract The complex organic chemistry harbored by the atmosphere of Titan has been investigated in depth by Cassini observations. Among them, a series of solar occultations performed by the VIMS instrument throughout the 13 years of Cassini revealed a strong absorption centered at ∼ 3 . 4 μ m . Several molecules present in Titan’s atmosphere generate spectral features in that wavelength region, but their individual contributions are difficult to disentangle. In this work, we quantify the… Expand
4 Citations
#### Figures and Tables from this paper
Ethane clathrate hydrate infrared signatures for solar system remote sensing
• Physics, Materials Science
• 2021
Abstract Hydrocarbons such as methane and ethane are present in many solar system objects, including comets, moons and planets. The interaction of these hydrocarbons with water ice at lowExpand
Distribution and intensity of water ice signature in South Xanadu and Tui Regio
Titan’s surface was revealed by Cassini ’s instruments, showing the presence of liquid hydrocarbons in lakes, and features like dry riverbed. In order to study the sediment transport in Titan’sExpand
Absorption cross sections for ethane broadened by hydrogen and helium in the 3.3 micron region
• Materials Science
• 2020
Abstract The infrared absorption cross sections of ethane (C2H6) have been measured in the 3.3 µm region by high resolution Fourier transform spectroscopy. The ethane samples had temperatures of 203,Expand
Laser Radiation Absorption in the Atmosphere of Titan
• Materials Science
• 2020
General formulas for the extinction coefficient of laser radiation in an atmosphere are derived. They take into account nonlinear effects and significantly differ from the linear optics results.Expand
#### References
SHOWING 1-10 OF 84 REFERENCES
A photochemical model of Titan's atmosphere and ionosphere
Abstract The model for Titan by Krasnopolsky (2014, Icarus 236, 83-91) has been adjusted to Pluto's conditions during the New Horizons flyby. The model includes 419 reactions of 83 neutrals and 33Expand
LARGE ABUNDANCES OF POLYCYCLIC AROMATIC HYDROCARBONS IN TITAN'S UPPER ATMOSPHERE
In this paper, we analyze the strong unidentified emission near 3.28 μm in Titan's upper daytime atmosphere recently discovered by Dinelli et al. We have studied it by using the NASA Ames PAH IRExpand
Three-micron extinction of the Titan haze in the 250–700 km altitude range: Possible evidence of a particle-aging process
• Physics, Geography
• 2015
Context. The chemical nature of the Titan haze is poorly understood. The investigation carried out by the Cassini-Huygens suite of instruments is bringing new insights into this question. Aims. ThisExpand
Retrieval and tentative indentification of the 3 μm spectral feature in Titan's haze
Recently, an unidentified 3.3–3.4 μm feature found in the solar occultation spectra of the atmosphere of Titan observed by Cassini/VIMS was tentatively attributed to the C–H stretching mode ofExpand
DETECTION OF PROPENE IN TITAN'S STRATOSPHERE
The Voyager 1 flyby of Titan in 1980 gave a first glimpse of the chemical complexity of Titan's atmosphere, detecting many new molecules with the infrared interferometer spectrometer (IRIS). TheseExpand
Elusive anion growth in Titan’s atmosphere: Low temperature kinetics of the C3N− + HC3N reaction
Ion chemistry appears to be deeply involved in the formation of heavy molecules in the upper atmosphere of Titan. These large species form the seeds of the organic aerosols responsible for the opaqueExpand
The role of benzene photolysis in Titan haze formation
• Chemistry
• 2014
Abstract During the Cassini mission to the saturnian system, benzene (C 6 H 6 ) was observed throughout Titan’s atmosphere. Although present in trace amounts, benzene has been proposed to be anExpand
The mesosphere and lower thermosphere of Titan revealed by Cassini/UVIS stellar occultations
Abstract Stellar occultations observed by the Cassini/UVIS instrument provide unique data that probe the mesosphere and lower thermosphere of Titan at altitudes between 400 and 1400 km. This regionExpand
Simulation of Titan’s atmospheric photochemistry - Formation of non-volatile residue from polar nitrile ices
• Physics
• 2015
We studied the photochemistry of frozen ice of a polar Titan’s atmospheric molecule cyanodiacetylene (HC5N) to determine the possible contribution of this compound to the lower altitudeExpand
Photochemical activity of Titan's low-altitude condensed haze.
• Medicine
• Nature communications
• 2013
It is demonstrated that indeed tholin-like haze formation could occur on condensed aerosols throughout the atmospheric column of Titan, and C4N2 ices undergo condensed-phase photopolymerization (tholin formation) at wavelengths as long as 355 nm pertinent to solar radiation reaching a large portion of Titan's atmosphere. Expand
|
2021-09-21 11:19:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4363970160484314, "perplexity": 9012.36388162572}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057202.68/warc/CC-MAIN-20210921101319-20210921131319-00013.warc.gz"}
|
https://comojogaruno.com/a-compound-microscope-has-an-objective-of-focal-length-1-cm-and-an-eyepiece-of-focal-length-25-cm
|
A compound microscope has an objective of focal length 1 cm and an eyepiece of focal length 2.5 cm
A compound microscope has an objective of focal length 1.25 cm and eyepiece of focal length 5 cm. A small object is kept at 2.5 cm from the objective. If the final image formed is at infinity, find the distance between the objective and the eyepiece ?
Distance between the objective and the eyepiece, L =
$v_0 + \left| u_e \right|$
To find v0, we have:
$u_0 = - 2 . 5 \text {cm and } f_0 = 1 . 25 cm$
$\text { Now }, - \frac{1}{u_0} + \frac{1}{v_0} = \frac{1}{f_0}$ $or \ v_0 = 2 . 5 cm$
To find ue, we have:
$v_e = \infty\text { and } f_e = 5 cm$
Calculating using the same formula as above, we get:
$u_e = - 5 cm$
∴ L = 2.5 + 5 = 7.5 cm
Concept: Optical Instruments - The Microscope
Is there an error in this question or solution?
Get 7 days Free Trail Now
Get an Expert Advice From Turito.
Solution : Here, f_0 = 1 cm, f_e = 2.5 cm. <br> u_0 = -1.2 cm., m = ? L = ? <br> As (1)/(v_0)-(1)/(u_0)=(1)/(f_0) <br> :. (1)/(v_0)=(1)/(f_0)+(1)/(u_0)=(1)/(1)-(1)/(1.2)=(0.2)/(1.2) <br> v_0 = 1.2//0.2 = 6 cm <br> As m = (v_0)/(|u_0|)(1 + (d)/(f_e)) <br> :. m = (6)/(1.2)(1 + (25)/(2.5)) = 55 <br> L = v_0 + f_e = 6 + 2.5 = 8.5 cm.
|
2022-12-05 01:52:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5455619692802429, "perplexity": 2128.9841830637974}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711001.28/warc/CC-MAIN-20221205000525-20221205030525-00278.warc.gz"}
|
https://www.lokad.com/pinball-loss-function-definition
|
# Pinball loss function (quantile accuracy)
By Joannès Vermorel, February 2012
Evaluating the accuracy of a quantile forecast is a subtle problem. Indeed, contrary to the classic forecasts where the goal is to have the forecast as close as possible from the observed values, the situation is biased (on purpose) when it comes to quantile forecasts. Hence the naive comparison observed vs forecasts is not satisfying. The pinball loss function returns a value that can be interpreted as the accuracy of a quantile forecasting model.
## Formula
Let $\tau$ be the target quantile, $y$ the real value and $z$ the quantile forecast, then $L_\tau$, the pinball loss function, can be written:
$$\begin{eqnarray} L_{\tau}(y,z) & = & (y - z) \tau & \textrm{ if } y \geq z \\\ & = & (z - y) (1 - \tau) & \textrm{ if } z > y \end{eqnarray}$$
The spreadsheet illustrates how to compute the pinball loss function within Microsoft Excel. The actual formula is no more complicated that most accuracy indicators such as a the MAPE.
## Illustration
The pinball loss function (in red) has been named after its shape that looks like the trajectory of a ball on a pinball. The function is always positive and the further away from the target $y$, the larger the value of $L_\tau(y,z)$. The slope is used to reflect the desired imbalance in the quantile forecast.
## Best quantile model has lowest pinball loss
The most important result associated with the pinball loss function is that the lower the pinball loss, the more accurate the quantile forecast.
It can be proved that the function that minimizes the pinball loss delivers the optimal quantile as well. However, the formalism required for the proof goes beyond the scope of this article.
Hence, in order to compare the respective accuracy of two quantile models (say Lokad vs other), it is sufficient to compute the average pinball loss of each model over a number of time-series sufficiently large to make sure that the observed difference is statistically significant. In practice, a few hundred time-series are sufficient to assess which quantile model is the most accurate.
|
2019-07-23 00:40:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.826066792011261, "perplexity": 546.1179305254423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528635.94/warc/CC-MAIN-20190723002417-20190723024417-00239.warc.gz"}
|
https://ask.sagemath.org/users/1026/roman/?sort=recent
|
2020-01-14 08:34:00 +0200 received badge ● Taxonomist 2018-05-21 22:37:25 +0200 received badge ● Notable Question (source) 2018-05-21 22:37:25 +0200 received badge ● Popular Question (source) 2012-07-20 06:07:44 +0200 received badge ● Supporter (source) 2012-07-19 07:33:41 +0200 commented question Homology of chain complexes Considerng the boundary maps: it's d0:ZZ^3->0 d1:ZZ^4->ZZ^3 d2:ZZ^2->ZZ^4 (and d3:0->ZZ^2, which I omitted) Considering the worksheet: I edited it in the main question 2012-07-19 07:33:00 +0200 answered a question Homology of chain complexes Considerng the boundary maps: it's d0:ZZ^3->0 d1:ZZ^4->ZZ^3 d2:ZZ^2->ZZ^4 (and d3:0->ZZ^2, which I omitted) Considering the worksheet: It says: Worksheet is publicly viewable at http://www.sagenb.org/home/pub/4890 - however, I can't access it via the link either. Don't know if there's any way to fix it. Gonna edit it in the main question 2012-07-18 09:31:31 +0200 received badge ● Editor (source) 2012-07-18 09:30:25 +0200 asked a question Homology of chain complexes I've got the following chain complex: 0->ZZ^2->ZZ^4->ZZ^3->0 With the boundarymaps given by d0:(z1,z2,z3) |-> 0 d1:(z1,z2,z3,z4) |-> (-2(z1+z3+z4), 2(z1-z2), z2+z3+z4) d2:(z1,z2)|-> (z1+z2, z1+z2,-z1,-z2) Now I tried to compute the homology groups (e.g. H0 = ker d0 / im d1) using sage. One time manually via taking the quotients of the respective modules, one time using the ChainComplex() module. However, I don't really understand the output using the first method (e.g. what means: "Finitely generated module V/W over Integer Ring with invariants (2, 0)"), and both methods seem to deliver different results... I've defined my boundary maps as matrices: d0 = matrix(ZZ, 1,3,[[0,0,0]]).transpose() d1 = matrix(ZZ, 3,4,[[-2,0,-2,-2],[2,-2,0,0],[0,1,1,1]]).transpose() d2 = matrix(ZZ,4,2,[[1,1],[1,1],[-1,0],[0,-1]]).transpose() Where I've taken the transpose since I'm used to write linear maps as d(x) = Dx, whereas sage seems to use d(x) = xD, where D is the corresponding matrix. Calculating the homology groups via H0 = d0.kernel()/d1.image() H1 = d1.kernel()/d2.image() H2 = d2.kernel() gives the following results: H0: Finitely generated module V/W over Integer Ring with invariants (2, 0) H1: Finitely generated module V/W over Integer Ring with invariants () H2: Free module of degree 2 and rank 0 over Integer Ring whereas ChainComplex([d0,d1,d2]).homology() yields a different strucure. {0: Z, 1: Z, 2: C2, 3: 0} To maximize confusion, calculation by hand gives me H0=C2^2 x ZZ, H1=0, H2=0. I'd might have made some mistakes there, though. So I don't really konw how to interpret the results from Sage.
|
2022-10-02 06:11:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36678290367126465, "perplexity": 5827.532644365159}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00594.warc.gz"}
|
https://forum.poshenloh.com/topic/70/why-is-bd/1
|
# Why is BD?
• This post is deleted!
• [Originally posted in the Discussions]
Why is BD = (2 root 3/2 root 3 + 4)x2?
• It's great that you are trying to understand this step! It's a little confusing, because many steps were done at the same time.
Let's look at a different example of a simpler ratio:
Like in the Day 13: Your Turn question, the purple angles are equal, and the line going through the triangle is an angle bisector. The difference is that the lengths of the two sides which make up the bisected angle are simpler: 1 and 2. These sides are in the ratio 1 : 2, which we can illustrate with a pie chart:
The blue section corresponds to the "1" out of the total "1 + 2", so it takes up 1/3 not 1/2 of the whole pie. We would find that the segment labeled with a "?" takes up 1/3 of the length of x.
Now, back to our problem: Instead of 1 and 2, the lengths of the sides that touch the bisected angle are 2 x root(3) and 4, so the segment that we want, BD, takes up
$$\frac{2\sqrt{3}}{2\sqrt{3} + 4}$$
of the length of the third triangle side. The length of BD is this ratio of the length of the third side, or
$$\frac{2 \sqrt{3}}{2\sqrt{3} + 4} \times \text{ length of third side}$$
which equals:
$$\frac{2\sqrt{3}}{2\sqrt{3} + 4} \times 2$$
is because the total length of the third side (BD + CD) is equal to 2.
I hope this helps! It's a pleasure to help you with this, and congratulations on almost finishing the course. We really hope you have enjoyed learning with this problem-based approach and enjoy tackling challenges more and more.
Happy Learning!
The Daily Challenge Team
|
2021-03-01 20:08:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5966744422912598, "perplexity": 565.7531729772055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362899.14/warc/CC-MAIN-20210301182445-20210301212445-00605.warc.gz"}
|
https://www.zbmath.org/?q=an%3A1012.03035
|
# zbMATH — the first resource for mathematics
Observations on the monoidal t-norm logic. (English) Zbl 1012.03035
The paper is concerned with the properties of some nonclassical logics and their relations, especially to Łukasiewicz logic, Gödel logic, and the monoidal t-norm-based logic. Their relations become stronger if some axioms (the double negation axiom, the idempotence axiom of conjunction and two further specific axioms) are added to them. The main results deal mainly with the monoidal t-norm-based logic and its Archimedian property.
##### MSC:
03B52 Fuzzy logic; logic of vagueness 03B50 Many-valued logic
Full Text:
##### References:
[1] Cignoli, R.; Esteva, F.; Godo, L.; Montagna, F., On a class of left-continuous t-norms, Fuzzy sets and systems, 131, 283-296, (2002) · Zbl 1012.03032 [2] Cintula, P., About axiomatic systems for product logic, Soft comput., 5, 243-244, (2001) · Zbl 0987.03022 [3] F. Esteva, J. Gispert, L. Godo, F. Montagna, On the rational and standard completeness of some axiomatic extensions of the monoidal t-norm logic, Studia Logica, to appear. · Zbl 1011.03015 [4] Esteva, F.; Godo, L., Monoidal t-norms based logictowards a logic for left-continuous t-norms, Fuzzy sets and systems, 124, 271-288, (2001) · Zbl 0994.03017 [5] Flondor, P.; Georgescu, G.; Iorgulescu, A., Pseudo-t-norms and pseudo-BL-algebras, Soft comput., 5, 355-371, (2001) · Zbl 0995.03048 [6] Fodor, J.C., Contrapositive symmetry of fuzzy implications, Fuzzy sets and systems, 69, 141-156, (1995) · Zbl 0845.03007 [7] Fuchs, L., Partially ordered algebraic structures, (1963), Pergamon Press Oxford · Zbl 0137.02001 [8] Gottwald, S.; Jenei, S., A new axiomatization for involutive monoidal t-norm based logic, Fuzzy sets and systems, 124, 303-306, (2001) · Zbl 0998.03024 [9] Hájek, P., Metamathematics of fuzzy logic, (1998), Kluwer Academic Publishers Dordrecht · Zbl 0937.03030 [10] Jenei, S., On Archimedean triangular norms, Fuzzy sets and systems, 99, 179-186, (1998) · Zbl 0938.03083 [11] Jenei, S., New family of triangular norms via contrapositive symmetrization of residuated implications, Fuzzy sets and systems, 110, 117-174, (2000) · Zbl 0941.03059 [12] S. Jenei, Structure of Girard monoids on [0,1], in: Klement et al. (Eds.), Topological and Algebraic Structures in Fuzzy Sets, Kluwer Academic Publishers, Dordrecht, to appear. · Zbl 1036.03035 [13] S. Jenei, F. Montagna, A proof of standard completeness of Esteva and Godo’s logic MTL, Studia Logica, to appear. · Zbl 0997.03027 [14] Klement, E.P.; Mesiar, R.; Pap, E., Triangular norms, (2000), Kluwer Academic Publishers Dordrecht · Zbl 0972.03002 [15] Kolesárová, A., A note on Archimedean triangular norms, Busefal, 80, 57-60, (1999)
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
2021-08-01 05:52:01
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.851537823677063, "perplexity": 10853.550358390134}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154158.4/warc/CC-MAIN-20210801030158-20210801060158-00430.warc.gz"}
|
http://hackage.haskell.org/package/dhall-1.6.0/docs/Dhall-Core.html
|
dhall-1.6.0: A configuration language guaranteed to terminate
Dhall.Core
Description
This module contains the core calculus for the Dhall language.
Dhall is essentially a fork of the morte compiler but with more built-in functionality, better error messages, and Haskell integration
Synopsis
# Syntax
data Const Source #
Constants for a pure type system
The only axiom is:
⊦ Type : Kind
... and the valid rule pairs are:
⊦ Type ↝ Type : Type -- Functions from terms to terms (ordinary functions)
⊦ Kind ↝ Type : Type -- Functions from types to terms (polymorphic functions)
⊦ Kind ↝ Kind : Kind -- Functions from types to types (type constructors)
These are the same rule pairs as System Fω
Note that Dhall does not support functions from terms to types and therefore Dhall is not a dependently typed language
Constructors
Type Kind
Instances
Source # Methods Source # Methodssucc :: Const -> Const #pred :: Const -> Const #toEnum :: Int -> Const #enumFrom :: Const -> [Const] #enumFromThen :: Const -> Const -> [Const] #enumFromTo :: Const -> Const -> [Const] #enumFromThenTo :: Const -> Const -> Const -> [Const] # Source # Methods(==) :: Const -> Const -> Bool #(/=) :: Const -> Const -> Bool # Source # MethodsshowsPrec :: Int -> Const -> ShowS #show :: Const -> String #showList :: [Const] -> ShowS # Source # Methods
data HasHome Source #
Whether or not a path is relative to the user's home directory
Constructors
Home Homeless
Instances
Source # Methods(==) :: HasHome -> HasHome -> Bool #(/=) :: HasHome -> HasHome -> Bool # Source # Methods(<) :: HasHome -> HasHome -> Bool #(<=) :: HasHome -> HasHome -> Bool #(>) :: HasHome -> HasHome -> Bool #(>=) :: HasHome -> HasHome -> Bool # Source # MethodsshowList :: [HasHome] -> ShowS #
data PathType Source #
The type of path to import (i.e. local vs. remote vs. environment)
Constructors
File HasHome FilePath Local path URL Text (Maybe PathType) URL of emote resource and optional headers stored in a path Env Text Environment variable
Instances
Source # Methods Source # Methods(<) :: PathType -> PathType -> Bool #(>) :: PathType -> PathType -> Bool # Source # MethodsshowList :: [PathType] -> ShowS # Source # Methods
data PathMode Source #
How to interpret the path's contents (i.e. as Dhall code or raw text)
Constructors
Code RawText
Instances
Source # Methods Source # Methods(<) :: PathMode -> PathMode -> Bool #(>) :: PathMode -> PathMode -> Bool # Source # MethodsshowList :: [PathMode] -> ShowS #
data Path Source #
Path to an external resource
Constructors
Path FieldspathType :: PathType pathMode :: PathMode
Instances
Source # Methods(==) :: Path -> Path -> Bool #(/=) :: Path -> Path -> Bool # Source # Methodscompare :: Path -> Path -> Ordering #(<) :: Path -> Path -> Bool #(<=) :: Path -> Path -> Bool #(>) :: Path -> Path -> Bool #(>=) :: Path -> Path -> Bool #max :: Path -> Path -> Path #min :: Path -> Path -> Path # Source # MethodsshowsPrec :: Int -> Path -> ShowS #show :: Path -> String #showList :: [Path] -> ShowS # Source # Methods
data Var Source #
Label for a bound variable
The Text field is the variable's name (i.e. "x").
The Int field disambiguates variables with the same name if there are multiple bound variables of the same name in scope. Zero refers to the nearest bound variable and the index increases by one for each bound variable of the same name going outward. The following diagram may help:
┌──refers to──┐
│ │
v │
λ(x : Type) → λ(y : Type) → λ(x : Type) → x@0
┌─────────────────refers to─────────────────┐
│ │
v │
λ(x : Type) → λ(y : Type) → λ(x : Type) → x@1
This Int behaves like a De Bruijn index in the special case where all variables have the same name.
You can optionally omit the index if it is 0:
┌─refers to─┐
│ │
v │
λ(x : Type) → λ(y : Type) → λ(x : Type) → x
Zero indices are omitted when pretty-printing Vars and non-zero indices appear as a numeric suffix.
Constructors
V Text !Integer
Instances
Source # Methods(==) :: Var -> Var -> Bool #(/=) :: Var -> Var -> Bool # Source # MethodsshowsPrec :: Int -> Var -> ShowS #show :: Var -> String #showList :: [Var] -> ShowS # Source # Methods Source # Methodsbuild :: Var -> Builder #
data Expr s a Source #
Syntax tree for expressions
Constructors
Const Const Const c ~ c Var Var Var (V x 0) ~ x Var (V x n) ~ x@n Lam Text (Expr s a) (Expr s a) Lam x A b ~ λ(x : A) -> b Pi Text (Expr s a) (Expr s a) Pi "_" A B ~ A -> B Pi x A B ~ ∀(x : A) -> B App (Expr s a) (Expr s a) App f a ~ f a Let Text (Maybe (Expr s a)) (Expr s a) (Expr s a) Let x Nothing r e ~ let x = r in e Let x (Just t) r e ~ let x : t = r in e Annot (Expr s a) (Expr s a) Annot x t ~ x : t Bool Bool ~ Bool BoolLit Bool BoolLit b ~ b BoolAnd (Expr s a) (Expr s a) BoolAnd x y ~ x && y BoolOr (Expr s a) (Expr s a) BoolOr x y ~ x || y BoolEQ (Expr s a) (Expr s a) BoolEQ x y ~ x == y BoolNE (Expr s a) (Expr s a) BoolNE x y ~ x != y BoolIf (Expr s a) (Expr s a) (Expr s a) BoolIf x y z ~ if x then y else z Natural Natural ~ Natural NaturalLit Natural NaturalLit n ~ +n NaturalFold NaturalFold ~ Natural/fold NaturalBuild NaturalBuild ~ Natural/build NaturalIsZero NaturalIsZero ~ Natural/isZero NaturalEven NaturalEven ~ Natural/even NaturalOdd NaturalOdd ~ Natural/odd NaturalToInteger NaturalToInteger ~ Natural/toInteger NaturalShow NaturalShow ~ Natural/show NaturalPlus (Expr s a) (Expr s a) NaturalPlus x y ~ x + y NaturalTimes (Expr s a) (Expr s a) NaturalTimes x y ~ x * y Integer Integer ~ Integer IntegerLit Integer IntegerLit n ~ n IntegerShow IntegerShow ~ Integer/show Double Double ~ Double DoubleLit Double DoubleLit n ~ n DoubleShow DoubleShow ~ Double/show Text Text ~ Text TextLit Builder TextLit t ~ t TextAppend (Expr s a) (Expr s a) TextAppend x y ~ x ++ y List List ~ List ListLit (Maybe (Expr s a)) (Vector (Expr s a)) ListLit (Just t ) [x, y, z] ~ [x, y, z] : List t ListLit Nothing [x, y, z] ~ [x, y, z] ListAppend (Expr s a) (Expr s a) ListAppend x y ~ x # y ListBuild ListBuild ~ List/build ListFold ListFold ~ List/fold ListLength ListLength ~ List/length ListHead ListHead ~ List/head ListLast ListLast ~ List/last ListIndexed ListIndexed ~ List/indexed ListReverse ListReverse ~ List/reverse Optional Optional ~ Optional OptionalLit (Expr s a) (Vector (Expr s a)) OptionalLit t [e] ~ [e] : Optional t OptionalLit t [] ~ [] : Optional t OptionalFold OptionalFold ~ Optional/fold OptionalBuild OptionalBuild ~ Optional/build Record (Map Text (Expr s a)) Record [(k1, t1), (k2, t2)] ~ { k1 : t1, k2 : t1 } RecordLit (Map Text (Expr s a)) RecordLit [(k1, v1), (k2, v2)] ~ { k1 = v1, k2 = v2 } Union (Map Text (Expr s a)) Union [(k1, t1), (k2, t2)] ~ < k1 : t1 | k2 : t2 > UnionLit Text (Expr s a) (Map Text (Expr s a)) UnionLit (k1, v1) [(k2, t2), (k3, t3)] ~ < k1 = t1 | k2 : t2 | k3 : t3 > Combine (Expr s a) (Expr s a) Combine x y ~ x ∧ y Prefer (Expr s a) (Expr s a) CombineRight x y ~ x ⫽ y Merge (Expr s a) (Expr s a) (Maybe (Expr s a)) Merge x y (Just t ) ~ merge x y : t| > Merge x y Nothing ~ merge x y Field (Expr s a) Text Field e x ~ e.x Note s (Expr s a) Note s x ~ e Embed a Embed path ~ path
Instances
Source # Methodsbimap :: (a -> b) -> (c -> d) -> Expr a c -> Expr b d #first :: (a -> b) -> Expr a c -> Expr b c #second :: (b -> c) -> Expr a b -> Expr a c # Monad (Expr s) Source # Methods(>>=) :: Expr s a -> (a -> Expr s b) -> Expr s b #(>>) :: Expr s a -> Expr s b -> Expr s b #return :: a -> Expr s a #fail :: String -> Expr s a # Functor (Expr s) Source # Methodsfmap :: (a -> b) -> Expr s a -> Expr s b #(<\$) :: a -> Expr s b -> Expr s a # Source # Methodspure :: a -> Expr s a #(<*>) :: Expr s (a -> b) -> Expr s a -> Expr s b #(*>) :: Expr s a -> Expr s b -> Expr s b #(<*) :: Expr s a -> Expr s b -> Expr s a # Foldable (Expr s) Source # Methodsfold :: Monoid m => Expr s m -> m #foldMap :: Monoid m => (a -> m) -> Expr s a -> m #foldr :: (a -> b -> b) -> b -> Expr s a -> b #foldr' :: (a -> b -> b) -> b -> Expr s a -> b #foldl :: (b -> a -> b) -> b -> Expr s a -> b #foldl' :: (b -> a -> b) -> b -> Expr s a -> b #foldr1 :: (a -> a -> a) -> Expr s a -> a #foldl1 :: (a -> a -> a) -> Expr s a -> a #toList :: Expr s a -> [a] #null :: Expr s a -> Bool #length :: Expr s a -> Int #elem :: Eq a => a -> Expr s a -> Bool #maximum :: Ord a => Expr s a -> a #minimum :: Ord a => Expr s a -> a #sum :: Num a => Expr s a -> a #product :: Num a => Expr s a -> a # Source # Methodstraverse :: Applicative f => (a -> f b) -> Expr s a -> f (Expr s b) #sequenceA :: Applicative f => Expr s (f a) -> f (Expr s a) #mapM :: Monad m => (a -> m b) -> Expr s a -> m (Expr s b) #sequence :: Monad m => Expr s (m a) -> m (Expr s a) # (Eq a, Eq s) => Eq (Expr s a) Source # Methods(==) :: Expr s a -> Expr s a -> Bool #(/=) :: Expr s a -> Expr s a -> Bool # (Show a, Show s) => Show (Expr s a) Source # MethodsshowsPrec :: Int -> Expr s a -> ShowS #show :: Expr s a -> String #showList :: [Expr s a] -> ShowS # IsString (Expr s a) Source # MethodsfromString :: String -> Expr s a # Buildable a => Buildable (Expr s a) Source # Generates a syntactically valid Dhall program Methodsbuild :: Expr s a -> Builder #
# Normalization
normalize :: Expr s a -> Expr t a Source #
Reduce an expression to its normal form, performing beta reduction
normalize does not type-check the expression. You may want to type-check expressions before normalizing them since normalization can convert an ill-typed expression into a well-typed expression.
However, normalize will not fail if the expression is ill-typed and will leave ill-typed sub-expressions unevaluated.
normalizeWith :: Normalizer a -> Expr s a -> Expr t a Source #
Reduce an expression to its normal form, performing beta reduction and applying any custom definitions.
normalizeWith is designed to be used with function typeWith. The typeWith function allows typing of Dhall functions in a custom typing context whereas normalizeWith allows evaluating Dhall expressions in a custom context.
To be more precise normalizeWith applies the given normalizer when it finds an application term that it cannot reduce by other means.
Note that the context used in normalization will determine the properties of normalization. That is, if the functions in custom context are not total then the Dhall language, evaluated with those functions is not total either.
type Normalizer a = forall s. Expr s a -> Maybe (Expr s a) Source #
Use this to wrap you embedded functions (see normalizeWith) to make them polymorphic enough to be used.
subst :: Var -> Expr s a -> Expr t a -> Expr s a Source #
Substitute all occurrences of a variable with an expression
subst x C B ~ B[x := C]
shift :: Integer -> Var -> Expr s a -> Expr t a Source #
shift is used by both normalization and type-checking to avoid variable capture by shifting variable indices
For example, suppose that you were to normalize the following expression:
λ(a : Type) → λ(x : a) → (λ(y : a) → λ(x : a) → y) x
If you were to substitute y with x without shifting any variable indices, then you would get the following incorrect result:
λ(a : Type) → λ(x : a) → λ(x : a) → x -- Incorrect normalized form
In order to substitute x in place of y we need to shift x by 1 in order to avoid being misinterpreted as the x bound by the innermost lambda. If we perform that shift then we get the correct result:
λ(a : Type) → λ(x : a) → λ(x : a) → x@1
As a more worked example, suppose that you were to normalize the following expression:
λ(a : Type)
→ λ(f : a → a → a)
→ λ(x : a)
→ λ(x : a)
→ (λ(x : a) → f x x@1) x@1
The correct normalized result would be:
λ(a : Type)
→ λ(f : a → a → a)
→ λ(x : a)
→ λ(x : a)
→ f x@1 x
The above example illustrates how we need to both increase and decrease variable indices as part of substitution:
• We need to increase the index of the outer x@1 to x@2 before we substitute it into the body of the innermost lambda expression in order to avoid variable capture. This substitution changes the body of the lambda expression to (f x@2 x@1)
• We then remove the innermost lambda and therefore decrease the indices of both xs in (f x@2 x@1) to (f x@1 x) in order to reflect that one less x variable is now bound within that scope
Formally, (shift d (V x n) e) modifies the expression e by adding d to the indices of all variables named x whose indices are greater than (n + m), where m is the number of bound variables of the same name within that scope
In practice, d is always 1 or -1 because we either:
• increment variables by 1 to avoid variable capture during substitution
• decrement variables by 1 when deleting lambdas after substitution
n starts off at 0 when substitution begins and increments every time we descend into a lambda or let expression that binds a variable of the same name in order to avoid shifting the bound variables by mistake.
isNormalized :: Expr s a -> Bool Source #
Quickly check if an expression is in normal form
isNormalizedWith :: (Eq s, Eq a) => Normalizer a -> Expr s a -> Bool Source #
Check if an expression is in a normal form given a context of evaluation. Unlike isNormalized, this will fully normalize and traverse through the expression.
It is much more efficient to use isNormalized.
# Pretty-printing
pretty :: Buildable a => a -> Text Source #
Pretty-print a value
# Miscellaneous
internalError :: Text -> forall b. b Source #
Utility function used to throw internal errors that should never happen (in theory) but that are not enforced by the type system
The set of reserved identifiers for the Dhall language
|
2019-06-26 11:31:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21809713542461395, "perplexity": 13655.812049513923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000266.39/warc/CC-MAIN-20190626094111-20190626120111-00329.warc.gz"}
|
https://bout-dev.readthedocs.io/en/latest/user_docs/time_integration.html
|
# Time integration¶
## Options¶
BOUT++ can be compiled with several different time-integration solvers , and at minimum should have Runge-Kutta (RK4) and PVODE (BDF/Adams) solvers available.
The solver library used is set using the solver:type option, so either in BOUT.inp:
[solver]
type = rk4 # Set the solver to use
or on the command line by adding solver:type=pvode for example:
mpirun -np 4 ./2fluid solver:type=rk4
NB: Make sure there are no spaces around the “=” sign: solver:type =pvode won’t work (probably). Table Table 10 gives a list of time integration solvers, along with any compile-time options needed to make the solver available.
Table 10 Available time integration solvers
Name
Description
Compile options
euler
Euler explicit method (example only)
Always available
rk4
Runge-Kutta 4th-order explicit method
Always available
rkgeneric
Generic Runge Kutta explicit methods
Always available
rk3ssp
3rd-order Strong Stability Preserving
Always available
splitrk
Split RK3-SSP and RK-Legendre
Always available
pvode
1998 PVODE with BDF method
Always available
cvode
SUNDIALS CVODE. BDF and Adams methods
–with-cvode
ida
SUNDIALS IDA. DAE solver
–with-ida
arkode
SUNDIALS ARKODE IMEX solver
–with-arkode
petsc
PETSc TS methods
–with-petsc
imexbdf2
IMEX-BDF2 scheme
–with-petsc
beuler / snes
Backward Euler with SNES solvers
–with-petsc
Each solver can have its own settings which work in slightly different ways, but some common settings and which solvers they are used in are given in table Table 11.
Table 11 Time integration solver options
Option
Description
Solvers used
atol
Absolute tolerance
rk4, pvode, cvode, ida, imexbdf2, beuler
rtol
Relative tolerance
rk4, pvode, cvode, ida, imexbdf2, beuler
mxstep
Maximum internal steps per output step
rk4, imexbdf2
max_timestep
Maximum timestep
rk4, cvode
timestep
Starting timestep
rk4, euler, imexbdf2, beuler
rk4, imexbdf2
use_precon
Use a preconditioner? (Y/N)
pvode, cvode, ida, imexbdf2
mudq, mldq
BBD preconditioner settings
pvode, cvode, ida
mukeep, mlkeep
maxl
Maximum number of linear iterations
cvode, imexbdf2
max_nonlinear_iterations
Maximum number of nonlinear iterations
cvode, imexbdf2, beuler
use_jacobian
Use user-supplied Jacobian? (Y/N)
cvode
Use Adams-Moulton method rather than BDF
cvode
diagnose
cvode, imexbdf2, beuler
The most commonly changed options are the absolute and relative solver tolerances, atol and rtol which should be varied to check convergence.
## CVODE¶
The most commonly used time integration solver is CVODE, or its older version PVODE. CVODE has several advantages over PVODE, including better support for preconditioning and diagnostics.
Enabling diagnostics output using solver:diagnose=true will print a set of outputs for each timestep similar to:
CVODE: nsteps 51, nfevals 69, nniters 65, npevals 126, nliters 79
-> Newton iterations per step: 1.274510e+00
-> Linear iterations per Newton iteration: 1.215385e+00
-> Preconditioner evaluations per Newton: 1.938462e+00
-> Last step size: 1.026792e+00, order: 5
-> Local error fails: 0, nonlinear convergence fails: 0
-> Stability limit order reductions: 0
1.000e+01 149 2.07e+01 78.3 0.0 10.0 0.9 10.8
When diagnosing slow performance, key quantities to look for are nonlinear convergence failures, and the number of linear iterations per Newton iteration. A large number of failures, and close to 5 linear iterations per Newton iteration are a sign that the linear solver is not converging quickly enough, and hitting the default limit of 5 iterations. This limit can be modified using the solver:maxl setting. Giving it a large value e.g. solver:maxl=1000 will show how many iterations are needed to solve the linear system. If the number of iterations becomes large, this may be an indication that the system is poorly conditioned, and a preconditioner might help improve performance. See Preconditioning.
CVODE can set constraints to keep some quantities positive, non-negative, negative or non-positive. These constraints can be activated by setting the option solver:apply_positivity_constraints=true, and then in the section for a certain variable (e.g. [n]), setting the option positivity_constraint to one of positive, non_negative, negative, or non_positive.
## IMEX-BDF2¶
This is an IMplicit-EXplicit time integration solver, which allows the evolving function to be split into two parts: one which has relatively long timescales and can be integrated using explicit methods, and a part which has short timescales and must be integrated implicitly. The order of accuracy is variable (up to 4th-order currently), and an adaptive timestep can be used.
To use the IMEX-BDF2 solver, set the solver type to imexbdf2, e.g. on the command-line add solver:type=imexbdf2 or in the options file:
[solver]
type = imexbdf2
The order of the method is set to 2 by default, but can be increased up to a maximum of 4:
[solver]
type = imexbdf2
maxOrder = 3
This is a multistep method, so the state from previous steps are used to construct the next one. This means that at the start, when there are no previous steps, the order is limited to 1 (backwards Euler method). Similarly, the second step is limited to order 2, and so on. At the moment the order is not adapted, so just increases until reaching maxOrder.
At each step the explicit (non-stiff) part of the function is called, and combined with previous timestep values. The implicit part of the function is then solved using PETSc’s SNES, which consists of a nonlinear solver (usually modified Newton iteration), each iteration of which requires a linear solve (usually GMRES). Settings which affect this implicit part of the solve are:
Option
Default
Description
atol
1e-16
Absolute tolerance on SNES solver
rtol
1e-10
Relative tolerance on SNES solver
max_nonlinear_it
5
Maximum number of nonlinear iterations If adaptive timestepping is used then failure will cause timestep reduction
maxl
20
Maximum number of linear iterations If adaptive, failure will cause timestep reduction
predictor
1
Starting guess for the nonlinear solve Specifies order of extrapolating polynomial
use_precon
false
Use user-supplied preconditioner?
matrix_free
true
Use Jacobian-free methods? If false, calculates the Jacobian matrix using finite difference
use_coloring
true
If not matrix free, use coloring to speed up calculation of the Jacobian
Note that the SNES tolerances atol and rtol are set very conservatively by default. More reasonable values might be 1e-10 and 1e-5, but this must be explicitly asked for in the input options.
The predictor extrapolates from previous timesteps to get a starting estimate for the value at the next timestep. This estimate is then used to initialise the SNES nonlinear solve. The value is the order of the extrapolating polynomial, so 1 (the default) is a linear extrapolation from the last two steps, 0 is the same as the last step. A value of -1 uses the explicit update to the state as the starting guess, i.e. assuming that the implicit part of the problem is small. This is usually not a good guess.
To diagnose what is happening in the time integration, for example to see why it is failing to converge or why timesteps are small, there are two settings which can be set to true to enable:
• diagnose outputs a summary at each output time, similar to CVODE. This contains information like the last timestep, average number of iterations and number of convergence failures.
• verbose prints information at every internal step, with more information on the values used to modify timesteps, and the reasons for solver failures.
By default adaptive timestepping is turned on, using several factors to modify the timestep:
1. If the nonlinear solver (SNES) fails to converge, either because it diverges or exceeds the iteration limits max_nonlinear_its or maxl. Reduces the timestep by 2 and tries again, giving up after 10 failures.
2. Every nadapt internal timesteps (default 4), the error is checked by taking the timestep twice: Once with the current order of accuracy, and once with one order of accuracy lower. The difference between the solutions is then used to estimate the timestep required to achieve the required tolerances. If this is much larger or smaller than the current timestep, then the timestep is modified.
3. The timestep is kept within user-specified maximum and minimum ranges.
The options which control this behaviour are:
Option
Default
Description
true
timestep
output timestep
If adaptive sets the starting timestep. If not adaptive, timestep fixed at this value
dtMin
1e-10
Minimum timestep
dtMax
output timestep
Maximum timestep
mxstep
1e5
Maximum number of internal steps between outputs
4
How often is error checked and timestep adjusted?
1e-3
Target relative tolerance for adaptive timestep
scaleCushDown
1.0
Timestep scale factor below which the timestep is modified. By default the timestep is always reduced
scaleCushUp
1.5
Minimum timestep scale factor based on adaptRtol above which the timestep will be modified. Currently the timestep increase is limited to 25%
## Split-RK¶
The splitrk solver type uses Strang splitting to combine two explicit Runge Kutta schemes:
1. 2nd order Runge-Kutta-Legendre method for the diffusion (parabolic) part. These schemes use multiple stages to increase stability, rather than accuracy; this is always 2nd order, but the stable timestep for diffusion problems increases as the square of the number of stages. The number of stages is an input option, and can be arbitrarily large.
Each timestep consists of
1. A half timestep of the diffusion part
2. A full timestep of the advection part
3. A half timestep of the diffusion part
Options to control the behaviour of the solver are:
Option
Default
Description
timestep
output timestep
If adaptive sets the starting timestep. If not adaptive, timestep fixed at this value
nstages
10
Number of stages in RKL step. Must be > 1
diagnose
false
Print diagnostic information
Option
Default
Description
true
atol
1e-10
Absolute tolerance
rtol
1e-5
Relative tolerance
max_timestep
output timestep
Maximum internal timestep
max_timestep_change
2
Maximum factor by which the timestep by which the time step can be changed at each step
mxstep
1000
Maximum number of internal steps before output
1
Number of internal steps between tolerance checks
## Backward Euler - SNES¶
The beuler or snes solver type (either name can be used) is intended mainly for solving steady-state problems, so integrates in time using a stable but low accuracy method (Backward Euler). It uses PETSc’s SNES solvers to solve the nonlinear system at each timestep, and adjusts the internal timestep to keep the number of SNES iterations within a given range.
Option
Default
Description
snes_type
newtonls
PETSc SNES nonlinear solver (try anderson, qn)
ksp_type
gmres
PETSc KSP linear solver
pc_type
ilu / bjacobi
PETSc PC preconditioner
max_nonlinear_iterations
20
If exceeded, solve restarts with timestep / 2
maxl
20
Maximum number of linear iterations
atol
1e-12
Absolute tolerance of SNES solve
rtol
1e-5
Relative tolerance of SNES solve
upper_its
80% max
If exceeded, next timestep reduced by 10%
lower_its
50% max
If under this, next timestep increased by 10%
timestep
1
Initial timestep
predictor
true
Use linear predictor?
matrix_free
false
Use matrix free Jacobian-vector product?
use_coloring
true
If matrix_free=false, use coloring to speed up calculation of the Jacobian elements.
lag_jacobian
50
Re-use the Jacobian for successive inner solves
kspsetinitialguessnonzero
false
If true, Use previous solution as KSP initial
use_precon
false
Use user-supplied preconditioner? If false, the default PETSc preconditioner is used
diagnose
false
Print diagnostic information every iteration
The predictor is linear extrapolation from the last two timesteps. It seems to be effective, but can be disabled by setting predictor = false.
The default newtonls SNES type can be very effective if combined with Jacobian coloring: The coloring enables the Jacobian to be calculated relatively efficiently; once a Jacobian matrix has been calculated, effective preconditioners can be used to speed up convergence. It is important to note that the coloring assumes a star stencil and so won’t work for every problem: It assumes that each evolving quantity is coupled to all other evolving quantities on the same grid cell, and on all the neighbouring grid cells. If the RHS function includes Fourier transforms, or matrix inversions (e.g. potential solves) then these will introduce longer-range coupling and the Jacobian calculation will give spurious results. Generally the method will then fail to converge. Two solutions are to a) switch to matrix-free (matrix_free=true), or b) solve the matrix inversion as a constraint.
The SNES type can be set through PETSc command-line options, or in the BOUT++ options as setting snes_type. Good choices for unpreconditioned problems where the Jacobian is not available (matrix_free=true) seem to be anderson and qn (quasinewton).
Preconditioner types:
1. On one processor the ILU solver is typically very effective, and is usually the default
2. The Hypre package can be installed with PETSc and used as a preconditioner. One of the options available in Hypre is the Euler parallel ILU solver. Enable with command-line args -pc_type hypre -pc_hypre_type euclid -pc_hypre_euclid_levels k where k is the level (1-8 typically).
## ODE integration¶
The Solver class can be used to solve systems of ODEs inside a physics model: Multiple Solver objects can exist besides the main one used for time integration. Example code is in examples/test-integrate.
To use this feature, systems of ODEs must be represented by a class derived from PhysicsModel.
class MyFunction : public PhysicsModel {
public:
int init(bool restarting) {
// Initialise ODE
// Add variables to solver as usual
...
}
int rhs(BoutReal time) {
// Specify derivatives of fields as usual
ddt(result) = ...
}
private:
Field3D result;
};
To solve this ODE, create a new Solver object:
Solver* ode = Solver::create(Options::getRoot()->getSection("ode"));
This will look in the section [ode] in the options file. Important: To prevent this solver overwriting the main restart files with its own restart files, either disable restart files:
[ode]
enablerestart = false
or specify a different directory to put the restart files:
[ode]
restartdir = ode # Restart files ode/BOUT.restart.0.nc, ...
Create a model object, and pass it to the solver:
MyFunction* model = new MyFunction();
ode->setModel(model);
Finally tell the solver to perform the integration:
ode->solve(5, 0.1);
The first argument is the number of steps to take, and the second is the size of each step. These can also be specified in the options, so calling
ode->solve();
will cause ode to look in the input for nout and timestep options:
[ode]
nout = 5
timestep = 0.1
Finally, delete the model and solver when finished:
delete model;
delete solver;
Note: If an ODE needs to be solved multiple times, at the moment it is recommended to delete the solver, and create a new one each time.
## Preconditioning¶
At every time step, an implicit scheme such as BDF has to solve a non-linear problem to find the next solution. This is usually done using Newton’s method, each step of which involves solving a linear (matrix) problem. For $$N$$ evolving variables is an $$N\times N$$ matrix and so can be very large. By default matrix-free methods are used, in which the Jacobian $$\mathcal{J}$$ is approximated by finite differences (see next subsection), and so this matrix never needs to be explicitly calculated. Finding a solution to this matrix can still be difficult, particularly as $$\delta t$$ gets large compared with some time-scales in the system (i.e. a stiff problem).
A preconditioner is a function which quickly finds an approximate solution to this matrix, speeding up convergence to a solution. A preconditioner does not need to include all the terms in the problem being solved, as the preconditioner only affects the convergence rate and not the final solution. A good preconditioner can therefore concentrate on solving the parts of the problem with the fastest time-scales.
A simple example 1 is a coupled wave equation, solved in the test-precon example code:
$\frac{\partial u}{\partial t} = \partial_{||}v \qquad \frac{\partial v}{\partial t} = \partial_{||} u$
First, calculate the Jacobian of this set of equations by taking partial derivatives of the time-derivatives with respect to each of the evolving variables
$\begin{split}\mathcal{J} = (\begin{array}{cc} \frac{\partial}{\partial u}\frac{\partial u}{\partial t} & \frac{\partial}{\partial v}\frac{\partial u}{\partial t}\\ \frac{\partial}{\partial u}\frac{\partial v}{\partial t} & \frac{\partial}{\partial v}\frac{\partial v}{\partial t} \end{array} ) = (\begin{array}{cc} 0 & \partial_{||} \\ \partial_{||} & 0 \end{array} )\end{split}$
In this case $$\frac{\partial u}{\partial t}$$ doesn’t depend on $$u$$ nor $$\frac{\partial v}{\partial t}$$ on $$v$$, so the diagonal is empty. Since the equations are linear, the Jacobian doesn’t depend on $$u$$ or $$v$$ and so
$\begin{split}\frac{\partial}{\partial t}(\begin{array}{c} u \\ v \end{array}) = \mathcal{J} (\begin{array}{c} u \\ v \end{array} )\end{split}$
In general for non-linear functions $$\mathcal{J}$$ gives the change in time-derivatives in response to changes in the state variables $$u$$ and $$v$$.
In implicit time stepping, the preconditioner needs to solve an equation
$\mathcal{I} - \gamma \mathcal{J}$
where $$\mathcal{I}$$ is the identity matrix, and $$\gamma$$ depends on the time step and method (e.g. $$\gamma = \delta t$$ for backwards Euler method). For the simple wave equation problem, this is
$\begin{split}\mathcal{I} - \gamma \mathcal{J} = (\begin{array}{cc} 1 & -\gamma\partial_{||} \\ -\gamma\partial_{||} & 1 \end{array} )\end{split}$
This matrix can be block inverted using Schur factorisation 2
$\begin{split}(\begin{array}{cc} {\mathbf{E}} & {\mathbf{U}} \\ {\mathbf{L}} & {\mathbf{D}} \end{array})^{-1} = (\begin{array}{cc} {\mathbf{I}} & -{\mathbf{E}}^{-1}{\mathbf{U}} \\ 0 & {\mathbf{I}} \end{array} )(\begin{array}{cc} {\mathbf{E}}^{-1} & 0 \\ 0 & {\mathbf{P}}_{Schur}^{-1} \end{array} )(\begin{array}{cc} {\mathbf{I}} & 0 \\ -{\mathbf{L}}{\mathbf{E}}^{-1} & {\mathbf{I}} \end{array} )\end{split}$
where $${\mathbf{P}}_{Schur} = {\mathbf{D}} - {\mathbf{L}}{\mathbf{E}}^{-1}{\mathbf{U}}$$ Using this, the wave problem becomes:
(2)$\begin{split}(\begin{array}{cc} 1 & -\gamma\partial_{||} \\ -\gamma\partial_{||} & 1 \end{array})^{-1} = (\begin{array}{cc} 1 & \gamma\partial_{||}\\ 0 & 1 \end{array} )(\begin{array}{cc} 1 & 0 \\ 0 & (1 -\gamma^2\partial^2_{||})^{-1} \end{array} )(\begin{array}{cc} 1 & 0\\ \gamma\partial_{||} & 1 \end{array} )\end{split}$
The preconditioner is implemented by defining a function of the form
int precon(BoutReal t, BoutReal gamma, BoutReal delta) {
...
}
which takes as input the current time, the $$\gamma$$ factor appearing above, and $$\delta$$ which is only important for constrained problems (not discussed here… yet). The current state of the system is stored in the state variables (here u and v ), whilst the vector to be preconditioned is stored in the time derivatives (here ddt(u) and ddt(v) ). At the end of the preconditioner the result should be in the time derivatives. A preconditioner which is just the identity matrix and so does nothing is therefore:
int precon(BoutReal t, BoutReal gamma, BoutReal delta) {
}
To implement the preconditioner in equation (2), first apply the rightmost matrix to the given vector:
$\begin{split}(\begin{array}{c} \texttt{ddt(u)} \\ \texttt{ddt(v)} \end{array} ) = (\begin{array}{cc} 1 & 0 \\ \gamma\partial_{||} & 1 \end{array} )(\begin{array}{c} \texttt{ddt(u)} \\ \texttt{ddt(v)} \end{array} )\end{split}$
int precon(BoutReal t, BoutReal gamma, BoutReal delta) {
mesh->communicate(ddt(u));
//ddt(u) = ddt(u);
note that since the preconditioner is linear, it doesn’t depend on $$u$$ or $$v$$. As in the RHS function, since we are taking a differential of ddt(u), it first needs to be communicated to exchange guard cell values.
The second matrix
$\begin{split}(\begin{array}{c} \texttt{ddt(u)} \\ \texttt{ddt(v)} \end{array} ) \rightarrow (\begin{array}{cc} 1 & 0 \\ 0 & (1 - \gamma^2\partial^2_{||})^{-1} \end{array} )(\begin{array}{c} \texttt{ddt(u)} \\ \texttt{ddt(v)} \end{array} )\end{split}$
doesn’t alter $$u$$, but solves a parabolic equation in the parallel direction. There is a solver class to do this called InvertPar which solves the equation $$(A + B\partial_{||}^2)x = b$$ where $$A$$ and $$B$$ are Field2D or constants 3. In PhysicsModel::init() we create one of these solvers:
InvertPar *inv; // Parallel inversion class
int init(bool restarting) {
...
inv = InvertPar::Create();
inv->setCoefA(1.0);
...
}
In the preconditioner we then use this solver to update $$v$$:
inv->setCoefB(-SQ(gamma));
ddt(v) = inv->solve(ddt(v));
which solves $$ddt(v) \rightarrow (1 - \gamma^2\partial_{||}^2)^{-1} ddt(v)$$. The final matrix just updates $$u$$ using this new solution for $$v$$
$\begin{split}(\begin{array}{c} \texttt{ddt(u)} \\ \texttt{ddt(v)} \end{array} ) \rightarrow (\begin{array}{cc} 1 & \gamma\partial_{||} \\ 0 & 1 \end{array} )(\begin{array}{c} \texttt{ddt(u)} \\ \texttt{ddt(v)} \end{array} )\end{split}$
mesh->communicate(ddt(v));
Finally, boundary conditions need to be imposed, which should be consistent with the conditions used in the RHS:
ddt(u).applyBoundary("dirichlet");
ddt(v).applyBoundary("dirichlet");
To use the preconditioner, pass the function to the solver in PhysicsModel::init():
int init(bool restarting) {
solver->setPrecon(precon);
...
}
then in the BOUT.inp settings file switch on the preconditioner
[solver]
type = cvode # Need CVODE or PETSc
use_precon = true # Use preconditioner
rightprec = false # Use Right preconditioner (default left)
## DAE constraint equations¶
Using the IDA or IMEX-BDF2 solvers, BOUT++ can solve Differential Algebraic Equations (DAEs), in which algebraic constraints are used for some variables. Examples of how this is used are in the examples/constraints subdirectory.
First the variable to be constrained is added to the solver, in a similar way to time integrated variables. For example
Field3D phi;
...
solver->constraint(phi, ddt(phi), "phi");
The first argument is the variable to be solved for (constrained). The second argument is the field to contain the residual (error). In this example the time derivative field ddt(phi) is used, but it could be another Field3D variable. The solver will attempt to find a solution to the first argument (phi here) such that the second argument (ddt(phi)) is zero to within tolerances.
In the RHS function the residual should be calculated. In this example (examples/constraints/drift-wave-constraint) we have:
ddt(phi) = Delp2(phi) - Vort;
so the time integration solver includes the algebraic constraint Delp2(phi) = Vort i.e. ($$\nabla_\perp^2\phi = \omega$$).
## IMEX-BDF2¶
This is an implicit-explicit multistep method, which uses the PETSc library for the SNES nonlinear solver. To use this solver, BOUT++ must have been configured with PETSc support, and the solver type set to imexbdf2
[solver]
type = imexbdf2
For examples of using IMEX-BDF2, see the examples/IMEX/ subdirectory, in particular the diffusion-nl, drift-wave and drift-wave-constrain examples.
The time step is currently fixed (not adaptive), and defaults to the output timestep. To set a smaller internal timestep, the solver:timestep option can be set. If the timestep is too large, then the explicit part of the problem may become unstable, or the implicit part may fail to converge.
The implicit part of the problem can be solved matrix-free, in which case the Jacobian-vector product is approximated using finite differences. This is currently the default, and can be set on the command-line using the options:
solver:matrix_free=true -snes_mf
Note the -snes_mf flag which is passed to PETSc. When using a matrix free solver, the Jacobian is not calculated and so the amount of memory used is minimal. However, since the Jacobian is not known, many standard preconditioning methods cannot be used, and so in many cases a custom preconditioner is needed to obtain good convergence.
An experimental feature uses PETSc’s ability to calculate the Jacobian using finite differences. This can then speed up the linear solve, and allows more options for preconditioning. To enable this option:
solver:matrix_free=false
There are two ways to calculate the Jacobian: A brute force method which is set up by this call to PETSc which is generally very slow, and a “coloring” scheme which can be quite fast and is the default. Coloring uses knowledge of where the non-zero values are in the Jacobian, to work out which rows can be calculated simultaneously. The coloring code in IMEX-BDF2 currently assumes that every field is coupled to every other field in a star pattern: one cell on each side, a 7 point stencil for 3D fields. If this is not the case for your problem, then the solver may not converge.
The brute force method can be useful for comparing the Jacobian structure, so to turn off coloring:
solver:use_coloring=false
Using MatView calls, or the -mat_view PETSc options, the non-zero structure of the Jacobian can be plotted or printed.
## Monitoring the simulation output¶
Monitoring of the solution can be done at two levels: output monitoring, and timestep monitoring. Output monitoring occurs only when data is written to file, whereas timestep monitoring is every timestep and so (usually) much more frequent. Examples of both are in examples/monitor and examples/monitor-newapi.
Output monitoring: At every output timestep the solver calls a monitor method of the BoutMonitor class, which writes the output dump file, calculates and prints timing information and estimated time remaining. If you want to run additional code or write data to a different file, you can implement the outputMonitor method of PhysicsModel:
int outputMonitor(BoutReal simtime, int iter, int nout)
The first input is the current simulation time, the second is the output number, and the last is the total number of outputs requested. This method is called by a monitor object PhysicsModel::modelMonitor, which writes the restart files at the same time. You can change the frequency at which the monitor is called by calling, in PhysicsModel::init:
modelMonitor.setTimestep(new_timestep)
where new_timestep is a BoutReal which is either timestep*n or timestep/n for an integer n. Note that this will change the frequency of writing restarts as well as of calling outputMonitor().
You can also add custom monitor object(s) for more flexibility.
You can call your output monitor class whatever you like, but it must be a subclass of Monitor and provide the method call which takes 4 inputs and returns an int:
class MyOutputMonitor : public Monitor {
int call(Solver *solver, BoutReal simtime, int iter, int NOUT) {
...
}
};
The first input is the solver object, the second is the current simulation time, the third is the output number, and the last is the total number of outputs requested. To get the solver to call this function every output time, define a MyOutputMonitor object as a member of your PhysicsModel:
MyOutputMonitor my_output_monitor;
and put in your PhysicsModel::init() code:
solver->addMonitor(&my_output_monitor);
Note that the solver only stores a pointer to the Monitor, so you must make sure the object is persistent, e.g. a member of a PhysicsModel class, not a local variable in a constructor. If you want to later remove a monitor, you can do so with:
solver->removeMonitor(&my_output_monitor);
A simple example using this monitor is:
class MyOutputMonitor: public Monitor{
public:
MyOutputMonitor(BoutReal timestep=-1):Monitor(timestep){};
int call(Solver *solver, BoutReal simtime, int iter, int NOUT) override;
};
int MyOutputMonitor::call(Solver *solver, BoutReal simtime, int iter, int NOUT) {
output.write("Output monitor, time = %e, step %d of %d\n",
simtime, iter, NOUT);
return 0;
}
MyOutputMonitor my_monitor;
int init(bool restarting) {
}
See the monitor example (examples/monitor) for full code.
Timestep monitoring: This uses functions instead of objects. First define a monitor function:
int my_timestep_monitor(Solver *solver, BoutReal simtime, BoutReal lastdt) {
...
}
where simtime will again contain the current simulation time, and lastdt the last timestep taken. Add this function to the solver:
solver->addTimestepMonitor(my_timestep_monitor);
Timestep monitoring is disabled by default, unlike output monitoring. To enable timestep monitoring, set in the options file (BOUT.inp):
[solver]
monitor_timestep = true
or put on the command line solver:monitor_timestep=true . When this is enabled, it will change how solvers like CVODE and PVODE (the default solvers) are used. Rather than being run in NORMAL mode, they will instead be run in SINGLE_STEP mode (see the SUNDIALS notes here:https://computation.llnl.gov/casc/sundials/support/notes.html). This may in some cases be less efficient.
## Implementation internals¶
The solver is the interface between BOUT++ and the time-integration code such as SUNDIALS. All solvers implement the Solver class interface (see src/solver/generic_solver.hxx).
First all the fields which are to be evolved need to be added to the solver. These are always done in pairs, the first specifying the field, and the second the time-derivative:
void add(Field2D &v, Field2D &F_v, const char* name);
This is normally called in the PhysicsModel::init() initialisation routine. Some solvers (e.g. IDA) can support constraints, which need to be added in the same way as evolving fields:
bool constraints();
void constraint(Field2D &v, Field2D &C_v, const char* name);
The constraints() function tests whether or not the current solver supports constraints. The format of constraint(...) is the same as add, except that now the solver will attempt to make C_v zero. If constraint is called when the solver doesn’t support them then an error should occur.
If the physics model implements a preconditioner or Jacobian-vector multiplication routine, these can be passed to the solver during initialisation:
typedef int (*PhysicsPrecon)(BoutReal t, BoutReal gamma, BoutReal delta);
void setPrecon(PhysicsPrecon f); // Specify a preconditioner
typedef int (*Jacobian)(BoutReal t);
void setJacobian(Jacobian j); // Specify a Jacobian
If the solver doesn’t support these functions then the calls will just be ignored.
Once the problem to be solved has been specified, the solver can be initialised using:
int init();
which returns an error code (0 on success). This is currently called in bout++.cxx:
if (solver.init()) {
output.write("Failed to initialise solver. Aborting\n");
return(1);
}
which passes the (physics module) RHS function PhysicsModel::rhs() to the solver along with the number and size of the output steps.
typedef int (*MonitorFunc)(BoutReal simtime, int iter, int NOUT);
int run(MonitorFunc f);
1
Taken from a talk by L.Chacon available here https://bout2011.llnl.gov/pdf/talks/Chacon_bout2011.pdf
2
See paper https://arxiv.org/abs/1209.2054 for an application to 2-fluid equations
3
This InvertPar class can handle cases with closed field-lines and twist-shift boundary conditions for tokamak simulations
|
2022-08-18 05:05:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.548265278339386, "perplexity": 2617.892023428046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573163.7/warc/CC-MAIN-20220818033705-20220818063705-00270.warc.gz"}
|
http://rpg.stackexchange.com/questions/14445/rogue-weapons-spiked-chain
|
# Rogue Weapons Spiked Chain
I have been designing a rogue character and it is my understanding that a rogue character is able to use light blades. Therefore I took Spiked chain training which specifies that both ends of the chain would be considered light blades. However the character builder would not accept this. I need to know if my logic is flawed or if there are other feats I need to take to make this work.
-
Can you clarify where exactly the Character Builder is having a problem? Is it not letting you take the spiked chain training feat, or is it not letting you wield the spiked chain? Also, keep in mind that spiked chain training is a multi-class feat; you're not eligible for it if you already have another multi-class feat. – Oblivious Sage May 18 '12 at 17:49
it simply will not allow me equip the chain and I do not have another multi-class feat – Varentena May 18 '12 at 17:50
Which character builder? – Brian Ballsun-Stanton May 19 '12 at 1:52
For more accurate debugging, please paste your character into your answer. – Brian Ballsun-Stanton May 19 '12 at 2:01
Spiked Chain Training is different from Weapon Proficiency (Spiked Chain).
Note that, in its default state, a spiked chain is a superior weapon that belongs to the flail group.
Only through taking spiked chain training does it also become a light blade. Therefore, in order to effectively use a spiked chain as light blade as thief, you must have spiked chain training. This feat counts as a multiclass feat. If you have the wrong feat, you may equip the chain but not have it be usable by certain powers, thereby having them appear "blank."
A valid and tested build would be:
====== Created Using Wizards of the Coast D&D Character Builder ======
level 1
Human, Rogue
Rogue Tactics: Brutal Scoundrel
Rogue: Rogue Weapon Talent
FINAL ABILITY SCORES
Str 14, Con 10, Dex 20, Int 10, Wis 10, Cha 9.
STARTING ABILITY SCORES
Str 14, Con 10, Dex 18, Int 10, Wis 10, Cha 9.
AC: 17 Fort: 13 Reflex: 18 Will: 11
HP: 22 Surges: 6 Surge Value: 5
TRAINED SKILLS
Stealth +10, Thievery +10
UNTRAINED SKILLS
Acrobatics +5, Arcana, Bluff -1, Diplomacy -1, Dungeoneering, Endurance, Heal, History, Insight, Intimidate -1, Nature, Perception, Religion, Streetwise -1, Athletics +2
FEATS
Human: Spiked Chain Training
Level 1: Light Blade Expertise
POWERS
Rogue at-will 1: Riposte Strike
Rogue at-will 1: Piercing Strike
Rogue encounter 1: Sly Lunge
Rogue daily 1: Checking Jab
ITEMS
Leather Armor, Spiked chain
====== Copy to Clipboard and Press the Import Button on the Summary Tab ======
Note, however, that because of the rogue's +1 to accuracy with daggers, the spiked chain offers only a rough increase of +1.03 damage on a given attack at level 1 and far less as levels increase due to the nature of scaling bonuses. Considering the difficulty of achieving combat advantage with a reach weapon, there is no normal benefit to wielding a spiked chain as a rogue.
-
The chain technically counts as an exotic weapon so without the exotic weapon (spiked chain) feat it will probably keep giving you trouble.
-
False, actually. There's some interesting wording here and multiple interacting feats. It's really annoying. – Brian Ballsun-Stanton May 19 '12 at 1:52
|
2016-07-01 09:48:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5070529580116272, "perplexity": 9756.532995320422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402516.86/warc/CC-MAIN-20160624155002-00173-ip-10-164-35-72.ec2.internal.warc.gz"}
|
https://www.jobilize.com/course/section/problem-2-homework-6-by-openstax?qcr=www.quizover.com
|
# 1.5 Homework 6
Page 1 / 1
(Blank Abstract)
Homework set 6 of ELEC 430, Rice University, Department of Electrical and Computer Engineering
## Problem 1
Consider the following modulation system
${s}_{0}(t)=A{P}_{T}(t)-1$
and
${s}_{1}(t)=-(A{P}_{T}(t))-1$
for $0\le t\le T$ where ${P}_{T}(t)=\begin{cases}1 & \text{if 0\le t\le T}\\ 0 & \text{otherwise}\end{cases}$
The channel is ideal with Gaussian noise which is ${}_{N}(t)=1$ for all $t$ , wide sense stationary with ${R}_{N}()=b^{2}e^{-\left|\right|}$ for all $\in \mathbb{R}$ . Consider the following receiver structure
• a) Find the optimum value of the threshold for the system ( e.g. , that minimizes the ${P}_{e}$ ). Assume that ${}_{0}={}_{1}$
• b) Find the error probability when this threshold is used.
## Problem 2
Consider a PAM system where symbols ${a}_{1}$ , ${a}_{2}$ , ${a}_{3}$ , ${a}_{4}$ are transmitted where ${a}_{n}\in \{2A, A, -A, -(2A)\}$ . The transmitted signal is
${X}_{t}=\sum_{n=1}^{4} {a}_{n}s(t-nT)$
where $s(t)$ is a rectangular pulse of duration $T$ and height of 1. Assume that we have a channel with impulse response $g(t)$ which is a rectangular pulse of duration $T$ and height 1, with white Gaussian noise with ${S}_{N}(f)=\frac{{N}_{0}}{2}$ for all $f$ .
• a) Draw a typical sample path (realization) of ${X}_{t}$ and of the received signal ${r}_{t}$ (do not forget to add a bit of noise!)
• b) Assume that the receiver knows $g(t)$ . Design a matched filter for this transmission system.
• c) Draw a typical sample path of ${Y}_{t}$ , the output of the matched filter (do not forget to add a bit ofnoise!)
• d) Find an expression (or draw) $u(nT)$ where $u(t)=(s, g, {h}^{\mathrm{opt}}(t))$ .
## Problem 3
Proakis and Salehi, problem 7.35
## Problem 4
Proakis and Salehi, problem 7.39
#### Questions & Answers
how can chip be made from sand
Eke Reply
are nano particles real
Missy Reply
yeah
Joseph
Hello, if I study Physics teacher in bachelor, can I study Nanotechnology in master?
Lale Reply
no can't
Lohitha
where we get a research paper on Nano chemistry....?
Maira Reply
nanopartical of organic/inorganic / physical chemistry , pdf / thesis / review
Ali
what are the products of Nano chemistry?
Maira Reply
There are lots of products of nano chemistry... Like nano coatings.....carbon fiber.. And lots of others..
learn
Even nanotechnology is pretty much all about chemistry... Its the chemistry on quantum or atomic level
learn
Google
da
no nanotechnology is also a part of physics and maths it requires angle formulas and some pressure regarding concepts
Bhagvanji
hey
Giriraj
Preparation and Applications of Nanomaterial for Drug Delivery
Hafiz Reply
revolt
da
Application of nanotechnology in medicine
has a lot of application modern world
Kamaluddeen
yes
narayan
what is variations in raman spectra for nanomaterials
Jyoti Reply
ya I also want to know the raman spectra
Bhagvanji
I only see partial conversation and what's the question here!
Crow Reply
what about nanotechnology for water purification
RAW Reply
please someone correct me if I'm wrong but I think one can use nanoparticles, specially silver nanoparticles for water treatment.
Damian
yes that's correct
Professor
I think
Professor
Nasa has use it in the 60's, copper as water purification in the moon travel.
Alexandre
nanocopper obvius
Alexandre
what is the stm
Brian Reply
is there industrial application of fullrenes. What is the method to prepare fullrene on large scale.?
Rafiq
industrial application...? mmm I think on the medical side as drug carrier, but you should go deeper on your research, I may be wrong
Damian
How we are making nano material?
LITNING Reply
what is a peer
LITNING Reply
What is meant by 'nano scale'?
LITNING Reply
What is STMs full form?
LITNING
scanning tunneling microscope
Sahil
how nano science is used for hydrophobicity
Santosh
Do u think that Graphene and Fullrene fiber can be used to make Air Plane body structure the lightest and strongest. Rafiq
Rafiq
what is differents between GO and RGO?
Mahi
what is simplest way to understand the applications of nano robots used to detect the cancer affected cell of human body.? How this robot is carried to required site of body cell.? what will be the carrier material and how can be detected that correct delivery of drug is done Rafiq
Rafiq
if virus is killing to make ARTIFICIAL DNA OF GRAPHENE FOR KILLED THE VIRUS .THIS IS OUR ASSUMPTION
Anam
analytical skills graphene is prepared to kill any type viruses .
Anam
Any one who tell me about Preparation and application of Nanomaterial for drug Delivery
Hafiz
what is Nano technology ?
Bob Reply
write examples of Nano molecule?
Bob
The nanotechnology is as new science, to scale nanometric
brayan
nanotechnology is the study, desing, synthesis, manipulation and application of materials and functional systems through control of matter at nanoscale
Damian
how did you get the value of 2000N.What calculations are needed to arrive at it
Smarajit Reply
Privacy Information Security Software Version 1.1a
Good
Got questions? Join the online conversation and get instant answers!
Jobilize.com Reply
### Read also:
#### Get Jobilize Job Search Mobile App in your pocket Now!
Source: OpenStax, Digital communication systems. OpenStax CNX. Jan 22, 2004 Download for free at http://cnx.org/content/col10134/1.3
Google Play and the Google Play logo are trademarks of Google Inc.
Notification Switch
Would you like to follow the 'Digital communication systems' conversation and receive update notifications?
By Richley Crapo By Brooke Delaney By Mahee Boo By Stephen Voron By Anonymous User By Madison Christian By By Christine Zeelie By David Martin By John Gabrieli
|
2021-05-14 21:49:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 30, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5363888740539551, "perplexity": 4048.770241653278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991829.45/warc/CC-MAIN-20210514214157-20210515004157-00229.warc.gz"}
|
https://stuartmarks.wordpress.com/
|
Feeds:
Posts
## Writing Stateful Stream Operations
The distinct() stream operation compares the stream’s elements using Object.equals(). That is, for any set of stream elements that are all equals() to each other, the distinct() operation will let just one of them through. However, sometimes you want the notion of “distinct” to be based on some property or other value derived from the stream element, but not the value itself. You could use map() to map the stream element into some derived value and use distinct() on those, but the result would be a stream of those derived values, not the original stream element.
It would be nice if there were some construct like
distinct(Function<T,U> keyExtractor)
that would call keyExtractor to derive the values that are compared for uniqueness, but there isn’t. However, it’s not too difficult to write your own.
The first insight is that you can think of the distinct() operation as a stateful filter. It’s like a filter() operation, which takes a predicate that determines whether to let the element though. It’s stateful because whether it lets an element through is determined by what elements it has seen previously.
This state needs to be maintained somewhere. Internally, the distinct() operation keeps a Set that contains elements that have been seen previously, but it’s buried inside the operation and we can’t get to it from application code. But we could write something similar ourselves. The usual way to maintain state in Java is to create a class that has fields in which the state is maintained. We need a predicate, and that predicate could be a method on that class. This will work, but it’s rather cumbersome.
The second insight is that lambdas can capture local variables from their enclosing lexical environment. These local variables cannot be mutated, but if they are references to mutable objects, those objects can be mutated. Thus we can write a higher-order function whose local variables contain references to the state objects, and we can have our higher-order function return a lambda that captures those locals and does its processing based on the captured, mutable state.
This function will want to take a keyExtractor function that’s used to derive a value from each stream element. Conceptually we’ll want a Set to keep track of values we’ve seen already. However, in case our stream is run in parallel, we’ll want some thread-safe data structure. A ConcurrentHashMap is a simple way to do this, with each existing key representing membership in the set, and the value being a dummy object such as the empty string. (That’s how many Set implementations in the JDK work already.) Ideally we’d want to use an existing object as the dummy value and not create one each time. The empty string literal is used many times in the core JDK classes, so it’s certainly already in the constant pool.
Here’s what the code looks like:
public static <T> Predicate<T> distinctByKey(
Function<? super T,Object> keyExtractor) {
Map<Object,String> seen = new ConcurrentHashMap<>();
return t -> seen.put(keyExtractor.apply(t), "") == null;
}
This is a bit subtle. This is intended to be used within a filter() operation, so we’re returning a lambda that’s a predicate that computes a boolean based on whether it’s seen the value before. This value is derived from the stream element by calling the key extractor function. The put() method returns the previous value in the map, or null if there was no value. That’s the case we’re interested in, so if it returns null, we want the predicate to return true just for this first time. Subsequent times it will return non-null, so we return false those times, so the filter operation won’t pass through the element in those cases. I had used putIfAbsent() at first, since it has first-time-only semantics, but it turns out to be unnecessary, and using put() makes the code a bit shorter.
Here’s how it’s used. Suppose we have a Book class that has fields for title and author, and the obvious constructor and getters, and we have a list of books that we want to process:
List<Book> list = Arrays.asList(
new Book("This Side of Paradise", "F. Scott Fitzgerald"),
new Book("The Beautiful and Damned", "F. Scott Fitzgerald"),
new Book("The Great Gatsby", "F. Scott Fitzgerald"),
new Book("Tender is the Night", "F. Scott Fitzgerald"),
new Book("The Sound and the Fury", "William Faulkner"),
new Book("Absalom, Absalom!", "William Faulkner"),
new Book("Intruder in the Dust", "William Faulkner"),
new Book("The Sun Also Rises", "Ernest Hemingway"),
new Book("A Farewell to Arms", "Ernest Hemingway"),
new Book("The Old Man and the Sea", "Ernest Hemingway"),
new Book("For Whom the Bell Tolls", "Ernest Hemingway"),
new Book("A Moveable Feast", "Ernest Hemingway")
);
If we wanted one book from each author, we could do this:
list.stream()
.filter(distinctByKey(Book::getAuthor))
.forEach(System.out::println);
The output from running this is:
Book[This Side of Paradise,F. Scott Fitzgerald]
Book[The Sound and the Fury,William Faulkner]
Book[The Sun Also Rises,Ernest Hemingway]
Since this is a sequential stream, the first book by each author is the one that ends up in the output. If we were to run this stream in parallel, it would still work “correctly” in that one book from each author would be output. However, which book is output would differ from run to run.
It takes a bit of tinkering to use higher-order functions to write stateful stream operations. You can see the evolution of my thinking by looking at my answer to a Stackoverflow question on this topic. I started out writing a class, but after chipping away at it a bit I realized a class was no longer necessary and a higher-order function could be used instead. This is a powerful technique that can be used to write all kinds of stateful stream operations. You just have to make sure to be careful they’re thread-safe.
## Knuth: Christmas Trees and Grand Theft Auto
Earlier this month I had the pleasure of attending Donald Knuth’s 20th annual Christmas Tree Lecture. (announcement, video)
For me it was quite a bit of nostalgia. When I was a Computer Science student at Stanford, uh, a number of decades ago, of course I had to take a class from Knuth, who was (and still is) a Very Famous Computer Science Professor. The class was nominally a computer science class, but it was mostly math. In fact, it was all math. I’m good at math, having completed several semesters of college-level math even after placing out of introductory calculus classes. However, I’m not really, really good at math, which is what you have to be to keep up with Knuth. As a result I spent most of that class wallowing in confusion.
Knuth’s Christmas Tree lectures aren’t about Christmas trees, of course, but they are ostensibly about the Computer Science kind of trees, and they occur around Christmastime. Hence, Christmas Tree. But they aren’t so much about Computer Science as they are about math. So it was quite a familiar feeling for me to sit in a Stanford lecture hall, for the first time in decades, listening to Knuth give a lecture on math, with me wallowing in confusion most of the time.
# ∞
A few years after I left Stanford, Knuth (along with Ronald Graham and Oren Patashnik) published Concrete Mathematics:
For some reason I was compelled to buy it. I probably bought it because it must be a Very Important Computer Science Book because it has the name of a Very Famous Computer Science Professor on it. So I bought it and flipped through it, but it was mostly about math, and I’m good at math but not that good, so I eventually shelved it and didn’t look at it for a long time.
# ∞
A number of years later (perhaps April, 2008) I was quite into playing Grand Theft Auto: Vice City. Yes, this is that notorious game where you’re the criminal and you do a variety of nasty things. One of the different things you can do is to steal a police car and run “Vigilante Missions” where you chase down and kill criminals. You start off at the first level where the goal is to kill one criminal, and you get a reward of $50. At the second level, you have to kill two criminals, and the reward is$200. At level three, there are three criminals, and the reward is $450. At level four the reward is$800, and at level five the reward is 1,250. The rewards keep rising at each level, even though the number of criminals maxes out after a while. The reward given at each level isn’t necessarily obvious. After playing for a while, I paused the game to see if there was any information about this on the internet. Various gaming guide and FAQ authors have gone to the trouble of documenting the reward amounts. For example, the Grand Theft Auto: Vice City Vigilante Reward FAQ lists the reward for each level, as well as the cumulative reward, for every level up to level 1,000! Furthermore, it gives a formula of sorts for computing the reward amount for each level: You take the level you are at, subtract one, multiply by 100, add 50 and add that to the previous level’s reward to compute the level you want. (edited for clarity) Stated mathematically, this is a recurrence relation where the reward for level n is given as follows: $R_n = 100(n-1) + 50 + R_{n-1}$ This can be simplified a bit: $R_n = 100n - 50 + R_{n-1}$ It’s pretty easy to see that each level k adds a reward of 100k – 50 dollars, so this can be expressed as a summation: $R_n = \sum\limits_{k=1}^{n}(100k - 50)$ This is pretty easy to simplify to closed form: \begin{aligned} \\ R_n &= \sum\limits_{k=1}^{n}(100k - 50) \\ &= 100 \sum\limits_{k=1}^{n} k - \sum\limits_{k=1}^{n} 50 \\ \end{aligned} That first summation term is simply the sum of integers from 1 to n so it can be replaced with the well-known formula for that sum: \begin{aligned} \\ R_n &= 100\frac{n(n+1)}{2} - 50n \\ &= 50n^2 \end{aligned} OK, pretty simple. But this is just the reward for a given level n. What about the cumulative reward for having completed all levels 1 through n? Now this is a bit more interesting. Let’s define the cumulative reward for level n as follows: \begin{aligned} \\ S_n &= \sum\limits_{k=1}^{n} 50k^2 \\ \end{aligned} I don’t know how the guy made the table for levels up to 1,000. He might have used a spreadsheet, or he might have even played out all 1,000 levels and painstakingly kept track of the reward and cumulative reward at each level. (That would be entirely believable; gamers are nothing if not compulsive.) Looking at the “formula” he gave for computing the reward at each level, I’m quite certain he didn’t compute a closed form for the cumulative reward at each level. Clearly, it was important for me to figure that out. At this point I had forgotten the game and was busy filling up sheets of paper with equations. Deriving a closed form for this should be simple, right? Just perturb the indexes: \begin{aligned} \\ S_n &= \sum\limits_{k=1}^{n} 50k^2 \\ S_{n+1} &= \sum\limits_{k=1}^{n+1} 50k^2 \\ S_n + 50(n+1)^2 &= 50 + \sum\limits_{k=2}^{n+1}50k^2 \\ &= 50 + \sum\limits_{k=1}^{n}50(k+1)^2 \\ &= 50 + \sum\limits_{k=1}^{n}50(k^2 + 2k + 1) \\ &= 50 + \sum\limits_{k=1}^{n}50k^2 + \sum\limits_{k=1}^{n} 100k + \sum\limits_{k=1}^{n} 50 \\ &= 50 + S_n + 100 \sum\limits_{k=1}^{n} k + 50n \\ \end{aligned} Oh crap. The $S_n$ terms just cancel out. That’s what we’re trying to solve for. I tried several different techniques but nothing I tried worked. I was stumped. Hunting around on the web didn’t give me any helpful ideas, either. At this point I couldn’t finish the game, because I just had to know how to compute the cumulative reward for completing n levels of the mission. I mean, this was important, right? Swimming from the depths of my memory came a realization that somewhere I might have a book that would have some helpful information about solving sums and recurrences like this. I quickly found and pulled Concrete Mathematics from the shelf, blew the dust off, and curled up on the couch and started reading. Sure enough, right there at the start of section 2.5, General Methods, there’s a discussion of how to use several techniques to find the sum of the first n squares. (Page 41, equation 2.37) Great! How did Knuth et. al. solve it? Method 0 is simply to look it up. There’s a well-known formula for this available in many reference books such as the CRC Handbook. But that’s cheating. I needed to know how to derive it. I kept reading. Method 1 is to guess the answer and prove it by induction. I’m really bad at guessing stuff like this. I kept reading. Method 2 is to perturb the sum. Aha! This is what I tried to do. Where did I go wrong? Turns out they did the same thing I did and ran into exactly the same problem — the desired sum cancels out. But they had one additional bit of insight: in attempting to find the sum of n squares, they ended up deriving the sum of the first n integers. Huh? Let’s pick up where we left off: \begin{aligned} \\ S_n + 50(n+1)^2 &= 50 + S_n + 100 \sum\limits_{k=1}^{n} k + 50n \\ 50n^2 + 50n &= 100 \sum\limits_{k=1}^{n} k \\ \frac{n(n+1)}{2} &= \sum\limits_{k=1}^{n} k \\ \end{aligned} Knuth et. al. continue by conjecturing that, if we tried to perturb the sum of cubes, a closed form for the sum of squares might just fall out. Let’s try that! \begin{aligned} \\ T_n &= \sum\limits_{k=1}^{n} k^3 \\ T_{n+1} &= \sum\limits_{k=1}^{n+1} k^3 \\ T_n + (n+1)^3 &= 1 + \sum\limits_{k=2}^{n+1} k^3 \\ &= 1 + \sum\limits_{k=1}^{n} (k+1)^3 \\ &= 1 + \sum\limits_{k=1}^{n} (k^3 + 3k^2 + 3k + 1) \\ &= 1 + \sum\limits_{k=1}^{n} k^3 + 3 \sum\limits_{k=1}^{n} k^2 + 3 \sum\limits_{k=1}^{n} k \ + \sum\limits_{k=1}^{n} 1 \\ &= 1 + T_n + 3 \sum\limits_{k=1}^{n} k^2 + \frac{3}{2}n(n+1) + n \end{aligned} OK, there’s our sum of squares term. Canceling $T_n$ and simplifying, \begin{aligned} \\ n^3 + 3n^2 + 3n &= 3 \sum\limits_{k=1}^{n} k^2 + \frac{3}{2}n^2 + \frac{3}{2}n + n \\ n^3 + \frac{3}{2}n^2 + \frac{1}{2}n &= 3 \sum\limits_{k=1}^{n} k^2 \\ \frac{1}{3}n^3 + \frac{1}{2}n^2 + \frac{1}{6}n &= \sum\limits_{k=1}^{n} k^2 \\ \frac{2n^3 + 3n^2 + n}{6} &= \sum\limits_{k=1}^{n} k^2 \\ \frac{n(n+1)(2n+1)}{6} &= \sum\limits_{k=1}^{n} k^2 \\ \end{aligned} And there we have it. This matches the formula given in the CRC Handbook. The solution to my particular problem, the cumulative reward for Vigilante Mission levels, is simply 50 times this quantity. # ∞ What does this have to do with Christmas trees? Not much, but it does have a lot to do with Donald Knuth. It was good to see his lecture, even if I didn’t understand a lot of it. But I did apparently pick up something from his class, and I was able to figure out how to solve a problem (even if it was a silly one) with help from one of his books. And the equations this article use the WordPress plugin for LaTeX, which of course originally came from Knuth. So here’s to you, Don, have a happy new year! ## JavaOne 2014 Schedule I have a jam-packed schedule for JavaOne 2014. My sessions are as follows: TUT3371 Jump-Starting Lambda (Tue 30 Sep 0830 Hilton Yosemite B/C) This is my gentle introduction to lambda tutorial. Download presentation (PDF). CON3374 Lambda Q&A Panel (Tue 30 Sep 1230 Hilton Yosemite B/C) This panel session will explore the impact of Java 8 Lambdas on the Java ecosystem. BOF6244 You’ve Got Your Streams On My Collections! (Tue 30 Sep 1900 Hilton Yosemite A) Community discussion of collections and the new streams APIs. IGN12431 Ignite Session (Tue 30 Sep 1900 Hilton Imperial A) This is at the same time as the BOF, so my session will be later on, perhaps 2000 or so. Make sure to come; I have a little surprise planned! CON3372 Parallel Streams Workshop (Wed 1 Oct 1000 Hilton Yosemite A) Writing parallel streams code can be easy and effective, but you have to avoid some pitfalls. Download Presentation (PDF) CON6377 Debt and Deprecation (Wed 1 Oct 1500 Hilton Yosemite A) Given by my alter ego, Dr. Deprecator, this talk explores the principles and prescriptions of deprecation. Download Presentation (PDF) HOL3373 Lambda Programming Laboratory (Thu 2 Oct 1200-1400 Hilton Franciscan A/B) This is your chance to try out some lambda expressions and stream APIs introduced in Java 8, in order to solve a couple dozen challenging exercises. View Introductory Slide Presentation. Download Exercises (NetBeans Project). See you in San Francisco! ## Java 8 Lambda and Streams Overview Here’s the slide presentation I gave at the Silicon Valley JavaFX Users Group this evening, June 4th 2014: This is an approximately one-hour talk that covers lambdas, default methods, method references, and the streams API. It necessarily leaves out a lot of details, but it serves as a reasonably brief overview to the new features we’ve added in Java 8. (This is the same presentation I gave at the Japan JUG a couple weeks ago.) The Meetup page for the SVJUGFX event is here: Meetup Page The video replay isn’t available as of this writing, but I imagine that a link will be placed there once the replay is available. Finally, the source code for the “Bug Dates” charting application I showed in the talk is here: Bug Dates demo (github) ## Another Shell Test Pitfall We discovered another bug in a shell script test the other day. It’s incredibly simple and stupid but it’s been covering up errors for years. As usual, there’s a story to be told. About a month ago, my colleague Tristan Yan fixed some tests in RMI to remove the use of fixed ports. The bug for this is JDK-7190106 and this is the changeset. Using a fixed port causes intermittent failures when there happens to be something else already running that’s using the same port. Over the past year or so we’ve been changing the RMI tests to use system-assigned ports in order to avoid such collisions. Tristan’s fix was another in the step of ridding the RMI test suite of its use of fixed ports. Using system-assigned ports requires using a Java-based test library, and these tests were shell scripts, so part of Tristan’s fix was also converting them to Java. The converted tests worked mostly fine, except that over the following few weeks an occasional test failure would occur. The serialization benchmark, newly rewritten into Java, would occasionally fail with a StackOverflowError. Clearly, something must have changed, since we had never seen this error occur with the shell script version of the serialization benchmark. Could the conversion from shell script to Java have changed something that caused excessive stack usage? It turns out that the serialization benchmark does use a larger-than-ordinary amount of stack space. One of the tests loads a class that is fifty levels deep in the class inheritance hierarchy. This wasn’t a case of infinite recursion. Class loading uses a lot of stack space, and this test sometimes caused a StackOverflowError. Why didn’t the stack overflow occur with the shell script test? The answer is … it did! It turns out that the shell script version of the serialization test was throwing StackOverflowError occasionally all along, but never reported failure. It’s pretty easy to force the test to overflow the stack by specifying a very small stack size (e.g., -Xss200k). Even when it threw a StackOverflowError, the test would still indicate that it passed. Why did this happen? After the test preamble, last line of the shell script invoked the JVM to run the serial benchmark like this: TESTJAVA/bin/java \
${TESTVMOPTS} \ -cp$TESTCLASSES \
bench.serial.Main \
-c \$TESTSRC/bench/serial/jtreg-config &
Do you see the bug?
The pass/fail status of a shell script test is the exit status of the script. By usual UNIX conventions, a zero exit status means pass and a nonzero exit status means failure. In turn, the exit status of the script is the exit status of the last command the script executes. The problem here is that the last command is executed in the background, and the “exit status” of a command run in the background is always zero. This is true regardless of whether the background command is still running or whether it has exited with a nonzero status. Thus, no matter what happens in the actual test, this shell script will always indicate that it passed!
It’s a simple matter to experiment with the -Xss parameter in both the shell script and Java versions of the test to verify they both use comparable amounts of stack space. And given that the test workload sometimes overflowed the stack, the fix is to specify a sufficiently large stack size to ensure that this doesn’t happen. See JDK-8030284 and the changeset that fixes it.
How did this happen in the first place? I’m not entirely sure, but this serialization benchmark test was probably derived from another shell script test right nearby, an RMI benchmark. Tristan also rewrote the RMI benchmark into a Java test, but it’s a bit more complicated. The RMI benchmark needs to run a server and a client in separate JVMs. Simplified, the RMI benchmark shell script looked something like this:
echo "Starting RMI benchmark server "
java bench.rmi.Main -server &
# wait for the server to start
sleep 10
echo "Starting RMI benchmark client "
java bench.rmi.Main -client
When the serialization benchmark script was derived from the RMI benchmark script, the original author simply deleted the invocation of the RMI client side and modified the server-side invocation command line to run the serialization benchmark instead of the RMI benchmark, and left it running in the background.
(This test also exhibits another pathology, that of sleeping for a fixed amount of time in order to wait for a server to start. If the server is slow to start, this can result in an intermittent failure. If the server starts quickly, the test must still wait the full ten seconds. The Java rewrite fixes this as well, by starting the server in the first JVM, and forking the client JVM only after the server has been initialized.)
This is clearly another example of the fragility of shell script tests. A single character editing error in the script destroyed this test’s results!
1. Testing Pitfalls (pdf). I gave this presentation at the TestFest around the time of Devoxx UK in March, 2013. This presentation has a lot of material on the problems that can arise with shell script tests in OpenJDK that use the jtreg test harness.
2. Jon Gibbons (maintainer of jtreg) has an article entitled Shelling Tests that also explains some of the issues with shell script tests. It also describes the progress being made converting shell tests to Java in the langtools repository of OpenJDK.
## JavaOne 2013 Has Begun!
This year’s JavaOne has begun! The keynotes were today, in Moscone for the first time since Oracle acquired Sun. It was a bit strange, having a little bit of JavaOne in the midst of Oracle OpenWorld. Red was everywhere. The rest of the week, JavaOne is in The Zone in the hotels by Union Square.
As usual, I’m involved in several activities at JavaOne. Oddly enough I’m not on any normal technical sessions. But I have one of almost everything else: a tutorial, a BOF, and a Hands-on lab.
Jump-starting Lambda Programming [TUT3877] – 10:00am-12:00pm Monday.
A gentle introduction to lambdas. This is early in the schedule, so you should start here, and then progress to some of the other, more advanced lambda sessions later in the conference.
[update] Slides available here: TUT3877_Marks-JumpStartingLambda-v6.pdf
Ten Things You Should Know When Writing Good Unit Test Cases in Java [BOF4255] – 6:30pm-7:15pm Monday.
Paul Thwaite (IBM) submitted this and invited me to contribute. I think we have some good ideas to share. Ideally a BOF should be a conversation among the audience members and the speakers. This might be difficult, as it looks like over 250 people have signed up so far! It’s great that there’s so much interest in testing.
[update] Paul has posted his slides.
Lambda Programming Laboratory [HOL3970] – 12:30pm-2:30pm Wednesday.
Try your hand at solving a dozen lambda-based exercises. They start off simple but they can get quite challenging. You’ll also have a chance to play with a JavaFX application that illustrates how some Streams library features work.
[update] I’ve uploaded the lab exercises in the form of a NetBeans project (zip format). Use the JDK 8 Developer Preview build or newer and use NetBeans 7.4 RC1 or newer.
Java DEMOgrounds in the Exhibition Hall – 9:30am-5:00pm Monday through Wednesday.
The Java SE booth in the DEMOgrounds has a small lambda demo running in NetBeans. I wrote it (plug, plug). I plan to be here from 2:00pm-3:00pm on Monday (the dedicated exhibition hours, when no sessions are running) so drop by to chat, ask questions, or to play around with the demo code.
Enjoy the show!
## The Fixed Point of Anthony Weiner
No, not that fixed point.
In the current sex-scandal-of-the-week, New York mayoral candidate Anthony Weiner has basically admitted to sending lewd messages under the pseudonym “Carlos Danger.” Where the heck did that name come from?
Clearly, there is a function that maps from one’s ordinary name to one’s “Carlos Danger” name. Slate has helpfully provided an implementation of the Carlos Danger name generator function. Using this tool, for example, one can determine that the Carlos Danger name for me (Stuart Marks) is Ricardo Distress. Hm, not too interesting. Of course, the Carlos Danger name for Anthony Weiner is Carlos Danger.
Now, what is the Carlos Danger name for Carlos Danger? It must be Carlos Danger, right? Apparently not, as the generator reveals that it is Felipe Menace.
Inspecting the source code of the web page reveals that the generator function basically hashes the input names a couple times and uses those values to index into predefined tables of Carlos-Danger-style first and last names. So, unlike Anthony Weiner, which is special-cased in the code, there’s nothing special about Carlos Danger. It’ll just map into some apparently-random pair of entries from the tables.
If the Carlos Danger name for Carlos Danger isn’t Carlos Danger, is there some other name whose Carlos Danger name is itself? Since there is a fairly small, fixed set of names, this is pretty easy to find out by searching the entire name space, as it were. A quick transliteration of the function into Java later (including a small wrestling match with character encodings), I have the answer:
• The Carlos Danger name for Mariano Dynamite is Mariano Dynamite.
• The Carlos Danger name for Miguel Ãngel Distress is Miguel Ãngel Distress.
You heard it here first, folks.
Finally, if you ever run into Ricardo Distress, tell him I said hi.
|
2015-03-03 12:38:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9553573727607727, "perplexity": 1624.3903435012808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463287.91/warc/CC-MAIN-20150226074103-00148-ip-10-28-5-156.ec2.internal.warc.gz"}
|
https://brilliant.org/problems/a-problem-by-rocco-dalto-13/
|
# A calculus problem by Rocco Dalto
Calculus Level pending
The area of $${\bf 2 x^2 - xy + 2y^2 - 2 = 0 }$$ can be represented as $${\bf \large \frac{a \pi}{\sqrt{b}}, }$$ where $${\bf a}$$ and $${\bf b}$$ are coprime positive integers and $${\bf b}$$ is square free.
Find: $${\bf a + b .}$$
×
|
2018-04-26 21:09:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9548431038856506, "perplexity": 531.7012065829001}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948549.21/warc/CC-MAIN-20180426203132-20180426223132-00156.warc.gz"}
|
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=vyuru&paperid=333&option_lang=eng
|
RUS ENG JOURNALS PEOPLE ORGANISATIONS CONFERENCES SEMINARS VIDEO LIBRARY PACKAGE AMSBIB
General information Latest issue Archive Submit a manuscript Search papers Search references RSS Latest issue Current issues Archive issues What is RSS
Vestnik YuUrGU. Ser. Mat. Model. Progr.: Year: Volume: Issue: Page: Find
Vestnik YuUrGU. Ser. Mat. Model. Progr., 2016, Volume 9, Issue 3, Pages 105–118 (Mi vyuru333)
Programming & Computer Software
Coefficients identification in fractional diffusion models by the method of time integral characteristics
S. Yu. Lukashchuk
Ufa State Aviation Technical University, Ufa, Russian Federation
Abstract: Inverse problems of identification of the fractional diffusivity and the order of fractional differentiation are considered for linear fractional anomalous diffusion equations with the Riemann–Liouville and Caputo fractional derivatives. As an additional information about the anomalous diffusion process, the concentration functions are assumed to be known at several arbitrary inner points of calculation domain. Numerically-analytical algorithms are constructed for identification of two required parameters of the fractional diffusion equations by approximately known initial data. These algorithms are based on the method of time integral characteristics and use the Laplace transform in time. The Laplace variable can be considered as a regularization parameter in these algorithms. It is shown that the inverse problems under consideration are reduced to the identification problem for a new single parameter which is formed by the fractional diffusivity, the order of fractional differentiation and the Laplace variable. Estimations of the upper error bound for this parameter are derived. A technique of optimal Laplace variable determination based on minimization of these estimations is described. The proposed algorithms are implemented in the AD-TIC package for the Maple software. A brief discussion of this package is also presented.
Keywords: anomalous diffusion; fractional derivatives; inverse coefficient problem; identification algorithm; software package.
Funding Agency Grant Number Ministry of Education and Science of the Russian Federation 11.G34.31.0042 This work was supported by the grant of the Ministry of Education and Science of the Russian Federation (contract No. 11.G34.31.0042 with Ufa State Aviation Technical University and leading scientist Professor N.H. Ibragimov).
DOI: https://doi.org/10.14529/mmp160309
Full text: PDF file (500 kB)
References: PDF file HTML file
Bibliographic databases:
UDC: 517.9
MSC: 65M32
Language:
Citation: S. Yu. Lukashchuk, “Coefficients identification in fractional diffusion models by the method of time integral characteristics”, Vestnik YuUrGU. Ser. Mat. Model. Progr., 9:3 (2016), 105–118
Citation in format AMSBIB
\Bibitem{Luk16} \by S.~Yu.~Lukashchuk \paper Coefficients identification in fractional diffusion models by the method of time integral characteristics \jour Vestnik YuUrGU. Ser. Mat. Model. Progr. \yr 2016 \vol 9 \issue 3 \pages 105--118 \mathnet{http://mi.mathnet.ru/vyuru333} \crossref{https://doi.org/10.14529/mmp160309} \isi{http://gateway.isiknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&DestLinkType=FullRecord&DestApp=ALL_WOS&KeyUT=000390881400009} \elib{http://elibrary.ru/item.asp?id=25717239}
|
2020-01-17 19:37:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3106224238872528, "perplexity": 3271.308478833358}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250590107.3/warc/CC-MAIN-20200117180950-20200117204950-00510.warc.gz"}
|
https://socratic.org/questions/what-is-the-distance-between-a-1-3-and-point-b-5-5
|
# What is the distance between A(-1,-3) and point B(5,5)?
Feb 18, 2016
$10$
#### Explanation:
You are going to have to use the distance formula. That states that the distance between two points is $\sqrt{{\left({x}_{2} - {x}_{1}\right)}^{2} + {\left({y}_{2} - {y}_{1}\right)}^{2}}$ (it basically makes a triangle with side lengths $\left({x}_{2} - {x}_{1}\right)$ and $\left({y}_{2} - {y}_{1}\right)$ and then uses the Pythagorean Theorem.
For more information on where the distance formula came from, see this website.
We can just plug into this equation to get the distance.
$\sqrt{{\left({x}_{2} - {x}_{1}\right)}^{2} + {\left({y}_{2} - {y}_{1}\right)}^{2}}$
=$\sqrt{{\left(5 - \left(- 1\right)\right)}^{2} + {\left(5 - \left(- 3\right)\right)}^{2}}$
=$\sqrt{{\left(6\right)}^{2} + {\left(8\right)}^{2}}$
=$\sqrt{36 + 64}$
=$\sqrt{100}$
=$10$
|
2019-10-22 11:56:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 10, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.945537805557251, "perplexity": 283.78117825608075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987817685.87/warc/CC-MAIN-20191022104415-20191022131915-00025.warc.gz"}
|
https://docs.displayr.com/wiki/Cronbach%27s_Alpha
|
Cronbach's Alpha
Cronbach's alpha is an estimate of the squared correlation of the estimated values of a sample obtained using a Multi-Item Scale and their true values (e.g., the squared correlation between the average IQ as measured in an IQ test and the true intelligence). (Note that the 'squared correlation' is equivalent to the proportion of variance explained.)
Calculation
$\alpha = {K \over K-1 } \left(1 - {\sum_{i=1}^K \sigma^2_{Y_i}\over \sigma^2_X}\right)$
where $K$ is the number of items, $\sigma^2_X$ is the variance of the sum of all of the items, and $\sigma^2_{Y_i}$ is the variance of the ith item.
Interpretation
A score of 0.95 or greater implies that a measurement is acccurate enough to be regarded as the truth. In practice measurements of around 0.7 are often regarded as acceptable for real-world applications. Values of substantially less than this can be useful if all that is being done is comparing the multi-item scale's estimates by sub-groups (e.g., comparing differences by gender).
|
2019-07-15 22:14:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8917281031608582, "perplexity": 445.67990823462145}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524254.28/warc/CC-MAIN-20190715215144-20190716001144-00432.warc.gz"}
|
https://www.hackmath.net/en/math-problem/2128
|
# Variations 3rd class
From how many elements we can create 13,800 variations 3rd class without repeating?
Result
n = 25
#### Solution:
$V_3(n) = \dfrac{ n! }{ (n-3)! } = \dfrac{ n(n-1)(n-2)(n-3)! }{ (n-3)! } = n(n-1)(n-2) = 13800 \ \\ n(n-1)(n-2) = 13800 \ \\ n \approx \sqrt[3]{13800} = 23.986 \ \\ 22 \le n \le 26 \ \\ V(3,22) = 22 . 21 . 20 = 9240 \ \\ V(3,23) = 23 . 22 . 21 = 10626 \ \\ V(3,24) = 24 . 23 . 22 = 12144 \ \\ V(3,25) = 25 . 24 . 23 = 13800 \ \\ V(3,26) = 26 . 25 . 24 = 15600 \ \\ \ \\ n = 25$
Our examples were largely sent or created by pupils and students themselves. Therefore, we would be pleased if you could send us any errors you found, spelling mistakes, or rephasing the example. Thank you!
Leave us a comment of this math problem and its solution (i.e. if it is still somewhat unclear...):
Be the first to comment!
Tips to related online calculators
Do you have a linear equation or system of equations and looking for its solution? Or do you have quadratic equation?
Would you like to compute count of combinations?
## Next similar math problems:
1. Variations 4/2
Determine the number of items when the count of variations of fourth class without repeating is 600 times larger than the count of variations of second class without repetition.
2. Variation equation
Solve combinatorics equation: V(2, x+8)=72
3. Theorem prove
We want to prove the sentence: If the natural number n is divisible by six, then n is divisible by three. From what assumption we started?
4. Square root 2
If the square root of 3m2 +22 and -x = 0, and x=7, what is m?
5. Men, women and children
On the trip went men, women and children in the ratio 2:3:5 by bus. Children pay 60 crowns and adults 150. How many women were on the bus when a bus was paid 4,200 crowns?
6. School marks
Boris has a total of 22 marks. Ones have 3 times less than threes. There are two twos. How many has ones and threes?
7. Cans
How many cans must be put in the bottom row if we want 182 cans arrange in 13 rows above so that each subsequent row has always been one tin less? How many cans will be in the top row?
8. Insert into GP
Between numbers 5 and 640 insert as many numbers to form geometric progression so sum of the numbers you entered will be 630. How many numbers you must insert?
9. Intercept with axis
F(x)=log(x+4)-2, what is the x intercept
10. Variable
Find variable P: PP plus P x P plus P = 160
11. Asymptote
What is the vertical asymptote of ?
12. Linear imaginary equation
Given that ? "this is z star" Find the value of the complex number z.
13. Ball game
Richard, Denis and Denise together scored 932 goals. Denis scored 4 goals over Denise but Denis scored 24 goals less than Richard. Determine the number of goals for each player.
14. Cards
Suppose that are three cards in the hats. One is red on both sides, one of which is black on both sides, and a third one side red and the second black. We are pulled out of a hat randomly one card and we see that one side of it is red. What is the probab
15. Solve 3
Solve quadratic equation: (6n+1) (4n-1) = 3n2
16. Family
The twins Nina and Ema have five years younger brother Michal. Together have 43 years. How old is Michal?
17. Simple equation
Solve for x: 3(x + 2) = x - 18
|
2020-02-20 10:41:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3879162073135376, "perplexity": 1831.3756564803132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144722.77/warc/CC-MAIN-20200220100914-20200220130914-00164.warc.gz"}
|
https://www.jkong.co.uk/category/uk/
|
Browse Category
# Chips on the First Floor
Every so often, I spend some time filling out a crossword puzzle. There are quick crosswords where only definitions of terms are given, and cryptic crosswords which include both a definition and some wordplay. Especially for quick crosswords, these clues can be ambiguous – for instance, Fruit (5) could be APPLE, MELON, LEMON, GUAVA or GRAPE; OLIVE if one wants to be technical, and YIELD if one wishes to be indirect.
To resolve this ambiguity, about half of the letters in a quick crossword are checked. This means that their cells are at the intersection of two words, and the corresponding letters must match.
With a recent puzzle I was attempting, I had a clue with a definition for ‘show impatience (5, 2, 3, 3)’. I didn’t get this immediately, but with a few crossing letters in the middle I quickly wrote down CHOMP AT THE BIT. This was fine until I had a down clue with definition ‘problem (7)’ which was D_L_M_O. This should clearly be DILEMMA. It was a cryptic crossword, so I was able to check CHAMP AT THE BIT with the wordplay part, and it made sense. (The clue was “show impatience in talk about politician with silly hat, I bet” – which is CHAT around (an MP and then an anagram of HAT I BET).) The “original” expression is actually CHAMP, though I’ve only heard of the CHOMP version before.
I sometimes have difficulty with crosswords in the UK (and sometimes with crosswords from the US as well) owing to regional variations in English. Singaporean English follows the UK in terms of spelling. However, in terms of definitions, things vary. For example:
• Common with UK usage:
• Tuition refers to additional small-group classes (like in the UK), not the fees one might pay at university (US).
• biscuit is a baked good that’s usually sweet (like in the UK) and probably shouldn’t be eaten with gravy; an American biscuit is a bit more scone-like.
• Common with US usage:
• Chips are thin fried slices of potato (same as US). The word refers to fried strips of potato in the UK (which themselves are fries in both Singapore and the US); the thin slices are called crisps in the UK.
• The first floor of a building is the ground floor (same as US); in the UK that’s the first floor above ground (which is the second floor in Singapore and the US).
Without venturing into Singlish (which incorporates terms from Chinese, Hokkien, Malay and several other languages), there are also terms that aren’t in common with either American or British English. Some of these pertain to local entities. Economy rice is a type of food served in food courts, and the MRT is Singapore’s subway network – though I’ve heard several uses of it as a generic term, much like Xerox for copying.
Others seem a little more random. Sports shoes refer to trainers specifically, and don’t refer to water shoes or hiking boots which are used for sport. The five Cs refer to cash, cars, credit cards, country club memberships and condominiums – five things starting with the letter C that materialistic Singaporeans often chase.
I’ve been resident in the UK for around six years now. This is obviously fewer than the number I’ve spent in Singapore (about 21), though the years in the UK are more recent. I’ve gotten used to the British expressions, especially for life in the UK (I generally like chunky chips more than crisps, and correctly distinguishing the first and ground floors is important for getting around). I don’t think I’ve had too many issues with remembering the correct versions of terms to use when in Singapore or in the US – having had to deal with these inconsistencies has helped here.
# A Quiet Winter Night
I made my way down the staircase at the end of the bridge. It was late on Thursday, somewhere between 10 and 11 pm. The snow was still falling (though quite lightly). What struck me the most, however, was the silence. For a few brief moments, it seemed like it could have been some kind of post-apocalyptic world, or the Rapture; I was alone, at least as far as I could tell.
The UK has witnessed an unexpected patch of cold weather over the past week or so. I have pretty good cold resistance and thus don’t normally detect weather changes that much, though this was of course painfully obvious. It’s possible that more snow fell in London in the last week than in the past five years – at least for the times I’ve been around.
I enjoyed the snow on the first day, though that’s about as long its welcome lasted. For some reason, I generally associate snow with bitterness and harshness, rather than the alleged fascination that people in the UK might tend to have. The reason cited in the article is that it tends to remind people of youth or Christmas, which is fair – I certainly don’t associate the former with snow, and although there is much media about snowy Christmases, I don’t think I have experienced one. I think more Dead of Winter (a board game about surviving winter with zombies) than Jingle Bells. It was interesting to experience a different kind of weather, nonetheless, but the practical pains of dealing with it came to the forefront very quickly.
For me at least, the most pertinent issue was travel disruptions. My commute is typically a 20 minute walk. though because of the snow and ice I had to be substantially more careful with my footing. I easily took 30-plus minutes to traverse the same route. Thankfully I don’t have to drive or take the train in to work, but it certainly affected my colleagues too (and it did mean the office was quieter, which in turn affected me). Expectedly, quite a few flights were cancelled as well; I wasn’t travelling, but this would have certainly been an annoyance if I was.
This wasn’t a factor the last time round, but snow can also cause problems in supply chains. I don’t think I was affected this time, but there was a similar incident in New York early last year; I was waiting on a package to be delivered. Thankfully it was delivered on the morning of the day I was flying back to London, though I got somewhat anxious about whether it would arrive on time. In general of course this could be a much larger logistics problem.
Low temperatures were another factor, though much less significant for me. To paraphrase Elsa, at least most of the time the cold never bothered me anyway – though there were two instances where I decided it made sense to switch on the heating. Apart from that, I’m not sure I did much to deal with the temperatures on their own. I did dress marginally more warmly than normal (which in say low single digit weather is a light jacket), but that was about it.
It’s also suggested that there are correlations between cold weather and various types of illnesses, such as on this NHS page. Of course, I recognise that the impacts on people who spend more time exposed to the cold (e.g. people who work outdoors, rough sleepers) would be substantially greater.
The recent weather also seems to have had a sobering effect on my thoughts. Some of this might be driven by confounding factors (shorter days, more darkness, fewer social gatherings etc). This isn’t bad per se, though I find myself easily engrossed in thoughts that may be counterproductive at times. I also read an article in the Guardian questioning why the UK was unprepared for the snow; while I’m not sure I agree with the central thesis (I don’t have the data, but this may be viewed as an actuarial decision; spending $2 to prevent a 1/10th risk of losing$10 may not be worth it), but there was a point on extreme weather making societal inequalities starkly obvious which I can follow.
The weather is forecasted to return to more normal levels in the week ahead. If that holds, I’ll appreciate the easier travel and that more people are in the office. I’ll count myself fortunate that it hasn’t impacted my routines and plans that much, at least for now.
# Running the Gauntlet (Hotseat: OCR Mathematics C1-C4)
#### Background
The GCE A Level is a school-leaving qualification that students in the UK take at the end of high school. Students usually take exams for 3-4 subjects. The exams are graded on a scale from A* to U (though not with all characters in between); typically an A* is awarded to the roughly top 8-9 percent of students.
This is a rather different type of challenge – previous installments of this series have featured especially difficult exams (or rather, competitions; only the MAT is realistically speaking an exam there). I’ve usually struggled to finish in the time limit (I didn’t finish the AIME and barely finished the MAT; I had some spare time on the BMO R1, but still not that much). I could of course do this in the same way as the other tests, though the score distribution would likely be close to the ceiling, with random variation simply down to careless mistakes.
Interestingly, the UK has multiple exam boards, so for this discussion we’ll be looking at OCR, which here stands not for Optical Character Recognition, but for Oxford, Cambridge and RSA (the Royal Society of Arts). The A level Maths curriculum is split into five strands: core (C), further pure (FP), mechanics (M), statistics (S) and decision (D); each strand features between two and four modules, which generally are part of a linear dependency chain – apart from FP, where FP3 is not dependent on FP2 (though it still is dependent on FP1). For the Mathematics A level, students need to take four modules from the core strand, and two additional “applied” modules; Further Mathematics involves two of the FP strand modules plus any four additional modules (but these cannot overlap with the mathematics A level ones). Thus, a student pursuing a Further Mathematics A level will take 12 distinct modules, including C1 – C4 and at least two FP modules, for example C1-4, FP{1,3}, S1-4, D1 and M1.
(In high school I took the IB diploma programme instead, which did have Further Mathematics (FM), though I didn’t take it as I picked Computer Science instead. That was before Computer Science became a group 4 subject; even then, I think I would still have wanted to do Physics, and thus would not have taken FM in any case.)
#### Setup
I attempted the June 2015 series of exams (C1 – C4). Each of these papers is set for 90 minutes, and is a problem set that features between about seven and ten multi-part questions. The overall maximum mark is 72 (a bit of a strange number; perhaps to give 1 minute and 15 seconds per mark?). To make things a little more interesting, we define a performance metric
$P = \dfrac{M^2}{T}$
where M is the proportion of marks scored, and T is the proportion of time used. For example, scoring 100 percent in half of the time allowed results in a metric of 2; scoring 50 percent of the marks using up all of the time yields a metric of 0.25. The penalty is deliberately harsher than proportional, to limit the benefit of gaming the system (i.e. finding the easiest marks and only attempting those questions).
Most of the errors were results of arithmetical or algebraic slips (there weren’t any questions which I didn’t know how to answer, though I did make a rather egregious error on C3, and stumbled a little on C4 with trying to do a complex substitution for an integral, rather than preprocessing the term). There are a few things I noted:
• The scores for the AS-level modules (C1, C2) were considerably higher than that for the A-level modules (C3, C4). This is fairly expected, given that students only taking AS Mathematics would still need to do C1 and C2. Furthermore, from reading examiners’ reports the expectation in these exams is that students should have enough time to answer all of the questions.
• The score for C1 was much higher than that for C2. I think there are two reasons for this – firstly, C1 is meant to be an introductory module; and secondly, no calculators are allowed in C1, meaning that examiners have to allocate time for students to perform calculations (which as far as I’m aware is something I’m relatively quick at).
• The score for C4 was actually slightly higher than that for C3 (contrary to a possibly expected consistent decrease). While there is meant to be a linear progression, I certainly found the C3 paper notably tougher than that for C4 as well. That said, this may come from a perspective of someone aiming to secure all marks as opposed to the quantity required for a pass or an A.
We also see the penalty effect of the metric kicking in; it might be down to mental anchoring, but observe that perfect performances on C1 and C2 in the same amount of time would have yielded performance numbers just above 5 and 3, respectively.
#### Selected Problems in Depth
##### C3, Question 9
Given $f(\theta) = \sin(\theta + 30^{\circ}) + \cos(\theta + 60^{\circ})$, show that $f(\theta) = \cos(\theta)$ and that $f(4\theta) + 4f(2\theta) \equiv 8\cos^4\theta - 3$. Then determine the greatest and least values of $\frac{1}{f(4\theta) + 4f(2\theta) + 7}$ as $\theta$ varies, and solve the equation, for $0^{\circ} \leq \alpha \leq 60^{\circ}$,
$\sin(12\alpha + 30^{\circ}) + \cos(12\alpha + 60^{\circ}) + 4\sin(6\alpha + 30^{\circ}) + 4\cos(6\alpha + 30^{\circ}) = 1$
This might have appeared a little intimidating, though it isn’t too bad if worked through carefully. The first expression is derived fairly quickly by using the addition formulas for sine and cosine. I then wasted a bit of time on the second part by trying to be cheeky and applying De Moivre’s theorem (so, for instance, $\cos(4\theta)$ is the real part of $e^{i(4\theta)}$ which is the binomial expansion of $(\cos \theta + i \sin \theta)^4$), subsequently using $\sin^2 x = 1 - \cos^2 x$ where needed. This of course worked, but yielded a rather unpleasant algebra bash that could have been avoided by simply applying the double angle formulas multiple times.
The “range” part involved substitution and then reasoning on the range of $\cos^4\theta$ (to be between 0 and 1). The final equation looked like a mouthful; using the result we had at the beginning yields
$f (12 \alpha) + 4 f (6 \alpha) = 1$
and then using a substitution like $\beta = 3 \alpha$, we can reduce the equation to $8 \cos^4 \beta - 3 = 1$. We then get $\cos \beta = \pm \left( \frac{1}{2} \right)^{(1/4)}$ and we can finish by dividing the values of $\beta$ by 3 to recover $\alpha$.
##### C4, Question 6
Using the quotient rule, show that the derivative of $\frac{\cos x}{\sin x}$ is $\frac{-1}{\sin^2x}$. Then show that
$\displaystyle \int_{\frac{1}{6}\pi}^{\frac{1}{4}\pi} \dfrac{\sqrt{1 + \cos 2x}}{\sin x \sin 2x} = \dfrac{1}{2}\left(\sqrt{6} - \sqrt{2}\right)$
The first part is easy (you’re given the answer, and even told how to do it). The second was more interesting; my first instinct was to attempt to substitute $t = \sqrt{1 + \cos 2x}$ which removed the square root, but it was extremely difficult to rewrite the resulting expression in terms of $t$ as opposed to $x$. I then noticed that there was a nice way to eliminate the square root with $\cos 2x = 2 \cos^2 x - 1$. The integrand then simplifies down into a constant multiple of $\frac{-1}{\sin^2x}$; using the first result and simplifying the resultant expression should yield the result. That said, I wasted a fair bit of time here with the initial substitution attempt.
#### Meta-Analysis
To some extent this is difficult, because students don’t generally do A levels in this way (for very good reasons), and I’m sure that there must be students out there who could similarly blast through the modules in less than half the time given or better (but there is no data about this). Nonetheless, the A level boards usually publish Examiners’ Reports, which can be fairly interesting to read through though generally lacking in data. The C3 report was fairly rich in detail, though; and the 68/72 score was actually not too great (notice that “8% of candidates scored 70 or higher”). Indeed the aforementioned question 9 caused difficulties, though the preceding question 8 on logarithms was hardest in terms of having the lowest proportion of candidates recording full marks.
# Navigating Tube Fares
A bit of an additional post for the week, as I’ve had a little bit more spare time! This post is a more fully-fleshed out response to a question my friend Andrea had, about the value of an annual travelcard.
I’ve started doing my preliminary accounts for 2016, and one of the things I examined was my transport expenditure. I typically try to use what’s known as zero-based budgeting (that is, each category and the value assigned to it is justified from fresh assumptions, rather than say raising the previous year’s data by RPI and calling it a day). Of course there’s some flexibility (I’m not going to pass up a social gathering just because of finances, unless it’s insanely expensive – which is unlikely given the background of my friends, or at least the activities we take part in together).
There’s a column of 86.50s, corresponding to a string of monthly zone 1-2 Travelcards purchased on student discount. We then have a crash to two low months as I was in the US and Singapore respectively, a figure just over 100 for November, and December looks to be closing around 50; I didn’t purchase any Travelcards after August. At the time, I made these decisions because I was unsure if going for the annual Travelcard was a reasonable idea, especially given that I would frequently not be in London owing to international travels, both for work and for personal affairs. The total cost for the category for the year was 894.68; this is lower than normal because I didn’t purchase any flights this year. I’ve been a bit cautious having been deployed internationally on quite a few occasions; I didn’t realise that you can refund the remaining value of a Travelcard!
This would have been 924 if I bought an annual zone 1-2 Travelcard (sadly, I’d now need 1,320 as I’m no longer a student); that said, with one I might have travelled more as well. Also, I was out for two months and started occasionally walking to the office in December. You can get refunds on the remaining value of a Travelcard – that said, I’m not sure repeatedly canceling and then repurchasing annual Travelcards is permissible, and it seems like it would certainly be inconvenient. Loss shouldn’t be too major of a concern, as Oyster cards can be registered to an online account which one can use to transfer a season pass away from a lost card. (I’ve done this before, though with a monthly pass.)
I think a question would then be as follows: exactly how frequently (in terms of number of days) do I need to use the Tube to make pay-as-you-go (PAYG)/monthly/annual Travelcards the best choice? We can examine that under a few assumptions:
• The traveller is an adult.
• All journeys are within Zone 1.
• PAYG is implemented through contactless, so weekly caps apply.
• The year begins on a Monday (this matters for weekly capping computations).
• 16/7 trips per day (that’s reasonably realistic for me).
• (Somewhat cheeky) If one travels for N days one travels for the first N days of the year.
• Journeys on day are made between 0430 of D and 0430 of day D + 1.
• The “greedy monthly flexible” (GMF) strategy works as follows:
• It buys monthly travelcards as long as there are full months remaining.
• For the partial month (if one exists), it uses the cheaper of:
• a monthly travelcard
• PAYG (with weekly capping)
Obviously GMF dominates a pure PAYG strategy, because for full months a monthly travelcard always beats PAYG (consider February), and for partial months GMF considers PAYG, so it does at least as well as PAYG. If I’m not wrong GMF is optimal under these contrived conditions: it intuitively seems difficult to recover from burning through February, the shortest month, without buying the monthly travelcard as you’d need four weekly ones. However, in the general case GMF is certainly not optimal (consider the period February 28 – March 31; you can buy the Travelcard on February 28, which expires March 27, and then pay for four days of fares, or pay February 28 and buy the Travelcard on March 1; the optimal strategy saves three days of fares).
The fare if one has to travel for N days is reflected in the graph below; and unsurprisingly the flexible methods are superior for small N but inferior for large N. Our model has a break-even point at about 314-315 days.
The final decision, unsurprisingly, boils down to the level of certainty you can have about your travels. If you don’t expect to be spending more than around 50 days outside of the UK, the annual travelcard seems like an idea worthy of consideration especially if you know when said days lie. That said, we have made two key assumptions, one of which favours the monthly strategy and one of which favours the annual one:
• An upfront lump-sum payment is needed if you’re using the annual scheme. Our analysis did not account for the time value of money (you would need to discount the monthly payments to today to get a fairer comparison of the two).
• However with the monthly strategy we’ve assumed that plans are known well in advance (at least a month) and implementation is done perfectly. In practice, there are likely to be some minor errors or plans not aligning neatly on month boundaries that will result in slightly higher fares.
I personally don’t expect to travel more than that, but I won’t be getting an annual card next year, for other reasons. (In particular, that “16/7 trips per day” assumption is unlikely to be valid, but that’s a subject for another post.)
|
2018-11-16 09:42:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 27, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5159304141998291, "perplexity": 1245.1417384848703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743007.0/warc/CC-MAIN-20181116091028-20181116113028-00461.warc.gz"}
|
https://ltwork.net/choose-each-angle-to-complete-the-linear-pairs-zamy-and--1462563
|
# Choose each angle to complete the linear pairs. ZAMY and ZAMX BMY and ZBMX and ZAMX and
###### Question:
Choose each angle to complete the linear pairs.
ZAMY and ZAMX
BMY and
ZBMX and
ZAMX and
### Using the image above, is it likely that the UK will experience any earthquakes or volcanoes. Give a reason for your answer. *
Using the image above, is it likely that the UK will experience any earthquakes or volcanoes. Give a reason for your answer. * $Using the image above, is it likely that the UK will experience any earthquakes or volcanoes. Give$...
### T= 3.069 P= 0.002313 The counselor should his hypothesis. There is evidence that the mean difference
t= 3.069 P= 0.002313 The counselor should his hypothesis. There is evidence that the mean difference in GPA from seventh grade to eighth grade for is ....
### Which of the following is the most appropriate unit to describe the rate at which water flows out through
Which of the following is the most appropriate unit to describe the rate at which water flows out through a standard household faucet? minutes per gallon, because the independent quantity is volume of water in gallons and dependent quantity is time in minutes. gallons per minute, because the indepe...
### 3 reasons people benefit from trees
3 reasons people benefit from trees...
### Which of the following is a value associated with due process
Which of the following is a value associated with due process...
### Lucia finds that (3x²+3x+5)+(7x²-9x+8)=10x²-12x+13. what error did lucia make? a)she found the difference instead of
Lucia finds that (3x²+3x+5)+(7x²-9x+8)=10x²-12x+13. what error did lucia make? a)she found the difference instead of the sum. b)she combined the terms 3x² and 7x² incorrectly. c)she combined the terms 3x and –9x incorrectly. d)she did not combine all like terms....
### Two lenghts of a triangle are shown. Which could be the lenghts of BC? 11 13 15 18
Two lenghts of a triangle are shown. Which could be the lenghts of BC? 11 13 15 18...
### Write an essay explaining to a genie in a lamp why your wish should be granted.
Write an essay explaining to a genie in a lamp why your wish should be granted....
### Streetsboro ceramics produces large planters to be used in urban landscaping projects. a special earth
Streetsboro ceramics produces large planters to be used in urban landscaping projects. a special earth clay is used to make the planters. the standard quantity of clay used for each planter is 24 pounds. the company uses a standard cost of \$ 1.96 per pound of clay. streetsboro produced 3,500 planter...
### Whats my percentage of beauty JK I JUST WANNA GIVE FREE POINTS BUT MUST ANSWER QUSTION OR JUST SAY THANKS
whats my percentage of beauty JK I JUST WANNA GIVE FREE POINTS BUT MUST ANSWER QUSTION OR JUST SAY THANKS ALMOST CHRISTMAS...
### Why was bombing railways an important part of the allied strategy
Why was bombing railways an important part of the allied strategy...
### Algebra:What is the value of this expression when t = -12?-3|t − 8| + 1.5A. 61.5B. 13.5C. -10.5D.
Algebra: What is the value of this expression when t = -12? -3|t − 8| + 1.5 A. 61.5 B. 13.5 C. -10.5 D. -58.5 Thank you for answering!~~ ZoomZoom44...
### The function f(x))X. F (x)-6 1-3 22. 55. 38. 0
The function f(x)) X. F (x) -6 1 -3 2 2. 5 5. 3 8. 0...
|
2023-02-03 22:15:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3546297252178192, "perplexity": 3507.411009963459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500076.87/warc/CC-MAIN-20230203221113-20230204011113-00175.warc.gz"}
|
https://help.pdq.com/hc/en-us/community/posts/115000012311-Scan-Freespace-of-Physical-Disk
|
• Colby Bouma
I think your best bet would be to use "PDQInventory.exe ImportCustomFields". You will need to save the data to a CSV file and import it into a Custom Field with that command.
Thank you for your idea.
If I have understood correctly, we have to scan our computers by running the mentioned Powershell command (outside of PDQ Inventory) and after this, we have to import the results to PDQ Inventory?
Hmm, I had rather hoped, that we can use the "custom Tools" in PDQ Inventory (http://documentation.adminarsenal.com/PDQInventory/10.1.0.0/index.html?using-custom-tools.htm) . But I don't know how to save eg. PercentFree in a Custom Field of PDQ Inventory.
Best regards,
Andreas
This likely *could* be done with a custom tool, but you'll still be leveraging pdqinventory.exe and not a scanner to complete the task.
What you'd want to do is create a new object for each machine you are running the code against which 2 NoteProperties: ComputerName and FreeSpace.
Next you would pipe the values of that object to a CSV file
(Think:
Foreach($computer in$remotecommandtargets){$FreeSapce = <your code here>$obj = New-Object PSObject$obj | Add-member -Type NoteProperty -Name ComputerName -Value$computer$obj | Add-member -Type NoteProperty -Name FreeSpace -Value$FreeSpace\$obj | Out-File -Path \path\to\csv -Append -Force}
)
Then you would create another block of code that leverages pdqinventory.exe to bring that custom field data back into Inventory.
Custom Fields are setup in Preferences in Inventory here:
This documentation should get you started with understanding how custom fields work:
And here is some further documentation on Importing through the PDQ interface:
I'm not up to speed on the pdqinventory sql tables, so maybe Colby or Shane could be of better help as far as building out the query and or use of the ImportCustomFields command line tool. I'd hate to take an untested guess at it and be wrong.
• Colby Bouma
@Bohren
I'm a little confused about what you're looking for. The PowerShell example you provided seems to grab the same information PDQ does. Are you looking for unpartitioned space on the disks? Once I understand exactly what you're looking for I will put together a Tool for Inventory.
• Colby Bouma
This is what I have so far based on what I think you're asking for. It's a SQL report. The biggest flaw is that Inventory doesn't scan special partitions like System Reserved, so this will always report Free Space > 0 for the system drive of computers >= Vista. I will put in a feature request for that.
Physical Disk Free Space.xml
<?xml version="1.0" encoding="utf-8"?><AdminArsenal.Export Code="PDQInventory" Name="PDQ Inventory" Version="12.2.0.0" MinimumVersion="3.1"> <Report> <ReportDefinition name="Definition"> <Sql>/* Sources: http://stackoverflow.com/a/4110597 http://www.sqlitetutorial.net/sqlite-inner-join/ http://www.sqlitetutorial.net/sqlite-group-by/ https://sqlite.org/lang_comment.html*/SELECT Computers.Name AS "Computer Name", DiskDrives.DiskDeviceId AS "Disk Device Id", -- This subtracts the total size of all scanned partitions from each disk ( ( DiskDrives.Size - LogicalDisksTotal.Size ) / 1024 / 1024 ) AS "Disk Free Space MB"FROM -- This aggregates the size of all paritions for each disk ( SELECT ComputerId, DiskDeviceId, SUM(Size) AS "Size" FROM LogicalDisks GROUP BY ComputerId, DiskDeviceId ) LogicalDisksTotalINNER JOIN Computers ON Computers.ComputerId = DiskDrives.ComputerIdINNER JOIN -- This 2 part INNER JOIN is required for computers that have multiple disks DiskDrives ON DiskDrives.ComputerId = LogicalDisksTotal.ComputerId AND DiskDrives.DiskDeviceId = LogicalDisksTotal.DiskDeviceIdWHERE <ComputerFilter>ORDER BY "Computer Name", "Disk Device Id"</Sql> <ReportDefinitionTypeName>SqlReportDefinition</ReportDefinitionTypeName> </ReportDefinition> <Description>Report for https://support.adminarsenal.com/hc/en-us/community/posts/115000012311</Description> <Name>Physical Disk Free Space</Name> <ReportFolderId value="1" /> <ReportType>SqlReport</ReportType> </Report></AdminArsenal.Export>
It's a little easier to read with some syntax highlighting:
Hi there
Thank you, Stephen and Colby, for your informations and your efforts! That's really kind of you!
@Colby: Sorry for confusing. My english is really bad - I know :-( - but I try to explain our problem again :-)
We have several computers with SSD Harddisks (size of 112 GB) inside. With an Imaging tool, we have created a Master image and deployed this on these computers.
This basic image comes standard with an approx. size of 36 GB. Because of the Snapshot technology, each change on the imaged Computer (ex. creating of user profiles, installation of applications etc.) will increase (or grow up) the basic image.
The size of the Logical Drive does not change, but the snapshot components creates additional files and saves these also on the harddisk.
In fact, not only the image file is stored on the harddisk, but also the snapshot files. Therefore, the free space on the hard disk shrinks and shrinks.
The next deployment of master images, we will do without this snapshot technology. Until this deployment, the existing imaged computers have to been monitored. When the hard disk is full, a start of the computer is no more possible. So I have to act proactively.
In your SQL query, Colby, you create (if I have understood correctly) a difference between the size of the physical and the logical drive. Because the logical drive size of 110 GB does not change, this report shows me for the most of the concerned systems a value (free size) of 125 MB.
I will also try the solution from Stephen and let you know, if I have found a solution.
Thank you again!
Best regards,
Andreas
• Colby Bouma
Your English isn't bad at all! You just have an unusual situation and I wasn't sure what you were trying to ask for :)
What is the name of this imaging software? If we understand how it works we will be more capable of helping you. When you said Virtual Disks in your original post I thought you were talking about virtual machines :) This sounds way more interesting.
|
2021-04-19 17:47:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2560842037200928, "perplexity": 3590.575475416249}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038916163.70/warc/CC-MAIN-20210419173508-20210419203508-00023.warc.gz"}
|
https://www.funkypenguin.co.nz/how-to/monitor-osx-with-icinga-nagios-nrpe/
|
### Tags
I have a fairly comprehensive Icinga monitoring platform monitoring my various linux hosts, but one area which has been lacking until now is the monitoring of the OSX Mavericks Mac Mini that I use for a home media center. Considering this is used by my family to watch TV/Movies, play music, and manage iPhoto, it’s arguably one of the most important hosts to monitor carefully. Of course, I could monitor its state (up or down) by pinging it from Icinga, but I wanted to know more than that. I’ve had issues in the past with running out of disk space on the host, and I’m all to familiar with the risks of 4-year-old hardware using spindled disks. This solution enables me to monitor the following on OSX with Icinga:
• Disk Usage
• Time Machine Status
• Disk Health
# Prerequisites
## Homebrew Install
Homebrew as per the docs on the website (or just paste in the line below):
ruby -e "\$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"
# Configuration Edit /usr/local/etc/nrpe.cfg, and set at least the
allowed_hosts directive to the IP of your Icinga host It may also be necessary (because of the location the plugins are installed to) to change occurances of
/usr/local/Cellar/nrpe/2.15/ To /usr/local/Cellar/nagios-plugins/1.5/ Further, customize disk usage plugin as follows, by removing reference to the specific device (/ instead of /dev/) # Disk usage (use plugin supplied by nagios-plugins from brew) command[check_disk]=/usr/local/Cellar/nrpe/2.15/sbin/check_disk -w 20% -c 10% -p / To start NRPE immediately, run: launchctl load ~/Library/LaunchAgents/homebrew.mxcl.nrpe.plist
## Install smartmontools Use Brew again to install smartmontools, necessary for the disk health check: brew install nrpe smartmontools
https://github.com/jedda/OSX-Monitoring-Tools (I used check_smart.sh and check_time_machine_currency.sh) To avoid future updates breaking non-brew-managed plugins, put them in ~/bin/. Symlink smartctl into ~/bin to avoid future conflicts: ln -s /usr/local/Cellar/smartmontools/6.2/sbin/smartctl ~/bin/ Edit check_smart.sh, and replace every occurance of /opt/local/libexec/nagios/smartctl with /Users/<youruser>/bin/smartctl Add the following to /usr/local/etc/nrpe.cfg: # Disk Health (use custom plugin) command[check_disk_health]=/Users/<youruser>/bin/check_smart.sh -d disk0 -g "badSectors,reallocSectors,powerOnHours,tempCelcius,retiredBlockCount,lifetimeWrites,lifetimeReads" # Time Machine (use custom plugin) command[check_timemachine]=/Users/<youruser>/bin/check_time_machine_currency.sh -w 360 -c 720 And restarted NRPE
|
2018-08-21 18:11:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24905140697956085, "perplexity": 14372.674698041474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221218391.84/warc/CC-MAIN-20180821171406-20180821191406-00601.warc.gz"}
|
http://mathoverflow.net/feeds/question/118988
|
Automorphisms of $SL_n$ as a variety - MathOverflow most recent 30 from http://mathoverflow.net 2013-05-25T20:58:25Z http://mathoverflow.net/feeds/question/118988 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/118988/automorphisms-of-sl-n-as-a-variety Automorphisms of $SL_n$ as a variety Mikhail Borovoi 2013-01-15T16:12:22Z 2013-01-15T19:45:09Z <p>What are the automorphisms of $SL_n$ as an algebraic variety?</p> <p>In other words, let $k$ be an algebraically closed field of characteristic 0 (e.g., $k=\mathbb{C}$). Let $\tau$ be an automorphism of $SL_n$ regarded as an <em>algebraic variety</em> over $k$. Assume that $\tau$ takes the unit element $e$ of $G$ to itself. Is it true that $\tau$ is an automorphism of $SL_n$ as an <em>algebraic group</em> over $k$?</p> http://mathoverflow.net/questions/118988/automorphisms-of-sl-n-as-a-variety/118992#118992 Answer by Mariano Suárez-Alvarez for Automorphisms of $SL_n$ as a variety Mariano Suárez-Alvarez 2013-01-15T16:37:43Z 2013-01-15T17:05:03Z <p>The coordinate ring when $n=2$ is $A=k[a,b,c,d]/(ad-bc-1)$. </p> <p>If $f\in k[b,c]$, there is an automorphism $\phi:A\to A$ such that $\phi(a)=a+bf$, $\phi(c)=c+df$, $\phi(b)=b$ and $\phi(d)=d$. </p> <p>One could conjecture that the automorphism group in this case is generated by $SL_2$, inversion and this sort of triangular automorphisms, much as in the Makar-Limanov–Jung–van der Kulk theorem for $k[x,y]$ (This is a <em>very</em> optimistic conjecture, though: this is a $3$-dimensional affine variety quite close to affine space and there are non-tame automorphisms of the latter...)</p> <p>In general, I doubt we know the automorphism group.</p> http://mathoverflow.net/questions/118988/automorphisms-of-sl-n-as-a-variety/119011#119011 Answer by Jason Starr for Automorphisms of $SL_n$ as a variety Jason Starr 2013-01-15T18:41:19Z 2013-01-15T19:45:09Z <p>The automorphism group is <B>massive</B>! </p> <p><I>Flexible varieties and automorphism groups</I>, I. Arzhantsev, H. Flenner, S. Kaliman, F. Kutzschebauch, M. Zaidenberg, <a href="http://arxiv.org/abs/1011.5375" rel="nofollow">http://arxiv.org/abs/1011.5375</a>.</p>
|
2013-05-25 20:58:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9868230223655701, "perplexity": 1440.9647438589147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706413448/warc/CC-MAIN-20130516121333-00099-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://www.nature.com/articles/s41598-017-11475-8?error=cookies_not_supported&code=1bd943b6-8107-4dd3-a7ad-8051af4f3e5c
|
Article | Open
Temperature-Controlled Direct Imprinting of Ag Ionic Ink: Flexible Metal Grid Transparent Conductors with Enhanced Electromechanical Durability
• Scientific Reportsvolume 7, Article number: 11220 (2017)
• doi:10.1038/s41598-017-11475-8
• Download Citation
Received:
Accepted:
Published online:
Abstract
Next-generation transparent conductors (TCs) require excellent electromechanical durability under mechanical deformations as well as high electrical conductivity and transparency. Here we introduce a method for the fabrication of highly conductive, low-porosity, flexible metal grid TCs via temperature-controlled direct imprinting (TCDI) of Ag ionic ink. The TCDI technique based on two-step heating is capable of not only stably capturing the Ag ionic ink, but also reducing the porosity of thermally decomposed Ag nanoparticle structures by eliminating large amounts of organic complexes. The porosity reduction of metal grid TCs on a glass substrate leads to a significant decrease of the sheet resistance from 21.5 to 5.5 Ω sq−1 with an optical transmittance of 91% at λ = 550 nm. The low-porosity metal grid TCs are effectively embedded to uniform, thin and transparent polymer films with negligible resistance changes from the glass substrate having strong interfacial fracture energy (~8.2 J m−2). Finally, as the porosity decreases, the flexible metal grid TCs show a significantly enhanced electromechanical durability under bending stresses. Organic light‐emitting diodes based on the flexible metal grid TCs as anode electrodes are demonstrated.
Introduction
Transparent conductors (TCs) are indispensable in a variety of optoelectronic devices, including organic light-emitting diodes (OLEDs), organic solar cells, and touch screen panels1. Indium tin oxide (ITO)-based films have been the industrial standard for a long time due to their low electrical sheet resistance (Rs) and high optical transmittance (T); however, ITO-based films include several drawbacks, including inherent brittleness and the need for expensive sputtering processes2. Alternative materials to ITO-based films, including conducting polymers3, graphene4, carbon nanotubes5, random metal nanowire networks6,7,8,9,10, and regular metal grids11, have been reported for use in flexible optoelectronic devices. Among these materials, regular metal grids offer many advantages, such as facile control over their grid width and spacing, scalability to large-area application and low junction resistance12, 13.
Cost-effective and solution-processed metal grid TCs on flexible substrates have been reported using two different fabrication schemes: (1) grid-patterned cavity is formed into a polymer substrate using hot embossing process and then is filled with metal nanoparticle (NP) ink14,15,16. This method facilitates the fabrication of metal grid structures with the relatively high aspect ratio (=height/width), while they should be sintered at low temperature (<150 °C) due to thermal degradation of the polymer substrate; (2) metal NP structures on a glass substrate are fabricated using a variety of wet deposition methods, such as direct imprinting (DI)17, 18, gravure printing19, electrohydrodynamic jet printing20, 21 and inkjet printing22. Although this method is not suitable for fabricating the high aspect ratio structures of metal NPs, the metal grid TCs on the glass substrate can be sintered at high-temperatures (>200 °C) to improve the electrical conductivity23, 24. After high-temperature sintering process, the metal grid TCs are transferred from the glass substrate to the polymer substrate for the fabrication of flexible metal grid TCs.
Among the wet deposition methods, the DI of colloidal metal NPs can directly produce micro/nanoscale metal structures at low-costs and in a high-throughput manner without expensive etching steps and metal evaporation25, 26. Recently, our group suggested a reservoir-assisted DI method based on an Ag ionic ink, which has several advantages such as higher metal content and less aggregation than colloidal metal NPs, for the fabrication of high-performance flexible metal grid TCs17, 18. However, the Ag NP structures fabricated by the thermal decomposition of Ag ionic ink include numerous micro/nanoscale pores due to the elimination of organic complexes27, 28. Unfortunately, those pores can deteriorate electrical and mechanical properties of the flexible metal NP structures29,30,31,32,33. First, the formation of many pores precludes the generation of highly conductive metal grid TCs. Second, since high-temperature sintering process (~300 °C) induces strong interfacial fracture energy (IFE) between the metal and the substrate, micro/nanoscale pores can cause serious damage to metal NP structures during the transfer process. Finally, as the cracks are initiated and propagated near numerous pores under mechanical deformations, the flexible metal grid TCs may be easily damaged by static or dynamic bending stresses. The formation of micro/nanoscale pores inside the metal NP structures should be minimized in the development of flexible optoelectronic applications.
In order to resolve this problem, we introduce a novel temperature-controlled direct imprinting (TCDI) process of Ag ionic ink based on two-step heating for the generation of highly conductive, low-porosity, flexible metal grid TCs. This TCDI technique led to stably capturing the Ag ionic ink and reducing the porosity of thermally decomposed Ag NP structures. The electrical resistivity (ρm) and Rs of the metal grid TCs at a fixed transmittance (T550nm) at 550 nm were represented as a function of the organic complex contents and geometrical calculation. The effect of porosity on the transfer process of metal grid TCs to the polymer film was explored at different sintering temperatures. In addition, the effect of porosity on an electromechanical durability of the flexible metal grid TCs was examined under the static and dynamic bending stresses. Finally, the utility of flexible metal grid TCs as anode electrodes was demonstrated by fabricating of OLEDs.
Results
Temperature-controlled direct imprinting of Ag ionic ink
The TCDI of Ag ionic ink, which involves the two-step heating, is schematically illustrated in Fig. 1a: (i) Ag ionic ink (10 μL) was imprinted on a fluorinated glass substrate using a grid-patterned mold under low pressure (P = 120 kPa) and low temperature; (ii) the first heating step was performed at an evaporation temperature (TE) of 50 °C over 5 min to stably capture the Ag ionic ink inside the grid-patterned cavity. It should be noted that when the Ag ionic ink is initially heated at an increased TE, large amounts of the filled ink leave from the grid-patterned cavity through a liquid film between the mold and the substrate due to carbon dioxide generation during the thermal decomposition of Ag ions34; (iii) for a liquid film with negligible thickness (hf ≈ 0), the second heating step was carried out at decomposition temperature (TD) = 90 °C to eliminate organic complexes derived from the thermal decomposition of the Ag ions and to improve their thermal decomposition rate; (iv) after complete solvent evaporation, the grid-patterned mold was carefully removed from the substrate. The Ag NP structures were thermally treated at sintering temperature (TS) = 300 °C for 10 min. Figure 1b shows a schematic illustration of (i) the two-step heating of TCDI and (ii) the one-step heating of typical DI, respectively. In Fig. 1b(iii-iv), focused ion beam-scanning electron microscope (FIB-SEM) images show a cross-section of the metal grid line structures fabricated using TCDI and DI of Ag ionic ink, respectively. As TD was increased from 50 °C to 90 °C, the micro/nanoscale pores in the metal grids were significantly reduced. This micro/nanoscale pores may be mainly generated by the elimination of organic complexes during the thermal decomposition process. Amounts of organic complexes inside the Ag NP structures were compared using thermogravimetric analysis (TGA) and derivative thermogravimetric (DTG) analysis of Ag NP-organic complex powders, respectively. Figure 1c shows the TGA and DTG curves obtained from Ag NP-organic complex powders after complete drying at TD = 50 and 90 °C, respectively. The Ag NP-organic complex powders dried at the lower TD showed the larger weight loss and the faster decomposition rate in the temperature range of 100–200 °C. The Ag NP-organic complex powders dried at TD = 50 and 90 °C showed a weight loss of 21.24 and 6.17%, respectively. These results indicated that the metal grids generated using the DI of Ag ionic ink may include a greater fraction of the organic complex by weight, about 15% more, than those obtained using the TCDI of Ag ionic ink. Figure 1d plots a weight loss of the Ag NP-organic complex powders dried at different values of TD. As TD increased from 50 to 100 °C, the organic complex contents of the powder decreased. The inset shows photographic images of the metal-organic complex powders. The Ag NP-complex powders dried at TD = 50 °C showed dark black due to high organic complex contents (left), while those obtained at TD = 90 °C showed bright silver due to relatively low organic complex contents (right).
Highly conductive, low-porosity metal grid TCs
Figure 2a shows a SEM image of the Ag NP structures fabricated over the grid-patterned mold area (10.5 mm × 10.5 mm). The presence of unwanted residual layers within the grid spacing was negligible. The inset shows a magnified SEM image of the Ag NP structures at the intersection of metal grids. A linewidth (~8 μm) of the metal grid was smaller than that of the original grid-patterned cavity (15 μm) due to mold deformation. Figure 2b shows the values of Rs and T550nm for the metal grid TCs generated using the TCDI of Ag ionic ink. As TD increased from 50 to 90 °C, Rs of the metal grid TCs decreased significantly, holding T550nm over 91.2%; however, the value of Rs for the metal grid TCs fabricated at TD = 100 °C increased and fluctuated as solvent evaporation and thermal decomposition occurred near the boiling point of the base-solvent (toluene, 110.6 °C). Figure S1(a) shows transmittance spectra over a wavelength range of 350–800 nm of the metal grid TCs fabricated at different values of TD. The spectral transmittance of metal grid TCs fabricated at TD = 100 °C did significantly decrease from 350 to 500 nm. Also, figure of merit (FoM), which generally predicts the performances of TCs, was compared for the metal grid TCs generated at different values of TD in Figure S1(b). As TD increased to 90 °C, the FoM of the metal grid TCs was increased from 250 to 572. The value of ρm of thermally reduced Ag NP line structures fabricated at different values of TD is compared in Fig. 2c. The value of ρm could be expressed using the following relationship:
$ρ m (x)= ρ o exp( k p w p ),$
(1)
where ρo is the electrical resistivity of the thermally reduced metal structures without organic complex contents, wp is the weight percent of organic complex and kp is a correction factor from experimental data. The correction factor was determined to be kp = 0.0727. The value of ρo (6.5287 × 10−6 Ω cm) was 4 times larger than that of the bulk Ag (1.59 × 10−6 Ω cm).
The Rs of the metal grid TCs was calculated based on the structural size and organic complex weight. The equation used to predict Rs was constructed by modifying the expression for Rs obtained using a square wire network, as suggested by Van de Groep et al.13. The cross-section of the Ag NP structures was regarded as trapezoidal, and Rs was predicted using the following equation:
$R s = 2 ρ m k m ( d 1 + d 2 ) ( s m + d 1 ) h m = 2 ρ o k m ( d 1 + d 2 ) ( s m + d 1 ) h m exp( k p w p ),$
(2)
where hm is the thickness of the metal grid line, d1 is the bottom width and d2 is the top width of the metal grid line, and s is the spacing between the metal grid lines. The value of km depends on a difference between the predicted shape of the Ag NP structures and their actual shape. The mean values of d1, d2, hm, sm for the metal grid line were 8 μm, 6 μm, 2 μm and 257 μm, respectively. The correction factor, extracted from experimental results, was determined to be km = 3.74. Figure 2d indicates that the Rs of the metal grid TCs fabricated at different values of TD could be suitably fit to the theoretical values.
Transfer process of metal grid TCs
The metal grid TCs on the glass substrate should be transferred onto the polymer substrate for use in flexible optoelectronic devices. A transfer process of metal grid TCs provided the flexibility and significantly reduced the surface roughness that can cause optoelectronic device failure. Figure 3a shows a schematic illustration of the transfer process, based on a sandwich structure (glass substrate/polymer film/glass substrate) to generate a uniform, thin and transparent polymer film on both sides. It should be noted that when the polymer film becomes thinner, the flexible metal grid TCs show better electromechanical durability under the same bending radius (r) due to a decrease of the nominal bending strain35. The Norland Optical Adhesive 81 (NOA 81), which shows a high optical transparency over a wide spectral range with a good mechanical flexibility and a strong adhesion force to metal structures, is used as the polymer film. The NOA 81 solution was poured onto the metal grids-patterned glass substrate, and then was covered by the glass substrate. After the UV exposure, the metal grids embedded within NOA 81 film were detached from two glass substrates.
The effect of the porosity reduction on the transfer process of metal grid TCs was explored by measuring the resistance changes of metal grid TCs sintered at different values of TS. The higher TS induces the more increase of the IFE between the metal and the glass substrate, which causes serious damage to the flexible metal grid TCs during the transfer process. Especially, the metal structures with many pores are prone to be broken due to the increase of the IFE. In Figure S2(a), the IFE of Ag NP film on the glass substrate was quantitatively measured using a double cantilever beam (DCB) fracture mechanics testing method at different values of TS. In Figure S2(b), the inset shows a schematic diagram of the DCB test specimen consisting of a fluorinated glass substrate (donor substrate), a Ag NP film, an adhesion layer, and the plasma-treated glass substrate (receiving substrate). Figure 3b plots the IFE between the Ag NP film and the substrate after TS over the range 200–300 °C. Although the glass substrate was treated by the fluorine, the IFE between the Ag NP film and the glass substrate depended strongly on TS. The Ag NP films sintered at TS = 300 °C, in particular, displayed 3-fold larger IFE (~8.2 J m−2) than those sintered at TS = 200 °C (~2.7 J m−2). Figure 3c shows the normalized resistance change (∆R/Ro) and Rs of the flexible metal grid TCs after the transfer process of the metal grid TCs sintered TS = 200–300 °C. As TS increased, ∆R/Ro of the flexible metal grid TCs based on the DI of Ag ionic ink (TD = 50 °C) significantly increased from 33% to ~104% due to an increase of the IFE. On the other hand, the flexible metal grid TCs obtained using the TCDI of Ag ionic ink (TD = 90 °C) displayed a small value of ∆R/Ro (~6%). Although the IFE increased, the porosity reduction led to preventing a significant increase in ∆R/Ro from serious mechanical damage. The Rs of the flexible metal grid TCs fabricated at TD = 50 and 90 °C showed 5.5 and 52.5 Ω sq−1, respectively.
Highly conductive, low-porosity, flexible metal grid TCs
Figure 4a(i) shows a cross-sectional FIB-SEM image of the metal grid-embedded the NOA 81 film. The metal grid structures were buried below the NOA 81 surface. Figure 4a(ii) shows an atomic force microscopy (AFM) surface profile of the flexible metal grid TCs, which presented smooth surfaces with a root-mean-square surface roughness of 6.7 nm and a maximum peak-to-valley value of 43.8 nm. Figure 4b compares the values of T and Rs for the flexible metal grid TCs with the values obtained from a commercially available ITO-coated polyethylene terephthalate (PET) film. Here, it should be noted that the transmittance spectra of the bare NOA 81 film, the flexible metal grid TCs, and the ITO-coated PET film include the transmittance through the substrate. The flexible metal grid TCs performed better (Rs of 5.5 Ω sq−1 and T550nm of 81.47%) than the ITO-coated PET film (Rs of 15.0 Ω sq−1 and T550nm of 78.35%). The T of the ITO-coated PET film fluctuated over the wavelength range of 350–800 nm, whereas the corresponding values of the flexible metal grid TCs remained constant.
The electromechanical durability of the flexible metal grid TCs under static and dynamic bending stresses is an important factor to be considered in flexible optoelectronic devices35, 36. As the number of pores in the metal grid structures decreased, the mechanical durability of the flexible metal grid TCs can be improved by inhibiting pore-induced crack initiation and propagation under bending stresses29,30,31,32,33. This resulted in reducing the electrical resistance changes of the metal grid structures under bending stresses. Figure 4c plots the values of ∆R/Ro obtained from the flexible metal grid TCs fabricated using TCDI and DI of Ag ionic ink under a static bending test. The value of ∆R/Ro of the ITO-coated PET film under a static bending test was evaluated as a reference. As the flexible metal grid TCs were bent up to r = 1 mm, the values of ΔR/Ro increased for these two flexible metal grid TCs (Ro = 6.3 and 21.0 Ω sq−1) by a factor of 2.2 and 113.3 respectively. The R/Ro of the flexible metal grid TCs fabricated using the DI of Ag ionic ink significantly increased at r = 1 mm due to serious damage. Figure 4d plots the values of ∆R/Ro obtained from the flexible metal grid TCs fabricated using TCDI and DI of Ag ionic ink under a dynamic bending test with r = 7.5 mm. The value of ∆R/Ro of the ITO-coated PET film under a dynamic bending test was evaluated as a reference. After the only several bending cycles, the resistance of the ITO-coated PET film significantly increased to a few tens of kΩ due to formation of numerous and visible cracks. After 1000 cycles of repeated bending/relaxation, ∆R/Ro obtained from the flexible metal grid TCs (Ro = 7.4 and 43.5 Ω sq−1) increased by a factor of 1.5 and 86.4, respectively. When TD increased, the porosity reduction of the flexible metal grids led to decreasing ∆R/Ro under static and dynamic bending stresses (a significant decrease in the relative ratio of resistance change by more than 51 times at r = 1 mm and by more than 57 times after the 1000 bending cycles, respectively). The low-porosity, flexible metal grid TCs fabricated using the TCDI of Ag ionic ink showed an excellent electromechanical durability under static and dynamic bending tests, indicating their great potential for use in various flexible optoelectronic devices.
Discussion
Applications to flexible organic light-emitting diodes
Figure 5a shows a schematic diagram of OLEDs based on the flexible metal grid TCs. Indium zinc oxide (IZO) layers with a thickness of 50 nm were used to cover the flexible metal grid TCs, which exhibit Rs of 5.5 Ω sq−1 and T550nm of 81.47%. Poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) and MoO3 layers were used on the top of IZO films for efficient hole injection. The 4,4′-Bis(N- carbazolyl)-1,1′-biphenyl (CBP) layer doped with green phosphorescent emitters of bis(2-(2-pyridinyl-N)phenyl-C)(acetylacetonate) iridium (III) (Ir(ppy)2acac, 7 wt%) was used as an emitting layer. The CBP and 2,2′,2″-(1,3,5-Benzinetriyl)-tris(1-phenyl-1-H-benzimidazole) (TPBi) layers were used as hole- and electron- transport layers, respectively. Figure 5b shows the normalized electro-luminescence spectrum of the flexible OLEDs in the normal direction in the wavelength range of 350–700 nm, which is similar to that of the ITO-based TCs. The inset shows the operating image of the flexible OLEDs under bending. The OLEDs fabricated on the flexible metal grid TCs show an electrically stable operation due to their relatively good uniformity and low roughness, as shown in Fig. 5c. The inset shows the angular emission profile, which was used to obtain the external quantum efficiency (ηEQE) and power efficiency (ηPE) of the devices. Therefore, the OLEDs based on the flexible metal grid TCs show the comparable values of ηEQE and ηPE with those of ITO-based OLEDs, as shown in Fig. 5d. The flexible OLEDs exhibit ηEQE of 22.0% and ηPE of 61.8 lm W−1 at 111.6 cd m−2, while ITO-based OLEDs show ηEQE of 21.5% and ηPE of 60.6 lm W−1 at 116.2 cd m−2, respectively.
In summary, we demonstrated the fabrication of highly conductive, low-porosity, flexible metal grid TCs via the TCDI of Ag ionic ink. As TD increased during the thermal decomposition of Ag ionic ink, the porosity of metal grid structures was significantly reduced due to the elimination of large amounts of organic complexes. Also, both ρm and Rs of metal grid TCs were estimated based on the weight percent of organic complex and geometrical calculations. The porosity reduction led to improving the optoelectrical properties of metal grid TCs (Rs < 6 Ω sq−1 at T550nm = 91%) and preventing serious damage (∆R/Ro < 6%) from the strong IFE between the metal and the substrate during the transfer process. In addition, we verified that the porosity reduction resulted in significantly enhancing an electromechanical durability of the flexible metal grid TCs under static and dynamic bending stresses. The OLED based on flexible metal grid TCs as anode electrodes was demonstrated. The uniformity, reliability and scalability of the metal grid TCs should be further improved by the combination of uniform pressing-stable detaching system and the exact control of fluidic properties (viscosity and surface tension) before this technology can be commercialized. We believe that this strategy can provide a useful approach to enhance optoelectrical properties and an electromechanical durability of solution-processed metallic TCs for the development of next-generation flexible optoelectronic devices.
Methods
Ag ionic ink
The Ag ionic ink (TEC-IJ-060, Inktec) consisted of Ag ions (Ag alkyl carbamate complexes), a base solvent (methanol and toluene) and additives. Prior to the TCDI of Ag ionic ink, methanol was evaporated at 100 °C for 10 min to improve the concentrated ink filling. The Ag alkyl carbamate complexes were decomposed to Ag NPs, carbon dioxide, and the corresponding alkyl amines by heating above 50 °C for a few minutes34.
PDMS mold
The PDMS solution (Sylgard 184, Dow Corning) was generated by mixing the silicon elastomer kit and a curing agent (10:1), and this mixture was poured onto a SU-8 master. After PDMS curing at 100 °C for 1 hour, a PDMS mold was carefully released from the SU-8 master. In the PDMS mold, the grid-patterned cavity was designed with a width (w) of 15 μm and a spacing (s) of 250 μm by calculating the geometrical shadow zone, T = s2/(s + w)2. The value of a cavity height was limited to 7.5 μm to prevent the destruction of Ag NP structures during the detachment of PDMS mold. However, the value of T550nm measured from the metal grid TCs was higher than that obtained using the geometrical calculation of grid-patterned mold due to mold deformation (93.2 and 89%, respectively).
Effect of pressure
The effect of P, ranged from 30 to 300 kPa, was examined by considering the Poiseuille’s law (hf4 is inversely proportional to ∆P)37. At 30 kPa, most of the ink was not captured inside the grid-patterned mold due to a large value of hf. The ink was effectively captured inside the mold cavity over 100 kPa. The pressure was optimized at 120 kPa by comparing Rs and T550nm of the metal grid TCs. Also, the relatively uniform P based on horizontal levels led to decreasing both Rs and resistance fluctuations of the metal grids.
Transfer process
All types of glass substrate, including soldalime glass and borosilicate glass, were treated using 1 H, 1 H, 2 H, 2H-perfluorooctyl-trichlorosilane (448931, Sigma-Aldrich) for 3 min in a vacuum chamber. After the sintering process, a metal grid-patterned glass substrate (soldalime) was re-treated with the fluorinated silane. The NOA 81 solution was poured onto the metal grid-patterned glass substrate, and it was covered by a borosilicate glass substrate. After UV exposure, the borosilicate glass substrate was detached from the NOA film, and then the metal grid-embedded within NOA 81 film (its thickness of 150 μm) was detached from the sodalime glass substrate.
Fabrication of flexible organic light-emitting diodes
The IZO film (~50 nm) was deposited on the flexible metal grid TCs using an RF sputtering system (RF power: 120 W). The PEDOT:PSS (Clevios PVP AI4083, Heraeus, 45 nm) layers were spin-coated on the top of the IZO films. PEDOT:PSS was coated also on pre-coated ITO glass substrates (<12 Ω sq−1, AMG Inc., Korea) as a reference after O2 plasma treatment (70 W, 1 min). The samples with PEDOT:PSS were dried to remove any residual solvent at 100 °C for 10 min on the hot plate. Finally, the PEDOT:PSS-coated samples were loaded into a thermal evaporator for deposition of organic, metal oxide layers, and metal electrodes under high vacuum (3 × 10−6 Torr) conditions. The stacked multilayers of OLEDs were composed of MoO3 (10 nm)/CBP (20 nm)/CBP doped with 7 wt% of Ir(ppy)2acac (20 nm)/TPBi (55 nm)/LiF (1 nm)/Al (100 nm).
DCB test
The Ag NP film/glass substrate (sodalime) specimens were fabricated with the size of 37.5 mm × 9 mm. The specimens were sandwiched by an additional glass beam using an epoxy (353ND, Epoxy Technology). The final structure of the DCB test specimen was the glass substrate/epoxy/Ag/glass substrate. The connection parts with the DCB test machine and aluminum loading tabs were attached to the specimens with the selective epoxy (DP420, 3M) and the specimens were cured at 120 °C for 1 h in a convection oven.
Characterization
The images of the metal grid TCs were measured using field-emission SEM (S-4800, Hitachi). Cross-sectional images of the metal grid TCs were measured using FIB-SEM (Helios Nanolab 600, FEI), and the surface roughness of the flexible metal grid TCs was measured using AFM (XE-100, Park Systems). The transmittance spectra were measured using a UV-VIS-NIR spectrophotometer (Lambda 1050, Perkin-Elmer). The Rs of the metal grid TCs was measured using the two-terminal method and four-point probe method (4200-SCS, Keithley)38. Two electrodes between the metal grids, separated by a square area (25 mm2), were fabricated using conductive pens (CW2200MTP and CW2900, ITW Chemtronics).
Additional information
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. 1.
Ellmer, K. Past achievements and future challenges in the development of optically transparent electrodes. Nat. Photonics 6, 809–817 (2012).
2. 2.
Kumar, A. & Zhou, C. The race to replace tin-doped indium oxide: which material will win? ACS Nano 4, 11–14 (2010).
3. 3.
Vosgueritchian, M. et al. Highly conductive and transparent PEDOT:PSS films with a fluorosurfactant for stretchable and flexible transparent electrodes. Adv. Funct. Mater. 22, 421–428 (2012).
4. 4.
Bae, S. et al. Roll-to-roll production of 30-inch graphene films for transparent electrodes. Nat. Nanotechnol. 5, 574–578 (2010).
5. 5.
Zhang, D. et al. Transparent, conductive, and flexible carbon nanotube films and their application in organic light-emitting diodes. Nano Lett. 6, 1880–1886 (2006).
6. 6.
Mutiso, R. M. et al. Integrating simulations and experiments to predict sheet resistance and optical transmittance in nanowire films for transparent conductors. ACS Nano 7, 7654–7663 (2013).
7. 7.
Wu, H. et al. A transparent electrode based on a metal nanotrough network. Nat. Nanotechnol. 8, 421–425 (2013).
8. 8.
Han, B. et al. Uniform self‐forming metallic network as a high‐performance transparent conductive electrode. Adv. Mater. 26, 873–877 (2014).
9. 9.
Bao, C. et al. In situ fabrication of highly conductive metal nanowire networks with high transmittance from deep-ultraviolet to near-infrared. ACS Nano 9, 2502–2509 (2015).
10. 10.
Choi, D. Y. et al. Annealing-free, flexible silver nanowire–polymer composite electrodes via a continuous two-step spray-coating method. Nanoscale 5, 977–983 (2013).
11. 11.
Kang, M.-G. & Guo, L. J. Nanoimprinted semitransparent metal electrodes and their application in organic light-emitting diodes. Adv. Mater. 19, 1391–1396 (2007).
12. 12.
Park, J. H. et al. Flexible and transparent metallic grid electrodes prepared by evaporative assembly. ACS Appl. Mater. Inter. 6, 12380–12387 (2014).
13. 13.
Groep, J. et al. Transparent conducting silver nanowire networks. Nano Lett. 12, 3138–3144 (2012).
14. 14.
Yu, J.-S. et al. Silver front electrode grids for ITO-free all printed polymer solar cells with embedded and raised topographies, prepared by thermal imprint, flexographic and inkjet roll-to-roll processes. Nanoscale 4, 6032–6040 (2012).
15. 15.
Yu, J.-S. et al. Transparent conductive film with printable embedded patterns for organic solar cells. Sol. Energy Mater. Sol. Cells 109, 142–147 (2013).
16. 16.
Li, Y. et al. High-efficiency robust perovskite solar cells on ultrathin flexible substrates. Nat. Commun. 7, 10214 (2016).
17. 17.
Oh, Y. S. et al. Direct imprinting of thermally reduced silver nanoparticles via deformation-driven ink injection for high-performance, flexible metal grid embedded transparent conductors. RSC Adv. 5, 64661–64668 (2015).
18. 18.
Oh, Y. S. et al. High-performance, solution-processed, embedded multiscale metallic transparent conductors. ACS Appl. Mater. Inter. 8, 10937–10945 (2016).
19. 19.
Jung, S. et al. Extremely flexible transparent conducting electrodes for organic devices. Adv. Energy Mater. 4, 1300474 (2014).
20. 20.
Jang, Y. et al. Invisible metal-grid transparent electrode prepared by electrohydrodynamic (EHD) jet printing. J. Phys. D: Appl. Phys. 46, 155103 (2013).
21. 21.
Schneider, J. et al. Electrohydrodynamic nanodrip printing of high aspect ratio metal grid transparent electrodes. Adv. Funct. Mater. 26, 833–840 (2016).
22. 22.
Zhang, Z. et al. Controlled inkjetting of a conductive pattern of silver nanoparticles based on the coffee-ring effect. Adv. Mater. 25, 6714–6718 (2013).
23. 23.
Lee, Y. et al. Thermal pressing of a metal-grid transparent electrode into a plastic substrate for flexible electronic devices. J. Mater. Chem. C 4, 7577–7583 (2016).
24. 24.
Kim, I. et al. Roll-offset printed transparent conducting electrode for organic solar cells. Thin Solid Films 580, 21–28 (2015).
25. 25.
Ko, S. H. et al. Direct nanoimprinting of metal nanoparticles for nanoscale electronics fabrication. Nano Lett. 7, 1869–1877 (2007).
26. 26.
Park, I. et al. Nanoscale patterning and electronics on flexible substrate by direct nanoimprinting of metallic nanoparticles. Adv. Mater. 20, 489–496 (2008).
27. 27.
Kim, J. H. et al. Enhancing adhesion of screen‐printed silver nanopaste films. Adv. Mater. Interfaces 2, 1500283 (2015).
28. 28.
Lee, I. et al. Interfacial toughening of solution processed Ag nanoparticle thin films by organic residuals. Nanotechnology 23, 485704 (2012).
29. 29.
Gerard, D. A. & Koss, D. A. Porosity and crack initiation during low cycle fatigue. Mater. Sci. Eng., A 129, 77–85 (1990).
30. 30.
Lee, H.-Y. et al. Effects of bending fatigue on the electrical resistance in metallic films on flexible substrates. Met. Mater. Int. 16, 947–951 (2010).
31. 31.
Sim, G.-D. et al. Tensile and fatigue behaviors of printed ag thin films on flexible substrates. Appl. Phys. Lett. 101, 191907 (2012).
32. 32.
Kim, S. et al. Tensile characteristics of metal nanoparticle films on flexible polymer substrates for printed electronics applications. Nanotechnology 24, 085701 (2013).
33. 33.
Eun, K. et al. Electromechanical properties of printed copper ink film using a white flash light annealing process for flexible electronics. Microelectron. Reliab. 55, 838–845 (2015).
34. 34.
Kwak, W.-G. et al. Preparation of silver-coated cotton fabrics using silver carbamate via thermal reduction and their properties. Carbohydr. Polym. 115, 317–324 (2015).
35. 35.
Yeo, J. et al. Flexible supercapacitor fabrication by room temperature rapid laser processing of roll-to-roll printed metal nanoparticle ink for wearable electronics application. J. Power Sources 246, 562–568 (2014).
36. 36.
Choi, D. Y. et al. Highly conductive, bendable, embedded ag nanoparticle wire arrays via convective self‐assembly: Hybridization into ag nanowire transparent conductors. Adv. Funct. Mater. 25, 3888–3898 (2015).
37. 37.
Cheng, W. et al. Nanopatterning self-assembled nanoparticle superlattices by moulding microdroplets. Nat. Nanotechnol. 3, 682–690 (2008).
38. 38.
Hsu, P.-C. et al. Performance enhancement of metal nanowire transparent conducting electrodes by mesoscale metal wires. Nat. Commun. 4, 2522 (2013).
Download references
Acknowledgements
This work was supported by the KUSTAR-KAIST Institute, and by the Basic Science Research Program (N01160766) of the National Research Foundation of Korea (NRF) funded by the Ministry of Education, and the Global Leading Technology Program (N10042433) funded by the Ministry of Trade, Industry and Energy, and by the Research Program “Technology Development of Low Cost Flexible Lighting Surface”, which is a part of the R&D Program of Electronics and Telecommunications Research Institute (ETRI).
Author information
Affiliations
1. Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
• Yong Suk Oh
• , Hyesun Choi
• , Sung-Uk Lee
• , Kyeong-Soo Yun
• , Taek-Soo Kim
• , Inkyu Park
• & Hyung Jin Sung
2. Department of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
• Jaeho Lee
• , Hyunwoo Lee
• & Seunghyup Yoo
3. Powder & Ceramics Division, Korea Institute of Materials and Science, Changwon, 51508, Korea
• Dong Yun Choi
Contributions
Y.S.O. conceived the research. I.P. and H.J.S. supervised the research. Y.S.O., D.Y.C., S.U.L., K.S.Y., S.Y. and T.S.K. designed the experiments. Y.S.O., H.S.C., J.L. and H.L. performed the experiments. Y.S.O., H.S.C., J.L., H.L., D.Y.C., S.U.L., K.S.Y., S.Y., T.S.K., I.P. and H.J.S. contributed to the manuscript preparation. All authors have given approval to the final version of the manuscript.
Competing Interests
The authors declare that they have no competing interests.
Corresponding authors
Correspondence to Inkyu Park or Hyung Jin Sung.
Comments
By submitting a comment you agree to abide by our Terms and Community Guidelines. If you find something abusive or that does not comply with our terms or guidelines please flag it as inappropriate.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
|
2018-03-24 00:43:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5431175231933594, "perplexity": 6349.205821723087}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257649508.48/warc/CC-MAIN-20180323235620-20180324015620-00404.warc.gz"}
|
https://www.groundai.com/project/stability-of-stochastic-approximations-with-controlled-markov-noise-and-temporal-difference-learning/
|
Stability of Stochastic Approximations with ‘Controlled Markov’ Noise and Temporal Difference Learning
# Stability of Stochastic Approximations with ‘Controlled Markov’ Noise and Temporal Difference Learning
Arunselvan Ramaswamy arunselvan@csa.iisc.ernet.in Shalabh Bhatnagar shalabh@csa.iisc.ernet.in
###### Abstract
In this paper we present a ‘stability theorem’ for stochastic approximation (SA) algorithms with ‘controlled Markov’ noise. Such algorithms were first studied by Borkar in 2006. Specifically, sufficient conditions are presented which guarantee the stability of the iterates. Further, under these conditions the iterates are shown to track a solution to the differential inclusion defined in terms of the ergodic occupation measures associated with the ‘controlled Markov’ process. As an application to our main result we present an improvement to a general form of temporal difference learning algorithms. Specifically, we present sufficient conditions for their stability and convergence using our framework. This paper builds on the works of Borkar and Benveniste, Metivier and Priouret.
## 1 Introduction
Let us begin by considering the general form of stochastic approximation algorithms:
xn+1=xn+a(n)(h(xn)+Mn+1), where (1)
is a Lipschitz continuous function;
is the given step-size sequence such that and ;
is the sequence of square integrable martingale difference terms.
In 1996, Benaïm [3] showed that the asymptotic behavior of recursion (1) can be determined by studying the asymptotic behavior of the associated o.d.e.
˙x(t)=h(x(t)).
This technique is popularly known as the ODE method and was originally developed by Ljung in 1977 [9]. In [3] it is assumed that , in other words the iterates are assumed to be stable. In many cases the stability assumption becomes a bottleneck in using the ODE method. This bottleneck was overcome by Borkar and Meyn in 1999 [8]. Specifically, they developed sufficient conditions that guarantee the ‘stability and convergence’ of recursion (1).
In many applications, the noise-process is Markovian in nature. Stochastic approximation algorithms with ‘Markov Noise’ have been extensively studied in Benveniste et. al. [5]. These results have been extended to the case when the noise is ‘controlled Markov’ by Borkar [6]. Specifically, the asymptotics of the iterates are described via a limiting differential inclusion () that is defined in terms of the ergodic occupation measures of the Markov process. As explained in [6], the motivation for such a study stems from the fact that in many cases the noise-process is not Markov, but its lack of Markov property comes through its dependence on a time-varying ‘control’ process. In particular this is the case with many reinforcement learning algorithms. In [6], the iterates are assumed to be stable, which as explained earlier poses a bottleneck, especially in analyzing algorithms from reinforcement learning. The aim of this paper is to overcome this bottleneck. In other words, we present sufficient conditions for the ‘stability and convergence’ of stochastic approximation algorithms with ‘controlled Markov’ noise. Finally, as an application setting, we consider a general form of the temporal difference learning algorithms in reinforcement learning and present weaker sufficient conditions (than those in literature) that guarantee their stability and convergence using our framework.
The organization of this paper is as follows:
In Section 2.1 we present the definitions and notations involved in this paper. In Section 2.2 we discuss the assumptions involved in proving the stability of the iterates given by (3).
In Section 3 we show the stability of the iterates under the assumptions outlined in Section 2.2 (Theorem 1).
In Section 4 we present additional assumptions which coupled with assumptions from Section 2.2 are used to prove the ‘stability and convergence’ of recursion (3) (Theorem 2). Specifically, Theorem 2 states that under the aforementioned sets of assumptions the iterates are stable and converge to an internally chain transitive invariant set associated with For the definition of the reader is referred to Section 4.
In Section 5 we discuss an application of Theorem 2. We present sufficient conditions for the ‘stability and convergence’ of a general form of temporal difference learning algorithms, in reinforcement learning.
## 2 Preliminaries and Assumptions
### 2.1 Notations & Definitions
In this section we present the definitions and notations used in this paper for the purpose of easy reference. Note that they can be found in Benaïm et. al. [4], Aubin et. al. [1], [2] and Borkar [7].
Marchaud Map: A set-valued map } is called a Marchaud map if it satisfies the following properties:
• For each , is convex and compact.
• (point-wise boundedness) For each , for some .
• is an upper-semicontinuous map.
We say that is upper-semicontinuous, if given sequences (in ) and (in ) with , and , , . In other words, the graph of , , is closed in .
If the set-valued map is Marchaud, then the differential inclusion (DI) given by
˙x(t) ∈ H(x(t)) (2)
is guaranteed to have at least one solution that is absolutely continuous. The reader is referred to Aubin & Cellina[1] for more details.
If x is an absolutely continuous map satisfying (2) then we say that .
A set-valued semiflow associated with (2) is defined on as follows:
. Let , define
ΦB(M):=⋃{t∈B, x∈M}Φt(x).
Let , the set be defined by Similarly the limit set of a solution x is given by .
Invariant Set: is invariant if for every there exists a trajectory, x, entirely in with . In other words, with , for all .
Internally Chain Transitive Set: is said to be internally chain transitive if is compact and for every , and we have the following: There exist that are solutions to the differential inclusion , a sequence and real numbers greater than such that: and for . The sequence is called an chain in from to .
Given and , define the distance between and by . We define the -open neighborhood of by . The -closed neighborhood of is defined by .
Attracting Set: is an attracting set if it is compact and there exists a neighborhood such that for any there exists such that . Then is called the fundamental neighborhood of . In addition to being compact if the attracting set is also invariant then it is called an attractor. The basin of attraction of is given by . The set is Lyapunov stable if for all , such that . We use and interchangeably to denote the dependence of on .
The open ball of radius around is represented by , while the closed ball is represented by .
Upper limit of sequences of sets: Let be a sequence of sets in . The upper-limit of is given by .
### 2.2 Assumptions
Let us consider a stochastic approximation algorithm with ‘controlled Markov’ noise in .
xn+1=xn+a(n)[h(xn,yn)+Mn+1], where (3)
• is a jointly continuous map with a compact metric space. The map is Lipschitz continuous in the first component, further its constant does not change with the second component. Let the Lipschitz constant be . This is assumption in Section 2 of Borkar [6]. Here we call it (A1).
• The step-size sequence is such that for all , and . Without loss of generality let . This is assumption in Section 2 of Borkar [6]. Here we call it (A3).
• is a sequence of square integrable martingale difference terms, that also contribute to the noise. They are related to by
E[∥Mn+1∥2 | Fn]≤K(1+∥xn∥2), where n≥0.
This is assumption in Section 2 of Borkar [6]. Here we call it (A2).
• is the -valued ‘Controlled Markov’ process.
Note that is assumed to be polish in [6]. As stated in , in this paper we let be a compact metric space, hence polish. Among the assumptions made in [6], are relevant to prove the stability of the iterates. The remaining assumptions are listed in Section 4 where we present the result on the ‘stability and convergence’ of the iterates given by (3). See Borkar [6] for more details.
• For each , we define functions by .
• We define the limiting map by , where is the upper-limit of a sequence of sets (see Section 2.1).
• For each define .
We replace the stability assumption in [6] with the following two assumptions.
• If , and = for some , then .
• There exists an attracting set, , associated with such that . Further, is a subset of some fundamental neighborhood of .
Assumption , discussed in Section 5, is a sufficient condition for to be satisfied. One could say that constitutes the ‘Lyapunov function’ condition for . We shall show that is a Marchaud map in Lemma 2. As explained in [1], it follows that the DI, , has at least one solution that is absolutely continuous. Hence assumption is meaningful.
We begin by showing that satisfies for all . Fix , and , we have
∥hc(x1,y)−hc(x2,y)∥=∥h(cx1,y)/c−h(cx2,y)/c∥,
∥h(cx1,y)/c−h(cx2,y)/c∥≤L∥cx1−cx2∥/c, hence
∥hc(x1,y)−hc(x2,y)∥≤L∥x1−x2∥.
We thus have that is Lipschitz continuous in the first component with Lipschitz constant . Further, for a fixed this constant does not change with . Since was arbitrarily chosen it follows that is the Lipschitz constant associated with every . It is trivially true that is a jointly continuous map.
Fix , and , then
∥hc(x,y)−hc(0,y)∥≤L∥x−0∥, hence
∥hc(x,y)∥≤∥hc(0,y)∥+L∥x∥.
Since is a continuous function on (a compact set) and we have for some . Thus
∥hc(x,y)∥≤K(1+∥x∥), where K=L∨M.
We may assume without loss of generality that is such that also holds for all (assumption ). Again does not change with .
Fix and . As explained in the previous paragraph we have,
supc≥1 ∥hc(x,y)∥≤K(1+∥x∥).
The upper-limit of , , is clearly non-empty. Recall that and . Hence,
supu∈h∞(x,y) ∥u∥≤K(1+∥x∥) and supu∈H(x) ∥u∥≤K(1+∥x∥). (4)
We need to show that is a Marchaud map. Before we do that, let us prove an auxiliary result.
###### Lemma 1.
Suppose in , in , and . Then .
###### Proof.
Consider the following inequality:
∥hcn(x,yn)−u∥≤∥hcn(xn,yn)−u∥+∥hcn(x,yn)−hcn(xn,yn)∥.
Since and , we get
limcn→∞hcn(x,yn)=u.
It follows from that . ∎
The following is a direct consequence of Lemma 1: If in , and then . If this is not so, then without loss of generality we have that for some . Since is compact, such that and for some and some . We have , , and . It follows from Lemma 1 that . This is a contradiction.
###### Lemma 2.
is a Marchaud map.
###### Proof.
Recall that . As explained earlier (cf. (2.2)),
supu∈H(x)∥u∥≤K(1+∥x∥).
Hence is point-wise bounded. From the definition of it follows that is convex and compact for each .
It is left to show that is upper semi-continuous. Let , and , . We need to show that . If this is not true, then there exists a linear functional on , say , such that and , for some and . Since , there exists such that for each , i.e., , here is used to denote the set . For the sake of notational convenience let us denote by for all . We claim that for all . We shall prove this claim later, for now we assume that the claim is true and proceed.
Pick for each . Let for some . Since is norm bounded it contains a convergent subsequence, say . Let . Since , such that . The sequence is chosen such that for each . Since is from a compact set, there exists a convergent subsequence. For the sake of notational convenience (without loss of generality) we assume that the sequence itself has a limit, i.e., for some . We have the following: , , , and for . It follows from Lemma 1 that . Since and for each , we have that . This contradicts .
It remains to prove that for all . If this were not true, then such that for all . It follows that for each . Since , such that for all , . This leads to a contradiction. ∎
## 3 Stability Theorem
Let us construct the linear interpolated trajectory for from the sequence . Define and , . Let and for let
¯¯¯x(t) := (t(n+1)−tt(n+1)−t(n)) ¯¯¯x(t(n)) + (t−t(n)t(n+1)−t(n)) ¯¯¯x(t(n+1)).
Define and for . Observe that there exists a subsequence, , of such that for all .
We use to construct the rescaled trajectory, , for . Let for some and define , where . Also, let , . The rescaled martingale difference terms are given by , .
We define a piece-wise constant trajectory, , using the rescaled trajectory as follows: Let and . Define . Let us define another piece-wise constant trajectory using as follows: Let for all .
Recall that is an attracting set associated with (see assumption in section 2.2). Let , then . Choose such that . Fix , where is defined in section 2.1. Let be a solution to such that , then for all .
Consider the following recursion:
¯¯¯x(t(k+1)) = ¯¯¯x(t(k)) + a(k)(h(¯¯¯x(t(k)),yk) + Mk+1),
such that . Multiplying both sides by , we get the following rescaled recursion:
^x(t(k+1)) = ^x(t(k)) + a(k)(hr(n)(^x(t(k)),yk) + ^Mk+1). (5)
Note that .
The following two lemmas can be found in Borkar & Meyn [8] (that however does not consider ‘controlled Markov’ noise). It is shown there that the ‘martingale noise’ sequence converges almost surely. We present the results below using our setting.
.
###### Proof.
Recall that and . It is enough to show that
supm(n)
for some that is independent of . Let us fix and such that and . Consider the following rescaled recursion:
^x(t(k)) = ^x(t(k−1)) + a(k−1)(^z(t(k−1)) + ^Mk).
Unfolding the above we get,
^x(t(k)) = ^x(t(m(n))) + k−1∑l=m(n)a(l)(^z(t(l)) + ^Ml+1).
Taking expectation of the square of the norms on both sides we get,
E∥^x(t(k))∥2 = E∥∥ ∥∥^x(t(m(n))) + k−1∑l=m(n)a(l)(^z(t(l)) + ^Ml+1)∥∥ ∥∥2.
It follows from the Minkowski inequality that,
E1/2∥^x(t(k))∥2≤E1/2∥^x(Tn)∥2 + k−1∑l=m(n)a(l)(E1/2∥^z(t(l))∥2 + E1/2∥^Ml+1∥2).
For each such that , . Further, . Observe that (since ). Using these observations we get the following:
E1/2∥^x(t(k))∥2≤ 1 + k−1∑l=m(n)a(l)(KE1/2(1+∥^x(t(l))∥)2 + √KE1/2(1+∥^x(t(l))∥2)), (6)
E1/2∥^x(t(k))∥2≤ 1 + k−1∑l=m(n)a(l)(K(1+E1/2∥^x(t(l))∥2) + √K(1+E1/2∥^x(t(l))∥2)), (7)
E1/2∥^x(t(k))∥2 ≤ [1+(K+√K)(T+1)]+(K+√K)k−1∑l=m(n)a(l)E1/2∥^x(t(l))∥2. (8)
Applying the discrete version of Gronwall inequality we now get,
E1/2∥^x(t(k))∥2 ≤ [1+(K+√K)(T+1)]e(K+√K)(T+1).
Let us define . Clearly is independent of and the claim follows. ∎
###### Lemma 4.
The sequence , , converges almost surely, where for all .
###### Proof.
It is enough to prove that
∞∑k=0E[∥a(k)^Mk+1∥2 | Fk] < ∞ a.s.
E[∞∑k=0a(k)2E[∥^Mk+1∥2 | Fk]] < ∞.
From assumption we get
E[∞∑k=0a(k)2E[∥^Mk+1∥2 | Fk]]≤∞∑k=0a(k)2K(1+E∥^x(t(k))∥2).
The claim now follows from Lemma 3 and . ∎
Let , , be the solution (up to time ) to with initial condition . Clearly,
xn(t) = ^x(Tn)+∫t0^z(Tn+s)ds. (9)
###### Proof.
Let such that , where . First we prove the lemma when . Consider the following:
^x(t)=(t(m(n)+k+1)−ta(m(n)+k))^x(t(m(n)+k))+(t−t(m(n)+k)a(m(n)+k))^x(t(m(n)+k+1)). (10)
Substituting for in the above equation we get:
^x(t)=(t(m(n)+k+1)−ta(m(n)+k))^x(t(m(n)+k))+(t−t(m(n)+k)a(m(n)+k))(^x(t(m(n)+k))+a(m(n)+k)(hr(n)(^x(t(m(n)+k)),ym(n)+k)+^Mm(n)+k+1)), (11)
hence,
^x(t)=^x(t(m(n)+k))+(t−t(m(n)+k))(hr(n)(^x(t(m(n)+k)),ym(n)+k)+^Mm(n)+k+1). (12)
Unfolding , we get (see (5)),
^x(t)=^x(Tn)+k−1∑l=0a(m(n)+l)(hr(n)(^x(t(m(n)+l)),ym(n)+l)+
|
2019-09-23 10:55:59
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9822651147842407, "perplexity": 451.81954777218846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576355.92/warc/CC-MAIN-20190923105314-20190923131314-00389.warc.gz"}
|
http://mathematica.stackexchange.com/questions/4249/what-is-the-proper-method-to-load-a-mathematica-package-inside-a-dynamicmodule?answertab=oldest
|
# What is the proper method to load a Mathematica package inside a DynamicModule
I have a DynamicModule that requires loading of Mathematica packages. The usual methods do not seem to apply (Needs["MathematicaPackage"]], etc) I've tried
Dynamic[Initialization :> (Needs["Package"])];
with only partial success.
I'm sure this is done regularly, but cannot find any MMA documentation to that effect.
What's the best way to load Mathematica packages in Dynamic or DynamicModule and Manipulate?
-
Can you expand a bit on what “with only partial success” means? What happened, what is your issue? – F'x Apr 14 '12 at 14:57
Certainly! Initial results provided package errors using Needs, and it was easy to see the package was not used in the module. Now the errors are related to LibraryFunction::cfta: Argument {{bla,bla,bla},<<44>> at position 2 should be a rank 2 tensor of machine-size real numbers. >> and only a single package loading error followed by the usual list errors related to the package functions that did not load. Thanks for you help! – R Hall Apr 14 '12 at 15:03
Possibly related: mathematica.stackexchange.com/questions/595/… – Leonid Shifrin Apr 14 '12 at 17:36
@LeonidShifrin Not exactly, but thank's for the suggestion. Just wrestling with some of the newer Dynamic functionality and breaking things nicely by trial and error. ;) Thanks again! – R Hall Apr 15 '12 at 19:43
While Initialization is a useful option, it is only evaluated when the body of the DynamicModule is displayed on screen. See some explanation on the evaluation sequence here.
## With DynamicModule
I've constructed a TestPackage for this case, it contains the globally exported variable $Test, with a value of 123. The following example shows that the package is not loaded (note that you have to clear dynamic content and variables (e.g. restart kernel) for each example below to start from a clear state of memory): DynamicModule[{},$Test, Initialization :> (Needs["TestPackage"])]
$Test It doesn't work with an explicit TestPackage$Test call either indicating that $Test (and thus the body) is evaluated before the Initialization code. This is somewhat contraintuitive, as one expects the initialization code to be evaluated before anything else. I should note here that the package is loaded, it is just loaded too late. Any later cells calling for $Test return the correct value though. One way to overcome this in the given setup is to use the explicit name of $Test in a dynamic output, which updates the unrecognized $Test output when the package is finally loaded during evaluation:
DynamicModule[{}, Dynamic[TestPackage$Test], Initialization :> Needs["TestPackage"])] 123 The second example is even more strange. While the package loading code is included in the body, $Test is still not recognized:
DynamicModule[{}, Needs["TestPackage"]; $Test] Using the explicit name of the variable helps here, indicating that parsing (done before evaluation) causes $Test to be a local variable that is not recognized as TestPackage$Test: DynamicModule[{}, Needs["TestPackage"]; TestPackage$Test]
123
Of course the easiest way is:
Needs["TestPackage"];
DynamicModule[{}, $Test] 123 Update And the Wolfram way is (see this conference material): "Can be solved by two Needs[] statements...one to fulfill Shift+Enter evaluation and one to fulfill Dynamic evaluation. Note that this is only an issue because of $ContextPath. Any other initialization code could have safely appeared one time inside of Initialization."
Needs["TestPackage"];
DynamicModule[{}, Dynamic[$Test], Initialization :> Needs["TestPackage"])] 123 ## With Manipulate Following the Wolfram way, one needs the outer package loading, but in Manipulate, SaveDefinitions works correctly grabbing the definition of $Test from the package:
Needs["TestPackage"];
Manipulate[$Test, {$Test, None}, SaveDefinitions -> True]
123
## With Dynamic
According to the above examples, with Dynamic (each cell should be evaluated in a fresh kernel):
Dynamic[$Test, Initialization :> (Needs["TestPackage"])] Global$Test
Dynamic[TestPackage$Test, Initialization :> (Needs["TestPackage"])] 123 Dynamic[Needs["TestPackage"];$Test]
Global$Test Dynamic[Needs["TestPackage"]; TestPackage$Test]
123`
-
Thanks very much @István Zachar! This is an excellent and thorough explanation! – R Hall Apr 14 '12 at 17:52
@RHall And I myself have made some useful discoveries during this fieldtrip :) – István Zachar Apr 14 '12 at 18:04
|
2015-07-30 23:08:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2168775498867035, "perplexity": 3067.017275881317}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042987775.70/warc/CC-MAIN-20150728002307-00185-ip-10-236-191-2.ec2.internal.warc.gz"}
|
https://www.gamedev.net/forums/topic/488022-pointer-to-an-item-in-an-stdvector/
|
• 12
• 12
• 9
• 10
• 13
# Pointer to an item in an std::vector
This topic is 3649 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
I use C++. Let's say I have a vector of ints.
vector <int> int_vector;
int_vector.push_back(3);
int_vector.push_back(4);
int_vector.push_back(5);
Now I make a pointer to...the second int.
int* ptr = &int_vector[1];
If add another int to the vector after that, like "int_vector.push_back(6);", does that invalidate my pointer? Should I not be making pointers to items in a vector anyway? I believe this is causing some weird behavior I'm experiencing right now. Also, what then is the best way to refer to items in a vector? I would just use an index (like make an int that counts from the beginning of the vector (0) to the item I want), but that index is invalidated any time I delete an item before it. Edit: I think iterators might be the answer?
##### Share on other sites
iterator/pointers into a vector are invalidated whenever it is resized. Therefore any operation which might potentially cause a vector to resize itself should be assumed to invalidate all iterators (or pointers) to the contents of the vector.
If you don't want this behavior you'll need to use another container like a std::list. List never invalidates iterators to any of it's elements (except obviously iterators to deleted elements are invalid).
Alternatively you could maintain a vector of pointers (or better yet smart pointers, or even better boost::ptr_vector)
|
2018-03-21 19:09:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17875654995441437, "perplexity": 1848.8047297917692}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647681.81/warc/CC-MAIN-20180321180325-20180321200325-00197.warc.gz"}
|
http://m-phi.blogspot.nl/2011_05_01_archive.html
|
## Monday, 30 May 2011
### Roy's Fortnightly Puzzle: Volume 3
Okay, this one has been keeping me up at night for at least two decades.
What is the value of 0^0?
Mathematicians seem to agree that if it has a unique value at all, the value is 1 (there seems to be less consensus on the antecedent here, however).
The puzzle, of course, is not merely figuring out the answer. The puzzle is to say something about what the criteria for deciding such a question might be. In particular:
Is mathematical practice and convenience the only arbiter here?
Might the philosopher of mathematics have something interesting to say about cases like this?
Is this an uncomfortable situation for the platonist (since the choice of 1 over 0 seems like a case where the facts are just stipulated for convenience, and not 'discovered' via examination of the platonic forms or whatnot)?
Have at it.
### On the origins of analytic philosophy
(Cross-posted at NewAPPS.)
Greg Frost-Arnold has a nice post on the origins of the phrase ‘analytic philosophy’, in particular on when it began to be used roughly with its current meaning. He has a useful chart showing that already in the 1960s the phrase was being regularly used, whereas ‘continental philosophy’ only became more prominent as a phrase in the 1990s.
But besides the question of uses of the terminology, in comments to the post different people have been presenting insightful remarks on the origins of the very idea of ‘analytic philosophy’ as a particular way of doing philosophy. Greg himself (in comments) offers the following observation as a starting point:
Go back to Europe in 1932. We have the following 3 intellectual groups: the phenomenologists (esp. Husserl and Heidegger), the folks dedicated to (something like) Moorean analysis, and the Vienna Circle and their intellectual allies (perhaps we include here the Lvov-Warsaw school).
He then goes on to argue, and correctly to my mind, that analytic philosophy emerged essentially from the ‘fusion’ of the last two groups, the Mooreans and those who viewed logical analysis as the main philosophical methodology (Vienna Circle etc.). Continental philosophy would have naturally emerged from the phenomenology tradition. (An aside: for a while, I entertained the hypothesis that the analytic/continental divide could be explained by observing which of the Critiques each tradition focused on: mostly the first for analytic philosophy, mostly the third for continental philosophy. But Eric Schiesser told me at some point that this hypothesis doesn’t work, and although I forgot his arguments, I found them compelling then.)
Anyway, I think Greg hit the nail on the head in identifying Moorean ‘common sense’ philosophy and logical positivism (or at least a strong emphasis on the role of logic as a methodology for philosophy) as the two main sources for the development of analytic philosophy as we know it. But this also means that this tradition has always had a mild schizophrenic component, so to speak (and I hope disability philosophers will not take offense here!). In a sense, Moorean common sense philosophy is inimical to the conception of logical analysis in question, which at least to some extent *is* about discovering new facts.
Let me make this more precise. In comments, Greg quotes the ‘statement of policy’ of the very first edition of the journal Analysis, in 1933:
"the contributions to be published will be concerned, as a rule, with the elucidation or explanation of facts, ... the general nature of which is, by common consent, already known; rather than with attempts to establish new kinds of facts" (p.1).
Clearly, it doesn’t get more Moorean than this… And why am I saying that this is in tension with the project of using logical analysis as a key philosophical methodology? After all, one can (and does) also use logical analysis for the explanation of facts which are already known. However, as I see it, this is definitely not the core of the good old Leibnizian idea of using logic to discover new truths, which was to a great extent one of Frege’s main sources of inspiration (although it is true that the very logicist project can be seen as the search for a logical analysis of already known mathematical facts; but it seems to me that this is really just the beginning of the story).
Let me mention a couple of examples. In ‘On Denoting’, when Russell claims that the actual logical form of sentences such as ‘The present king of France is bald’ is not the subject-predicate form but rather a tri-partite claim, he is clearly questioning ‘already known facts’ by common consent. Similarly, theTractatus is filled with un-commonsensical statements. A little later, in his works on truth and logical consequence, Tarski does take something that he himself refers to as the ‘common conception’ of these concepts as his starting point to formulate criteria of adequacy for his formal theories. But it’s clear that these formal theories are intended to go much beyond than just offering an elucidation of these ‘common conceptions’. In the same vein, Carnapian explication does not in any way rule out the possibility of establishing ‘new kinds of facts’ about its object of analysis (which may well be already known facts, but as a starting point).
It is no secret to anybody that I am no enthusiast of Moorean intuition-based philosophical methodology (as explained here and here), so I am not neutral on any of this. But the genealogy offered by Greg Frost-Arnold seems to me to highlight the fact that this tension between sticking to (and explaining) what is known and discovering new facts (in particular, by means of logical analysis) has always been at the heart of analytic philosophy, insofar as it emerged from the confluence of two rather distinct approaches to philosophy. This explains for example the reactions to Michael Streven’s definition of philosophy in a post at Leiter’s blog: “The point of philosophy is to defy common sense.” (For the record, I couldn’t agree more.) It’s Mooreanism still alive and kicking! But modern counterparts of logical positivism are also alive and kicking, and it is surprising that people are not more aware of the intrinsic conflicting nature of philosophical methodology within analytic philosophy.
(Let me also add a little plug for Mike Beaney's great entry on the concept of analysis at SEP.)
## Wednesday, 25 May 2011
### MCMP video podcasts (Berit Brogaard, Volker Halbach)
LMU’s Virtuelle Hochschule, Armin Rubner and Erik Keller have done a fantastic job preparing MCMP's first two video podcasts. Berit Brogaard (UMSL) and Volker Halbach (New College, Oxford) have the honour of being first out.
Brogaard: Do 'Looks' Reports Reflect the Contents of Perception?
Halbach: The conservativity of truth and the disentanglement of syntax and semantics
Check them out here.
## Tuesday, 24 May 2011
Okay, so some of you might have noticed by now that I am into comics, and am going to freely inform you about math-and-logic related comics that I run across (both the good - see Peanuts below - and the bad - see Superman below). Hell, I think I am the only person in the world publishing on both logic and on the aesthetics of comics. Give me a break ;)
Anyway, most of you probably are already aware of A. Doxiadis et alia's Logicomix - the story of Russell's logic in comic book form (I reviewed it for History and Philosophy of Logic, for anyone interested in that sort of thing!)
Recently another comic involving the history of logic has appeared: Paul Hornschemeier's The Three Paradoxes. The comic tells five distinct, but interconnected stories. The title derives from one of these stories: Hornschemeier's brilliant seven-page depiction of Zeno of Elea presenting his three paradoxes of motion to the Athenian philosophers (he leaves out the Stadium on Parmenides' advice!)
The Zeno material in the comic is connected to a passage where Hornschemeier sees a scarred childhood acquaintance for the first time in years and finds himself tongue-tied. Afterwards he and his father compare this 'frozen' feeling - this inability to speak or move appropriately - to Zeno's paradoxes of motion.
There are a couple notable things about the portion of the comic that depicts Zeno explaining his puzzles to the Athenians. First off is the fact that Hornschemeier gets the paradoxes right (something Doxiadis couldn't be bothered to do in Logicomix even though his co-author is a professional computer scientist - see the review above for details). Second, however, is his depiction of Socrates and Socrates' reaction to Zeno. After hearing about the paradox he exclaims:
"Man, no offense. But are you guys retarded?"
Then he asks what happens if Zeno is successful, and they all come to believe Zeno's views - views they didn't previously hold. He then answers his own question:
"Here is Athens we call that shit change. Change! What do they call that shit over in Elea?"
It's both hilarious and, in my opinion, completing in keeping with Socrates' character (insofar as we know what that is).
Good stuff. I recommend it.
[By the way, I have decided to let the offensive use of 'retarded' slide, since the comic is so great otherwise.]
## Sunday, 22 May 2011
### European formal philosophers, you've been warned...
...here.
(Folk over at Choice and Inference are very amused.)
UPDATE: Leiter has added an update to his original post.
It has come to my attention that one self-identified "European formal philosopher," who shares the concerns about the editorial misconduct at Synthese, has taken offense at my attempt to introduce a note of levity into this affair. So I hereby make clear that, of course, not all formal philosophers in Europe are willing to excuse the editorial misconduct; the title was prompted primarily with one particularly obtuse defender of the EICs in mind (Reinhard Muskens, a frequent commenter at the New APPS blog). I apologize for the unfair implication!
It that's about me, neither did I take offense nor am I a 'self-identified "European formal philosopher"', but the clarification is much appreciated (less so for the use of the term 'obtuse' here).
## Saturday, 21 May 2011
### How to Write Proofs, 2
In the earlier post, "How to Write Proofs, A Quick Guide", I gave the example of a simple kind of result that one might come across (and should know how to prove) in intermediate logic:
(*) Suppose $S_0$ is $P \rightarrow Q$ and $S_{n+1}$ is $P \rightarrow S_n$. Show that, for all $n$, $S_n$ is equivalent to $P \rightarrow Q$.
The obvious proof uses induction (on $n$, as one says). We want to show that: every number $n \in \mathbb{N}$ has a certain property: namely, that the formula $S_n$ is equivalent to $P \rightarrow Q$. We can show this by induction. Induction says that if a property holds of $0$, and holds of $k+1$ whenever it holds of $k$, then it holds of all numbers. So, induction proofs proceed in three steps. First (the base step) show the property holds of $0$. Second (the induction step) show that, assuming it holds of $k$, it also holds of $k+1$. Finally, conclude that it holds of all numbers.
$\textbf{Base}$. I.e., when $n = 0$.
$S_0$ is $P \rightarrow Q$. This is obviously equivalent to $P \rightarrow Q$.
$\textbf{Induction Step}$. I.e., from $k$ having the property, we want to show $k+1$ has it too.
So, suppose $S_k$ is equivalent to $P \rightarrow Q$. (This is the Induction Hypothesis.) We want to show that $S_{k+1}$ is equivalent to $P \rightarrow Q$ as well.
First, note that $S_{k+1}$ is defined to be $P \rightarrow S_k$. The Induction Hypothesis tells us that $S_k$ is equivalent to $P \rightarrow Q$. But we can use the simple lemma that "the substitution of equivalents leads to equivalents". So, we can conclude that $S_{k+1}$ is equivalent to $P \rightarrow (P \rightarrow Q)$. We only then need to show that this is equivalent $P \rightarrow Q$. I.e., that $P \rightarrow (P \rightarrow Q) \equiv P \rightarrow Q$. This can be done with a truth table. So, that completes the induction step.
From the Base Step and the Induction Step, we can conclude, using the Induction Principle, that for all $n$, $S_n$ is equivalent to $P \rightarrow Q$, as required. $\sf{QED}$.
## Thursday, 19 May 2011
### But WHAT is an open problem in mathematics?
UPDATE: This is the second post in a series of three on the same topic; the first one is here and the third one is here.
Two days ago I reported here on an ongoing debate over at the FOM list on some controversial statements made by Fields medalist Voevodsky on the status of the consistency of PA as a mathematical problem. In particular, I mentioned that Harvey Friedman had reported sending a message to Voevodsky, asking for clarifications: "how you view the usual mathematical proof that Peano Arithmetic is consistent, and to what extent and in what sense is "the consistency of Peano Arithmetic" a genuine open problem in mathematics."
Now Friedman reports that Voevodsky has replied:
Such a comment will take some time to write ...
To put it very shortly I think that in-consistency of Peano arithmetic as well as in-consistency of ZFC are open and very interesting problems in mathematics. Consistency on the other hand is not an interesting problem since it has been shown by Goedel to be impossible to proof [sic].
So, what do we make of this? I am now more and more convinced that Toby Meadows (in correspondence) has it right when he says that there are different senses of a 'mathematical open problem' floating around. Toby suggested a weak and a strong reading of 'open problem/question' in this context:
On a weak reading of "open question", we might understand it as a question that is worth asking ... perhaps a research programme worth taking up. In this case, the fact that so much mathematical practice presupposes Con(PA) is going to be good evidence that Con(PA) is not open on this way.
The strong reading would be "that it is not the case that the question has been settled one way or the other in such a way that it is "impossible" for it to be otherwise."
On the strong reading, Con(PA) is perhaps not an open question, as Goedel's results have shown that assuming Con(PA) we cannot prove Con(PA) with merely PA; this is exactly what Voevodsky seems to be saying. However, as again noted by Toby, it doesn't mean that ~Con(PA) is NOT an open question; we know we cannot prove Con(PA) in PA, but we don't know yet whether we can or cannot prove ~Con(PA) in PA! Admittedly, this is a *very* unlikely outcome, but there is indeed as of yet no proof that ~Con(PA) cannot be proved in PA. And this is precisely what Voevodsky seems to be saying, at least as I understand him.
Another point worth mentioning is that here he seems to be relying on a Hilbertian sense of proving the consistency of a system T: Con(T) is only proved if it is proved in T itself, and it is perhaps in this sense that he (wrongly, to my mind) dismisses Gentzen's proofs. (I have lots of thoughts on this too, including foundational worries of circularity; if T is unsound, it might well prove Con(T) even though that's not the case, in particular given that Sou(T) --> Con(T), and an unsound T may well be able to prove Sou(T). But that's for another time.)
Anyway, all this to raise the deeply philosophical question: what is an open problem in mathematics? Are there different senses of 'open problem' floating around? Might this be what is behind all the debate between FOM'ers? M-Phi'ers seem particularly well-positioned to discuss these matters, so please go ahead and shoot!
(On this note, I'm off to Rio tomorrow, so internet access is going to be disrupted. But you can of course debate without me!)
## Wednesday, 18 May 2011
### Peanuts and Platonism
Schulz, June 29, 1954.
[I recommend that you cllick on photo for larger, more legible version]
## Tuesday, 17 May 2011
### Voevodsky: "The consistency of PA is an open problem"
UPDATE: This is the first post in a series of three on the same topic; the second one is here and the third one is here.
Those of you who subscribe to the FOM list have certainly noticed that it's been "a lot more interesting than usual", in the terms of Archean Toby Meadows. For those of you who do not subscribe to the list (perhaps precisely for the reasons implicated in Toby's remark!), here's a summary (but you can read the messages yourself at the FOM archives for this month). There are some videos of lectures by Fields medalist Voevodsky made available on the internet, where (at least according to some) he seems to display a complete lack of understanding of Goedel's incompleteness results (here and here). In Neil Tennant's words:
He stated the theorem as follows (written version, projected on the screen):
It is impossible to prove the consistency of any formal reasoning system which is at least as strong as the standard axiomatization of elementary number theory ("first order arithmetic").
So he failed to inform his audience that the impossibility that Goedel actually established was the impossibility of proof-in-S of a sentence expressing the consistency of S, for any consistent and sufficiently strong system S.
As we know, Gentzen's proofs of the consistency of PA are among the most important results in proof-theory, second only to Goedel's results themselves and perhaps Prawitz' normalization results. (For a great overview of Gentzen's proofs, see von Plato's SEP entry.) What I find most astonishing about Gentzen's proofs, based on transfinite induction, is that the theory obtained by adding quantifier free transfinite induction to primitive recursive arithmetic is not stronger than PA, and yet it can prove the consistency of PA (it is not weaker either, obviously; they just prove different things altogether). One may raise eyebrows concerning transfinite induction (and apparently this is what lies behind Voevodsky's dismissal of Gentzen's results), but apparently most mathematicians and logicians seem quite convinced of the cogency of the proof. In Vaughan Pratt's words (he's my favorite regular contributor to FOM, and we've corresponded on a number of interesting topics):
Since Gentzen's proof is a reasonably straightforward induction on epsilon_0 in a system tailored to reasoning not about numbers under addition and multiplication but about proofs, one can only imagine Voevodsky rejects Gentzen's argument on the ground that Goedel's result must surely show that no plausible proof of the consistency of PA can exist, hence why bother thinking about any such proof?
So Voevodsky seems to seriously entertain the possibility of PA's inconsistency. Is it because he doesn't understand Goedels' results, or Gentzen's results, or both? Or is there something else going on? Even granting that a great mathematician is not necessarily savvy on such hair-splitting foundational issues, as pointed out by Juliette Kennedy, "that Voevodsky's coworkers in Univalent Foundations are Awodey and Coquand seals the deal for me." In other words, we have reasons to believe that Voevodsky should know his way around even in foundational issues (his website linked above says that "he is working on new foundations of mathematics based on homotopy-theoretic semantics of Martin-Lof type theories.")
H. Friedman reports having sent him a letter asking for clarifications, in particular "how you view the usual mathematical proof that Peano Arithmetic is consistent, and to what extent and in what sense is "the consistency of Peano Arithmetic" a genuine open problem in mathematics." So the debate is likely to continue, and (needless to say) it clearly has all kinds of important philosophical implications.
## Monday, 16 May 2011
### Roy's Fortnightly Puzzle: Volume 2
Sorensen's No-No paradox [2001] consists of two statements F1 and F2 such that (where H is the Godel code of formula H):
|- F1 <-> ~T(F2)
|- F2 <-> ~T(F1)
(loosely speaking, each statement is equivalent to the claim that the other is false). Now, given the F1 and F2 instances of the T-schema, we have:
|- F1 <-> ~ F2
Symmetry considerations suggest, however, that F1 and F2 ought to have the same truth value. Hence the 'paradox'.
The question I will set before you today is this. Assume we are working in a consistent extension of Q, and given a predicate P(x), consider two statements G1 and G2 such that:
|- G1 <-> P(G2)
|- G2 <-> P(G1)
Under what conditions is it the case that:
|- G1 <-> G2
In other words, when can we prove the symmetry claim?
Here is a start: If P is a provability predicate, then we can prove G1 <-> G2 (actually, we can prove both G1 and G2 individually!)
Harder?: With the set-up the same as the above, consider a 'Sorensen n-cycle':
|- G1 <-> P(G2)
|- G2 <-> P(G3)
|- G3 <-> P(G4)
: : : : : : :
|- Gn-1 <-> P(Gn)
|- Gn <-> P(G1)
Under what conditions can we prove:
|- G1 <-> G2
|- G2 <-> G3
: : : : : : :
|- Gn-1 <-> Gn
|- Gn <-> G1
Again, P being a provability predicate is sufficient.
Have at it.
[Important Disclaimer: I do not have a general solution to this. I do have some partial results that I will dole out if the question attracts sufficient interest.]
[Edited to correct typo noted by Shane]
## Sunday, 15 May 2011
### Synthese: the EiC's response to the petition
At least some of the pending issues seem to be settled by this (not all, though).
(The petition was here, and Leiter Reports is once again my source.)
## Thursday, 12 May 2011
### Philosophy and its technicalities
"There would be something badly wrong if work in the philosophy of physics were as accessible to a linguist as to a physicist, or if work in the philosophy of language were as accessible to a physicist as to a linguist."
It's from a brand new piece by James Ladyman in The Philosophers' Magazine on Philosophy that's not for the masses. Deserving.
## Wednesday, 11 May 2011
### Exciting new directions in formal semantics
(Cross-posted at New APPS)
The Amsterdam Colloquium is a bi-annual event focusing mostly (though not exclusively) on formal semantics and formal pragmatics. Its 18th installment will be held December 19 - 21, 2011 at (surprise, surprise!) the University of Amsterdam. The call for papers is out, and there will also be three thematic workshops: Formal semantics and pragmatics of sign languages, Formal semantic evidence andInquisitiveness.
Especially the first two workshops indicate that formal semantics and pragmatics as a field is moving in refreshingly new directions (and the third one is also bound to be interesting). The workshop on sign languages shows that the field is finally paying attention to the significant dissimilarities between languages expressed in different media: written, spoken, and in this case sign language. Following Roy Harris and others, I am convinced of the crucial importance of incorporating these differences into linguistic theorizing. To my mind, it is deeply misleading to speak of ‘natural language’ as a blanket term covering both speech and writing (and possibly sign languages too). Moreover, I am with Linell in identifying a chronic ‘written language bias’ in language studies in general, which means that features proper to speech (intonation, prosody) tend to be overlooked. Ironically, features proper to writing are also unduly ignored (as argued e.g. in this great paper by Sybille Kramer).
A workshop on sign language clearly indicates the realization that each form of human language must be studied also from the point of view of its specific medium and the features arising from using the medium in question (clearly, one of my motivations to insist on this aspect is my belief in the fruitfulness of embodied approaches in general). There is some work already being done within formal semantics and pragmatics on e.g. intonation (for instance, by my colleague Floris Roelofsen), and now that sign language is receiving attention in the field, it all seems to be going in a very good direction. Moreover, given that most researchers are not competent users of sign language themselves, such studies will necessarily have a strong empirical component, which brings me to the second workshop.
Working in Amsterdam, I’ve always been exposed to large amounts of formal semantics and pragmatics, and while I’ve always marveled at the technical ingenuity displayed, philosophically I always felt a bit uncomfortable with what exactly was going on. What is formal semantics a model of? Does it purport to describe actual cognitive processes of language users? Does it describe the ‘mathematical’ properties of language, considered as ontologically independent from speakers? (Roughly, this seems to have been Montague’s original take.) And as long as it is not clear what exactly the target phenomenon is for these theories, it is also not clear where the evidence to support or refute a given formal semantic theory should be coming from. So I worried about the kind of epistemic confirmation these theories were based on, and similarly about the explanatory power they could have (what exactly were they explaining?). In other words, I felt that formal semantics and pragmatics as a field was in dire need of serious methodological reflection. I was not alone there, as my boss Martin Stokhof, formerly a full-blown formal semanticist (in particular in his joint work with Jeroen Groenendijk: the logic of questions, dynamic predicate logic etc.), has focused extensively on the philosophical and methodological foundations of the enterprise in recent years, taking a rather critical stance (see his papers here; he is also supervising the dissertation of my friend and co-author Edgar Andrade-Lotero on the philosophical foundations of formal semantics).
Now, given the reservations that I’ve had through the years, I am thrilled to see a whole workshop at the Amsterdam Colloquium dedicated to methodological reflection on the foundations of formal semantics (not surprisingly, it is organized by my very talented colleagues Katrin Schulz and Galit Weidman Sassoon). From the description of the workshop:
Formal semantics as a field of linguists undergoes a rapid change with respect to the status of quantitative methodologies, the application of which is gradually becoming a standard in the field, replacing the good old 'armchair' methodology. In light of this development, we invite submissions reporting of high level formal semantic research benefiting from the use of a quantitative methodology, corpora-based, experimental, neurolinguistic, computational or other.
Clearly, this is a debate with deep philosophical implications, and particularly significant against the background of the sustained debates on methodology that have been taking place in philosophy in general over the last years. In other words, this post is not only intended as a plug for the Amsterdam Colloquium :) It is also intended to suggest that the field of formal semantics and pragmatics may be moving in exciting new directions; to my mind, if these directions continue to be pursued, the field will gain substantially in philosophical depth.
## Tuesday, 10 May 2011
### Two important blogs
One more post with feminist activism, and then I promise to give you all a break for a while! (^_^)
It occurred to me now that, after my previous post on gender imbalance, it would be good also to advertise two important blogs run by Jender of the Feminist Philosophers: What is it like to be a woman in philosophy?, with all kinds of depressing stories sent by readers (and some 'good news' stories too!), but also What we're doing about what it's like, with submissions on what people have been doing so as to improve the situation of women in philosophy.
So in a sense they form a 'bad news'/'good news' pair, and unsurprisingly, the bad news one gets a lot more submissions and readership. But lots of people have been doing all kinds of interesting things, and 'What we're doing' is an important source of ideas for those looking for inspiration on how to get involved with improving things. It deserves just as much attention as its 'bad news' sibling.
Let me also notice that, while most of my activism is in the feminist direction, I certainly keep in mind that there are all kinds of other minorities in philosophy and elsewhere facing similar difficulties. In particular, I think it's much harder for people working outside the mainstream axis North-America/Western Europe to be taken seriously in the profession; there are also strong 'geographical biases' operating, and in the medium run I'm hoping to be able to do something about this too.
## Sunday, 8 May 2011
### Competing geometries of needs
(via I love charts)
## Friday, 6 May 2011
### Super-Mathematics
And you thought super-ventriloquism was Superman's most useless power.
There are just so many things wrong with this.
### M-Phi and gender (im)balance
Philosophy has a notoriously skewed gender balance, and things tend to be even worse in more techy areas such as logic, philosophy of mathematics etc. Those of you who know me are probably aware that this is something I worry about a lot, for all kinds of reasons. For starters, poor gender balance is self-perpetuating: if there is a strong association between a certain area and men, which is reinforced precisely by the low number of women in the area, then young women who might otherwise consider pursuing their interests in the given area are very likely to feel unwelcome and uncomfortable, thus turning to areas and occupations that they feel are more 'congenial'. And so the cycle continues... Hoping that things will get better by themselves has proven to be an overly optimistic attitude, precisely in virtue of the self-perpetuating nature of the phenomenon.
So instead, I am convinced that it is only by making conscious efforts towards redressing gender imbalance that we are likely to make any progress (yes, affirmative action). It would be crucial for example to increase the visibility of the women who already do good work in a given field typically associated with 'maleness' so as to to counter the stereotype, and one way this can be done is by trying to increase the proportion of women as speakers at conferences. With this in mind, I have created a list of women working in philosophical logic and philosophy of logic, which is meant, among other things, to serve as a source of ideas for conference organizers. The list is being constantly updated, and suggestions for additions are always very welcome!
Another important measure is mentoring/coaching individual women at the early stages of their careers. It is probably very hard for men to understand that women have to counter a lot of biases to pursue their interests in a given 'male' area, including self-imposed biases. They often need to be explicitly told that they have what it takes, that they do belong in a given field, that they have good potential. Most women will profit from the kind of reassurance that might be seen as 'overkill' by men. So I ask you, fellow M-Phi'ers, to pay particular attention to the talented young women around you who are constantly confronted with feelings of inadequacy, and who could certainly benefit from some extra encouragement.
Well, there is lots more that I could write on the topic, but I will leave it at that for now. To be sure, I certainly don't intend to use this blog as a constant outlet for my feminist activism (fortunately, I already have New APPS for that!), but I thought it would be worth reminding the readers of this blog that this is an important issue, and one which requires a conscious effort to be addressed. We are so used to the situation that we usually don't even pause to think of its utter absurdity.
## Thursday, 5 May 2011
### Coherence not measured by probability
Mark Siebel has an interesting paper in the new Analysis arguing that the popular programme of measuring coherence by probability cannot succeed. The essence of his argument is that if G explains E better than H does, then {G,E} is more coherent than {H,E}, so any measure of coherence must give these two distinct measures. Yet there are such cases in which G and H are logically equivalent and for which the joint probability distributions of G and E and H and E will be the same, a consequence of which is that no probability measure will distinguish {G,E} and {H,E}. Seems pretty cogent.
## Wednesday, 4 May 2011
### Inconsistent Math Curricula Hurting US Students, Study Finds
Reading this headline in my inbox from Science Daily, I wrongly thought it might be about dialetheism, "Oh, so this is what happens when inconsistent mathematics is taught to students!". But, apparently not. It's about the effects of different levels of difficulty in mathematics curricula in different places.
A new study finds important differences in math curricula across U.S. states and school districts. The findings, published in the May issue of the American Journal of Education, suggest that many students across the country are placed at a disadvantage by less demanding curricula.
Interesting nonetheless.
## Tuesday, 3 May 2011
### The KK principle and empirical data
So, today I went up to Groningen to attend a lecture by T. Williamson, with the title "Very improbable knowledge". As it turns out, I didn't make it on time for the lecture, as there were massive train delays due to adventurous goats crossing the train-tracks, who then got hit by a train (I do feel sorry for the goats!). I arrived just in time for Q&A, and while I should probably not have had the nerve to ask a question, there was this question that just *had* to be asked.
At some point during Q&A, the KK principle came up, namely the principle that to know p entails to know that you know p (Kp --> KKp). Williamson was arguing against it (as he must have done during the lecture) on conceptual grounds, suggesting that there are situations where it simply does not seem to hold (some convoluted thought-experiment). His interlocutor at that point, Allard Tamminga, insisted that the KK principle is fundamentally correct, and it wasn't clear how the debate could be carried on any further.
So I asked Williamson whether he thought that a debate on the KK principle might benefit from attention to empirical data. He first thought that I meant carrying out surveys and asking people around whether they thought that the KK principle holds -- which sort of beats the point, as they may know the principle and yet not know that they know it! (duh...) I then clarified that I actually had psychological experiments in mind, in particular the kind of thing that has been investigated under the heading of meta-cognition in recent years (very cool stuff!). He was still not very enthusiastic about my idea, arguing that, just as psychological phenomena are not reliable guides for the truth of mathematical statements, they cannot be reliable guides for the truth of logical principles, and according to him KK is a logical principle.
But is it really? I admit that I tend to think that an awful lot of questions are ultimately empirical questions, but I take KK to be essentially about cognition, and thus not a logical principle as such (unless one wants to take the Kantian transcendental road to cognition and infer everything a priori). In fact, many of the results in the meta-cognition tradition seem to suggest that we often 'know' without knowing that we know (and similarly, that we often think we know when we actually do not know). In other words, the meta-cognition literature investigates, among other things, the accuracy with which we judge our own epistemic capacities. My hunch is that the results basically support the view that KK does not hold, but a more serious investigation would have to be carried out for more definitive claims. In any case, I think it is pretty obvious that it could be very interesting to look into the meta-cognition material from the point of view of KK.
I must say that I was surprised to see Williamson so dismissive of the possible relevance of empirical data for this issue. After all, in The Philosophy of Philosophy, he does enlist the mental models theory of reasoning to argue against the inferentialist program (something that Ole still owes us a paper on!). (Basically, his argument doesn't work, but at least it departs from the premise that empirical data could be relevant for such philosophical debates.) But today, he was not in any way sympathetic to the idea -- a shame, if you ask me.
## Monday, 2 May 2011
### Logic and external target phenomena
It is a real pleasure to have been invited to contribute to M-phi! As some of you may know, I also contribute to the New APPS blog, where I write about all kinds of things besides logic (feminism, philosophical methodology, current affairs). But from now on, I will be cross-posting my posts on logic and the more ‘formal’ parts of philosophy here; I look forward to debates with the knowledgeable readership of this blog :)
For my first post here, I’d like to discuss the very general question of what logical theories are theories of, if anything at all, and to inquire into the appropriate terminology to be used in such contexts. In a forthcoming JPL paper (co-authored with Edgar Andrade-Lotero), we start with the following remark:
Let us start with a fairly uncontroversial observation. Generally speaking, a logical system can be viewed from (at least) two equally important but fundamentally different angles: i) it can be viewed as a pair formed by a syntax, i.e. a deductive system, and a semantics, i.e. a class of mathematical structures onto which the underlying language is interpreted; or ii) it can be viewed as a triad consisting of a syntax, a semantics and the target phenomenon that the logic is intended to capture. In the first case, both syntax and semantics are viewed as autonomous mathematical structures, not owing anything to external elements. In the second case, both syntax and semantics are accountable towards the primitive target phenomenon, which may be an informally formulated concept, or even phenomena in the ‘real world’ (e.g. logics of action, logics of social interaction, quantum logic etc.). Indeed, in the second case, both syntax and semantics seek to be a ‘model’ in some sense or another of the target phenomenon.
In Chap. 12 of Doubt Truth to be a Liar Graham Priest draws a similar distinction between pure vs. applied logics. As I read him, the distinction is not really intended to differentiate logic systems as such, but rather to outline different attitudes one can have towards a logical system. He then goes on to argue that, from the applied logic point of view, the canonical application of logic is correct reasoning. In a similar vein, Paoli (JPL, 2005) draws on Quine’s distinction between immanent and transcendent predicates, and remarks:
According to Quine, in fact, logical connectives are immanent, not transcendent. There is no pretheoretical fact of the matter a theory of logical constants must account for; rather, the vicissitudes of a connective are wholly internal to a specified formal language, to a given calculus. There is nothing, in sum, that precedes or transcends formalization, no external data to “get right”.
The issue is very general and does not concern logical connectives in particular. The key opposition is between the internal features of a logical theory, and its potential relation with external phenomena, which the logical theory purports to be a model of. Although Quine (in Philosophy of Logic) seemed to mean something slightly different with his notions of immanent and transcendent predicates, I find Paoli’s appropriation of the terminology quite fitting.
My question now is: whenever there is something transcendent that a logical system is intended to capture, what is the appropriate terminology to refer to these external phenomena? In practice, the adjectives ‘pre-theoretical’ and ‘intuitive’ are often appended to whatever target phenomenon that a given logical system is ‘about’ (the ‘being about’ part is also what needs to be explained). Presumably, the idea is that the phenomenon is conceptually prior to its systematization within the theory, and this seems right according to the 'transcendent' approach, but there are problems.
Elsewhere, I have objected to the qualification of ‘pre-theoretical’ as attributed to the notion(s) of logical consequence which is (are) presumably the target of the familiar technical accounts of logical consequence (proof-theoretic, model-theoretic). The trouble with the terminology is that it suggests a theoretically neutral target phenomenon, emerging from ‘common, everyday’ practices (terms used in Tarski’s seminal 1936 paper). In truth, the notions in question are inherently couched in robust theoretical frameworks –-T. Smiley (1988) and P. Smith (2010) make similar points; the latter specifically criticizes Field’s misconception of what is ‘squeezed’ in a squeezing argument. My general worry is that, by describing these notions as ‘pre-theoretical’ and ‘intuitive’, we seem to be suggesting that they are transparent and unproblematic, whereas what is often required to make philosophical progress in these discussions is precisely a deeper understanding of the target phenomenon as such. (Shapiro’s and Prawitz’s papers in the 2005 Handbook are good attempts in this direction.)
So ‘pre-theoretical’ and ‘intuitive’ are problematic; what could possibly be used instead? I’ve been contemplating using ‘extra-systematic’ or ‘extra-theoretic’, but they don’t sound all the way right either. In a sense, perhaps there is no terminology to be used across the board, precisely because the target phenomena of different logical systems may be widely dissimilar kinds of phenomena. Some of them may come closer to what one could describe as ‘intuitive’ (e.g. the truth predicate as used in everyday language), while others will be grounded in a considerable amount of theorizing (e.g. the validity predicate, extensively discussed in blog posts recently – my general position is that it is a conceptual mistake to treat the truth predicate and the validity predicate on a par, even though there are interesting technical connections). So for the time being, I continue to use the vague and uninformative phrase ‘target phenomenon’, which is more of a place-holder, but this may well be what is required here.
(Alternatively, one may simply maintain that there are no target phenomena that logical systems seek to capture in any interesting way, i.e. that everything in a logical system is an immanent matter. Although frustrating for a variety of reasons, this remains an available move for the theorist.)
## Sunday, 1 May 2011
### Roy's Fortnightly Puzzle: Volume 1
Intuitionism and "Unless"
Anyone who has taught an introductory logic course will be familiar with the difficulties students have regarding how to properly translate "unless" into propositional logic. The correct translation of "F unless G" is:
T1. F v G
(Athough it is surprising how many supposedly professional logic teachers get this wrong). We often motivate this translation in terms of various inferences we make with "unless", taking advantage of the fact that T1 is logically equivalent to both:
T2. ~ F -> G
T3. ~ G -> F
in classical logic. By DeMorgans, the disjunctive translation of "unless' is also classically equivalent to:
T4. ~ (~ F & ~ G)
Now, here is the puzzle: How should the intuitionistic logician translate "F unless G"? Note that each of the four options is intuitionistically distinct from the other three.
The initial temptation to go with the simplest (and 'strongest') translation (T1) conflicts with the thought that the "un" in "unless" suggests the presence of negation. A more sophisticated approach is to examine inferences that intuitionists make with "unless" and determine which translation is supported. Now, all four translations support the following inferences:
I1a. F unless G, not-F, therefore not-not-G.
I2a. F unless G, not-G, therefore not-not-F.
The question then becomes whether "unless" intuitionistically supports the following stronger inference rules:
I1b. F unless G, not-F, therefore G.
I2b. F unless G, not-G, therefore F.
T1 supports both inferences, T2 supports I1b but not I2b, T3 supports I2b but not I1b, and T4 supports neither I1b nor I2b.
So, how should intuitionists translate "unless"?
### Roy's Fortnightly Puzzle: Introduction
Okay, I am starting a new feature on M-Phi, and will attempt to keep it going as long as I have ideas.
I will post a simple puzzle involving logic or the philosophy of mathematics that is either a little to 'cute' to actually publish, or which is something I thought about but didn't make any substantial progress on.
A new puzzle will be posted every two weeks (roughly), as the title suggests.
The hope, of course, is that these simple puzzles will motivate us to think about issues in a new way. Of course, if that doesn't happen, and they are merely fun, then that works too.
|
2018-01-23 01:59:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5990012884140015, "perplexity": 1312.3230196412221}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891705.93/warc/CC-MAIN-20180123012644-20180123032644-00455.warc.gz"}
|
https://physics.stackexchange.com/questions/154464/quantum-cloning-of-orthonormal-states
|
Quantum cloning of orthonormal states
If I understand correctly, for two orthonormal states $\left|\psi_1\right\rangle$ and $\left|\psi_2\right\rangle$ in the Hilbert space H, there must exist a unitary transformation $U$, such that:
$$U\left|\psi_1\right\rangle\left|\alpha\right\rangle = \left|\psi_1\right\rangle\left|\psi_1\right\rangle$$
$$U\left|\psi_2\right\rangle\left|\alpha\right\rangle = \left|\psi_2\right\rangle\left|\psi_2\right\rangle$$
where $\left|\alpha\right\rangle$ is the initial state of the second subsystem, on which orthonormal states are cloned.
My question is: where can I find the reference for the formal proof of this folklore lemma?
Before answering, please see our policy on resource recommendation questions. Please write substantial answers that detail the style, content, and prerequisites of the book, paper or other resource. Explain the nature of the resource so that readers can decide which one is best suited for them rather than relying on the opinions of others. Answers containing only a reference to a book or paper will be removed!
I'm not sure about a reference for this lemma, but maybe this will help.
$$\left|\psi_1\right\rangle\left|\alpha\right\rangle, \left|\psi_2\right\rangle\left|\alpha\right\rangle$$ are an orthonormal basis of a two-dimensional subspace of your initial Hilbert space. The $$U$$ in your equations (it shouldn't be on the right side, by the way) explicitly maps this to orthonormal basis vectors of a subspace of your final Hilbert space. This restricted $$U$$ is already unitary.
Now choose any unitary mapping between the orthogonal complements of these two-dimensional subspaces and the whole map together will be unitary.
• Thanks for pointing out the editing issue - yes, $U$ should not be on the right side. I have edited that. – user36125 Dec 22 '14 at 8:27
Though the question was asked almost 2 years ago, yet since it hasn't been closed yet, maybe the following answer will be helpful.
1. $$U|\psi_1\rangle|\alpha\rangle = |\psi_1\rangle|\psi_1\rangle$$
2. $$U|\psi_2\rangle|\alpha\rangle = |\psi_2\rangle|\psi_2\rangle$$
take the inner product of (1) with (2):
left-hand side: $$\langle \psi_1|\langle \alpha|U^\dagger U |\psi_2\rangle|\alpha\rangle = \langle \psi_1|\langle \alpha |\psi_2\rangle|\alpha\rangle = \langle \psi_1|\psi_2\rangle \langle \alpha |\alpha\rangle = \langle \psi_1|\psi_2\rangle$$
right-hand side: $$\langle \psi_1| \langle \psi_1| \psi_2\rangle |\psi_2 \rangle = \langle \psi_1|\psi_2\rangle ^2$$
Notice that the results on the left-hand side and on the right-hand side differ while they should be equal if cloning were possible. In fact, they are equal only in 2 cases:
1. If the states $|\psi_1\rangle$ and $|\psi_2\rangle$ are orthogonal, i.e. $\langle \psi_1|\psi_2\rangle = 0$
2. If the states $|\psi_1\rangle$ and $|\psi_2\rangle$ are identical, i.e. $\langle \psi_1|\psi_2\rangle = 1$
In other words, only orthogonal states can be cloned by the same unitary transformation.
You may also want to refer to "Introduction to Quantum Physics and Information Processing" by Radhika Vathsan where this proof is presented in slightly different form.
|
2019-05-25 08:59:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8290558457374573, "perplexity": 234.47571005902358}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257939.82/warc/CC-MAIN-20190525084658-20190525110658-00286.warc.gz"}
|
https://tex.stackexchange.com/questions/222551/how-can-i-best-impose-image-sizing-rules-for-all-figures
|
How can I best impose image sizing rules for all figures?
I want all my images to be the same size. Is there a nice way to impose this rule in the preamble?
\begin{figure}[htbp]
\centering
\includegraphics[width=6cm]{my_image}
\caption{describe my image}
\label{fig:my_image}
\end{figure}
\let\oldincludegraphics\includegraphics
\renewcommand{\includegraphics}[2][]{%
\oldincludegraphics[<default settings>,#1]{#2}}
to your document preamble (after loading graphicx), you can set up default settings to be included for every figure.
For example, you can change <default settings> to be width=6cm, height=3cm (say) and all figures will have a width of 6cm and a height of 3cm, unless you specify other local options to override this. That is, the above redefinition still allows you to use \includegraphics[width=2cm]{<image>} which will set <image> to have width 2cm (and height 3cm).
The above method is similar to adding
\setkeys{Gin}{<default settings>}
to your preamble, which sets the default key-values for the Gin family (used for the inclusion of Graphics.
• Thanks that's works great but for one thing. When I try to override it on my front page logo with \includegraphics[width=2cm]{<image>} this does not seem to work. Any ideas why that might be? – Magpie Jan 10 '15 at 20:15
• @Magpie: I tried both approaches and they allow for local overriding. You must be doing something different... – Werner Jan 10 '15 at 21:10
• Yes, I missed out the ,#1 bit somehow. Thanks for clarifying. – Magpie Jan 28 '15 at 13:18
|
2019-11-13 05:48:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8364595770835876, "perplexity": 1734.771181256786}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665985.40/warc/CC-MAIN-20191113035916-20191113063916-00223.warc.gz"}
|
http://www.leadinglesson.com/problem-on-the-right-hand-rule
|
## Problem on the right hand rule
The cross product $\langle1,1,0\rangle \times \langle 0,0,1\rangle$ is either in the direction of $\langle 1, -1, 0\rangle$ or $\langle -1, 1, 0 \rangle$. Without computation, which is it?
• ## Solution
Recall that
Align the fingers of your right hand in the direction of $\langle1,1,0\rangle$. If you can curl them $90$ degrees toward $\langle0,0,1\rangle$, your thumb will be pointing in the direction of $\langle1,-1,0\rangle$.
Hence, the cross product is in the direction of $\langle 1, -1, 0\rangle$.
|
2018-06-18 09:37:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7129465341567993, "perplexity": 213.0296810374039}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267860168.62/warc/CC-MAIN-20180618090026-20180618110026-00435.warc.gz"}
|
http://www.showmyiq.com/forum/viewtopic.php?f=38&p=1225
|
Stuck on some Level? Ask for a hint …
If you are an agent with variable name, come along and shoot your questions
Re: Stuck on some Level? Ask for a hint …
aww... but there are hints for lvl 11 and 12! why not 13 hahah!
|
2020-04-02 20:53:06
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8231028914451599, "perplexity": 2373.8612767215423}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370508367.57/warc/CC-MAIN-20200402204908-20200402234908-00159.warc.gz"}
|
http://physexams.com/exam/Electromagnetic_Induction/7
|
# Electromagnetic Induction Questions
### Questions
In the figure a magnetic field $\vec{B}=3t^2\left(-\hat{k}\right)$ is perpendicular to the rectangular loop of sides $1 \,{\rm m}$ and $2\, {\rm m}$ and total resistance $R=3\,\Omega$
(a) Find the magnetic flux through the loop
(b) Find the magnitude of the induced current at $t=2\ {\rm sec}$.
(c) Show the direction of the induced current on the figure.
(a) The magnetic flux through any loop is defined as ${\Phi }_M=\vec{B}.\vec{A}$. so
${\Phi }_M=3t^2\left(-\hat{k}\right).2\left(-\hat{k}\right)=6t^2$
(b) Faraday's law states that the induced emf in any closed loop (circuit) is the negative of the time rate of change of the magnetic flux through it i.e.
${\mathcal E}=-\frac{{\rm d}{\Phi }_M}{{\rm d}t}=-12t$
So the induce current in the loop is $I_{ind}=\left|{\mathcal E}\right|/R\$so
$I_{ind}=\frac{12t}{3}=4{\left.t\right|}_{t=2{\rm s}}=8\ {\rm A}$
Since the magnetic field $B$ is increasing, the magnetic flux through the loop is also increasing. Lenz's law states that the direction of the induced current is such that the induced magnetic field opposes the original flux change. The induced current must be in counter-clockwise.
(c) see part (b)
A circular loop of wire has a $15\,\Omega$ resistance and a radius of $40.0\ {\rm cm}$. There is initially an external magnetic field of $3.0\$Tesla going down through the loop into the page. At $t=0$, the external magnetic field begins to change in such a way that after $8\ {\rm s}$ it has a magnitude of $1\ {\rm T}$ and is pointing up through the loop out of the page). During the interval $t=0$ through $t=8\ {\rm s}$,
(a) Calculate the magnitude of the average induced current through the loop, and
(b) Determine the direction of the average induced current through the loop.
(c) Briefly explain why I had to add the word average'' in (a) and (b) above.
A tightly wrapped circular coil of radius $r=0.18\ {\rm m}$ with $100$ turns (wraps) of wire, is sitting on a table. The ends of the wire are attached to a small light bulb whose filament has a resistance of $R=375\ \Omega$. A magnetic field $\vec{B}$, which the entire coil feels, is then turned on. The field makes an angle of $37{}^\circ$ with respect to the normal to the coil and its magnitude changes according to the following graph.
(a) What is the flux $\Phi$ of the magnetic field through a single wrap of the coil at the instant $t=0.100\ {\rm s}$?
(b) What is the time rate of change of the flux through the entire coil from $t=0s$ to $0.1s$?
(c) What is the induced emf during the period from $t=0\, {\rm s}$ to $t=0.1\ {\rm s}$?
(d) What induced current flows through the filament of the light bulb during the interval from $t=0\, {\rm s}\$to $t=0.1\,{\rm s}$?
(e) Looking at the coil above the table, is the current flow through the coil clockwise or counter-clockwise?
(f) From the time $t=0.1\,{\rm s}$ to $t=0.150\, {\rm s}$, what induced emf is developed across the ends of the filament?
A rectangular loop with width $L$ and a slide wire with mass $m$ are as shown in the figure. A uniform magnetic field $\vec{B}$ is directed perpendicular to the plane of the loop into the plane of the figure. The slide wire is given an initial speed of $v_0$ and then released. There is no friction between the slide wire and the loop, and the resistance of the loop is negligible in comparison to the resistance $R$ of the slide wire.
(a) Obtain an expression for $F$, the magnitude of the force exerted on the wire while it is moving at speed $v$
(b) Show that the distance $x$ that the wire moves before coming to rest is $x=mv_0R/a^2B^2$
(a) Due to the change of flux (as the slides moves the area of the loop varies), there is an induced current in the slide wire. By definition,
$${\mathcal E}=-\frac{d}{dt}{\Phi }_B=-\frac{d}{dt}\left(\vec{B}.\hat{n}dA\right)=-B\frac{dA}{dt} \ \text{where} \ A=Lx$$
$\Longrightarrow \ {\mathcal E}=-BL\frac{dx}{dt}=-BLv$
From Ohm's law, we obtain the induced current in the slide wire:
$i_{induced}=\frac{{\mathcal E}}{R_{wire}}=\frac{BLv}{R}$
In above the minus sign is omitted, this minus is a sign of Len's law!
When the sliding wire moves to the right, the flux through the loop is increasing so by Lenz's law the direction of induced current must be counter clockwise i.e. $id\vec{l}=iL\hat{j}$. We have a current carrying wire in a uniform magnetic field, so the force exerted on it is,
$\vec{F}=id\vec{l}\times \vec{B}=iLB\ \left(\hat{j}\times \left(-\hat{k}\right)\right)=-iLB\hat{i}$
$\Longrightarrow \vec{F}=\frac{B^2L^2v}{R}\left(-\hat{i}\right)$
(b) Use the Newton's 2${}^{nd}$ law and definition of the instantaneous acceleration and some manipulation to find instantaneous velocity of the wire as follows:
$\vec{F}=m\vec{a}=m\frac{dv}{dt}\ \Longrightarrow \ -\frac{B^2L^2v}{R}=m\frac{dv}{dt}\to -\frac{B^2L^2}{Rm}dt=\frac{dv}{v}$
$\int^v_{v_0}{\frac{dv}{v}}=-\int{\frac{B^2L^2}{mR}dt}\to \ {\ln \frac{v}{v_0}\ }=-\frac{B^2L^2}{mR}t\$
$\Longrightarrow v=v_0\,{\exp \left(-\frac{B^2L^2}{mR}t\right)\ }$
We can check its correctness by substituting the initial and final time into it:
$t=0\to v=v_0\$and $t\to \infty \ \Longrightarrow v=v_0\,e^{-\infty }\to 0$
Now use the definition of the instantaneous velocity to determine the desired distance $x$
\begin{align*}
v=\frac{dx}{dt}\to x &= \int{v\left(t\right)dt}=v_0\int{{\exp \left(-\frac{B^2L^2}{mR}t\right)\ }dt}\\
&=-\frac{mRv_0}{B^2L^2}{\left.{\exp \left(-\frac{B^2L^2}{mR}t\right)\ }\right|}^t_0=\frac{mRv_0}{B^2L^2}\left(1-e^{-\frac{B^2L^2}{mR}t}\right)
\end{align*}
In the limit that $t$ becomes infinitesimally small $t\to \infty$ we get $\ x\left(t\right)=\frac{mR}{B^2L^2}v_0$.
A circular loop of wire is in a region of spatially uniform magnetic field, as shown in the figure below. The magnetic field is directed into the plane of the figure. Determine the direction (clockwise or counterclockwise) of the induced current in the loop when
(a) B is increasing;
(b) B is decreasing;
(c) B is constant with value $B_0$ Explain your reasoning.
A rectangular loop of wire with mass $m$, width $w$, vertical length $l$, and resistance $R$ falls out of a magnetic field under the influence of gravity. The magnetic field is uniform and out of the paper ($\vec{B}=B\hat{x}$) within the area shown (see sketch) and zero outside of that area. At the time shown in the sketch, the loop is exiting the magnetic field at speed $\vec{V}\left(t\right)=V\left(t\right)\hat{z}$ , where ($V\left(t\right)<0$ meaning the loop is moving downward, not upward). Suppose at time $t$ the distance from the top of the loop to the point where the magnetic field goes to zero is $z(t)$ (see sketch)..
(a) What is the relationship between $V(t)$ and $z(t)$? Be careful of your signs here, remember that $z(t)$ is positive and decreasing with time, so $dz(t)/dt<\ 0$.
(b) If we define the area vector $\vec{A}$ to be out of the page, what is the magnetic flux $f_B$ through our circuit at time $t$ (in terms of $z(t)$, not $V(t)$).
(c) What is $d\Phi_B/dt$ ? Is this positive or negative at time $t$? Be careful here, your answer should include $V(t)$(not $z(t)$and remember that $(t)<\ 0$ ).
(d) What is the direction (clockwise or counterclockwise) and magnitude of the induced current on the loop of wire?
(e) What is the direction (into the page or out of the page) of the self-magnetic field due to the induced current inside the circuit loop?
(f) Besides gravity, what other force acts on the loop in the $\pm \ z$-direction? Give the magnitude and direction of this force in terms of the quantities given. (Hint: use $d\vec{F}=Id\vec{L}\times \vec{B}$)
(g) What is the magnitude of the terminal velocity?
Determine the magnitude of the mutual inductance between an infinite straight wire and a circular loop of radius $R$, that has its center a distance $d$ from the axis of the straight wire ($d>R$)
Hint: $\int^{2\pi}_0{\frac{d\theta}{a+b{\cos \theta\ }}}=\frac{2\pi}{\sqrt{a^2-b^2}}$
Apply a current $I_1$ in the straight wire and find the flux passing through the loop.
From Ampere's law $\oint{\vec B.d\vec \ell}=\mu_0I\$ the magnetic field produced by straight wire at distance $r$ is $B_1=\frac{\mu_0I_1}{2\pi r}$. Using the definition of flux passing through the loop, we have
${\Phi }_{21}=\int {\vec{B}}_1.\hat{n}\ da=\frac{\mu_0I_1}{2\pi}\int^R_0{\rho d\rho}\int^{2\pi}_0{\frac{d\theta}{r}}$
Where $r=d+\rho\,{\cos \theta\ }$ and $da=\rho d\rho \,d\theta$ is the area element in polar coordinates.(see figure). ${\Phi }_{21}$ is the flux produced by the current $I_1$ and passed the loop.
Using the hint $\int^{2\pi}_0{\frac{d\theta}{d+\rho\,{\cos \theta\ }}=2\pi/\sqrt{d^2-\rho^2}}$ ,one can show that ${\Phi }_{21}=\mu_0I_1\ \int^R_0{\frac{\rho d\rho}{\sqrt{d^2-\rho^2}}}\ \$
For determining the integral above we use the change of variables
let $u\equiv d^2-\rho^2$ , $\rho d\rho=-\frac{du}{2}$
$\Longrightarrow {\Phi }_{21}=\mu_0I_1\ \int^R_0{\frac{\rho d\rho}{\sqrt{d^2-\rho^2}}}=-\frac{1}{2}\mu_0I_1 \int^{d^{2}-R^{2}}_{d^{2}}{\frac{du}{\sqrt{u}}}\ =\mu_0I_1 \left(d-\sqrt{d^2-R^2}\right)$
so the mutual inductance, by definition, is
$M_{21}=\frac{{\Phi }_{21}}{I_1}=\mu_0\left(d-\sqrt{d^2-R^2}\right)$
A slide wire generator is shown in the figure where there is a perpendicular constant magnetic field $B$, uniform everywhere. Assume that the total resistance of the rectangular loop to be only due to the sliding part given by $R$. Now, if the sliding rod (length $L$) is pulled at a constant velocity $v$ shown in the figure, find:
(a) The rate at which energy is dissipated around this loop.
(b) The rate at which mechanical work is done to move the rod through the magnetic field. The dimensions of the stationary parts of the loop are not specified.
A loop of wire with $N=100$ turns and a radius of $3\ {\rm m}$ is situated in a decreasing magnetic field of $1\ {\rm T/sec}$ whose direction is depicted below.
(a) What is the direction of the induced current? Draw the direction of the diagram and explain your reasoning.
(b) What is the magnitude of the current if the resistance of the loop is $10\ \Omega$?
What is the emf induced in a solenoid with an inductance of ${\rm 0.25\ H}$ if the current is uniformly reduced from ${\rm 2.0\ A}$ to ${\rm 0\ A}$ in ${\rm 1/16}$ of a second?
A conducting bar of length $D$ rotates with angular frequency $\omega$ about a pivot $P$ at one end of the bar (see the figure at right). The other end of the bar is in slipping contact with a stationary conducting wire in the shape of a circle (we only show part of that circle to keep the drawing simple). Between point $P$ and the circular wire there is a resistor $R$ as shown. Thus the bar, the resistor, and the wire form a closed conducting loop. The resistance of the bar and the circular wire are negligibly small.
There is a uniform magnetic field $B$ perpendicular to the plane of the conducting wire, as shown. What is the induced current in the loop? Express your answer in terms of $D,\ \omega,\ R,\$and $B$.
A rail gun projectile launcher'' is shown at right. A large current moves in a closed loop composed of fixed rails, a power supply, and a very light, almost frictionless bar touching the rails. A magnetic field is perpendicular to the plane of the circuit.
(a) If the bar has a length ${\rm d=24\ cm}$, a mass of ${\rm 1.5\ g}$, and is placed in a field of ${\rm 1.8\ T}$, what constant current flow is needed to accelerate the bar from rest to a speed of ${\rm 25\ m/}{\rm s}$ in a distance of ${\rm 1.0\ m}$?
(b) In what direction must the field point?
Category : Electromagnetic Induction
Most useful formula in Waves:
$\mathcal E=-\frac{d\Phi_B}{dt}$
induced electric field:
$\oint \vec E \cdot d\vec \ell=-\frac{d\Phi_B}{dt}$
$emf$ produced in a conductor with length $L$ moves in uniform $B$ with velocity $v$
$\mathcal E=vBL$
Magnetic flux through surface $A$:
$\Phi_B=\vec B \cdot \vec A$
Definition of self inductance:
$L=\frac{\Phi_B}{I}$
Self-inductance of a solenoid of length $\ell$, cross sectional area $A$ and number of turns per unit length $n$:
$L=\mu_0 n^2 A \ell$
Energy stored in an inductor:
$U=\frac{1}{2}LI^2$
Number Of Questions : 12
|
2017-08-16 21:52:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.744707465171814, "perplexity": 214.21636486149538}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102663.36/warc/CC-MAIN-20170816212248-20170816232248-00533.warc.gz"}
|
https://visimphysics.com/en/mechanics/pressure/
|
## Exercise 3.1 A
Investigate the effect of the depth of the tube head on the height of the water columns. Show the results graphically. Which conclusions can you draw from the results?
Answer: Pressure in water (hydrostatic pressure) grows when going deeper. Hydrostatic pressure increases linearly when going deeper.
## Exercise 3.1 B
Explore the dependence of gas volume and pressure on a graphical representation.
Answer: It is noticed that the pressure is inversely proportional to the volume, $p\propto\frac{1}{V}$
## Exercise 3.1 C
How much pressure was in the balloon? The red liquid in the tube is water.
|
2023-03-20 21:53:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48445823788642883, "perplexity": 1066.8231365325425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943562.70/warc/CC-MAIN-20230320211022-20230321001022-00461.warc.gz"}
|
https://forum.allaboutcircuits.com/threads/which-is-the-best-amplifier-rate-and-comment.32569/
|
# Which is the best amplifier : rate and comment
#### simeonz11
Joined Apr 29, 2008
98
poo poo poo
Last edited:
#### beenthere
Joined Apr 20, 2004
15,808
Not enough information. What is the frequency of the sine wave? What kind of load will it drive?
#### t06afre
Joined May 11, 2009
5,936
I think should go for an dedicated audio poweramp IC. How many watt do you need?
#### simeonz11
Joined Apr 29, 2008
98
Lol sorry for that obvious mistake , forgot to mention .
it will drive a 1:1 isolation transformer in the full audio ranges 300hz to 20000hz , so an inductive and resistive load .
A maximum of 5 amps .
Yes I do expect alot of heat and am prepared with big heatsinks , so about 100 watts .
Last edited:
#### t06afre
Joined May 11, 2009
5,936
You say max 5 A, are you sure you will be able to to create such a current with +/- 30 volt? Do you have some inforation about your transformer.
#### simeonz11
Joined Apr 29, 2008
98
You say max 5 A, are you sure you will be able to to create such a current with +/- 30 volt? Do you have some inforation about your transformer.
Unfortunately ,I still do not .
Probably a heavy duty 1:1 audio transformer that fits this frequency range , still looking around .
The components are just there as an example besides the op amp.
Last edited:
#### t06afre
Joined May 11, 2009
5,936
I see some problem here. First of all I think it would be best to find the transformer first. Then base your design around this transformer. Now it is to many loose ends. You may end up with a useless design.
#### simeonz11
Joined Apr 29, 2008
98
I see some problem here. First of all I think it would be best to find the transformer first. Then base your design around this transformer. Now it is to many loose ends. You may end up with a useless design.
What would those problems be so I can better inform myself ?
I am still undecided , most are expensive . I expect 5 amps with an inductive and resistive load with one of those center tapped heavy duty transformers.
My transistor gains and op amp rating looks good enough tho , the transistor pair I will choose have a gain of 50-70 rated for 15 amps
Last edited:
#### t06afre
Joined May 11, 2009
5,936
The problem is the transformer. You have a quite wide frequency range. So you might experience a phenomenon named transformer core saturation http://www.opamp-electronics.com/tutorials/ac_theory_ch_009.htm
On the bright there are is many small firms that have specialized in making custom transformers. You should call such a firm give them your specification. They will be able to advice you. Do this before you put to much effort into your project
#### Audioguru
Joined Dec 20, 2007
11,251
The output transistors must have typical gain or more or they will not be able to drive an output of 5A when driven from the max 200mA output of the opamp.
The single diode operates the output transistors in pure class-B which produces pretty bad crossover distortion.
Audio amplifiers operate in class-AB with two diodes or a Vbe multiplier transistor and the output transistors have emitter resistors. The output transistors conduct a small current all the time.
The opamp is fast so it will try to reduce the crossover distortion especially the voltage-follower where all the gain of the opamp is used as negative feedback to almost eliminate the crossover distortion.
#### simeonz11
Joined Apr 29, 2008
98
Hi Audioguru , thx for commenting .
So whats stopping me from just using a voltage follower with nothing but the 2 transistors and some feedback .
I heard it doesnt do so good and I also heard it eliminates distortion . So whats true ?
It sure would be the simplest and most efficient solution if it did work , whats your opinion ?
Last edited:
#### Audioguru
Joined Dec 20, 2007
11,251
Why don't you use a class-AB audio amplifier IC that is designed to produce extremely low distortion?
This simple class-B circuit has crossover distortion.
#### SgtWookie
Joined Jul 17, 2007
22,201
Why don't you just get something like a cheap pair of TDA7254 audio amps (about $5/ea), feed one a noninverted signal, feed the other an inverted signal, and connect the outputs to the primary of your transformer? It won't need to be center tapped that way. Avnet Express has them for under$3:
http://avnetexpress.avnet.com/store/em/EMController/Audio-Amplifier/STMicroelectronics/TDA7294V/_/R-1559185/A-1559185/An-0?action=part&catalogId=500201&langId=-1&storeId=500201&listIndex=-1
Mouser has them for about $4.50 or so. Digikey sells them for around$5.80.
You'll probably wind up needing a broadband toroidal transformer; a fairly large one. Saturation at the lower end of the frequency range will be a problem.
#### simeonz11
Joined Apr 29, 2008
98
Why don't you use a class-AB audio amplifier IC that is designed to produce extremely low distortion?
This simple class-B circuit has crossover distortion.
Can this be used @ +-24 volts ?
Can this drive 200mA , that is my absolute minimum ?
I need something simple and cheap , 5 dollars is too much , I need 6 of these 200 mA amps
#### simeonz11
Joined Apr 29, 2008
98
Looks like an efficient compromise
Last edited:
#### Audioguru
Joined Dec 20, 2007
11,251
Crossover distortion is at high audio frequencies. Opamps have extremely high gain at very low frequencies but not much gain at high audio frequencies so the negative feedback reduces but does not eliminate crossover distortion.
It is not difficault to properly bias the output transistors in class-AB so there is no cossover distortion.
|
2019-10-23 23:38:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47369176149368286, "perplexity": 3261.977231521875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987836368.96/warc/CC-MAIN-20191023225038-20191024012538-00371.warc.gz"}
|
https://cran.csiro.au/web/packages/enrichwith/vignettes/bias.html
|
# Introduction
The enrichwith package provides the enrich method to enrich list-like R objects with new, relevant components. The resulting objects preserve their class, so all methods associated with them still apply.
This vignette is a short case study demonstrating how enriched glm objects can be used to implement a quasi Fisher scoring procedure for computing reduced-bias estimates in generalized linear models (GLMs). Kosmidis and Firth (2010) describe a parallel quasi Newton-Raphson procedure.
# Endometrial cancer data
Heinze and Schemper (2002) used a logistic regression model to analyse data from a study on endometrial cancer. Agresti (2015 Section 5.7) provide details on the data set. Below, we fit a probit regression model with the same linear predictor as the logistic regression model in Heinze and Schemper (2002).
# Get the data from the online supplmementary material of Agresti (2015)
data("endometrial", package = "enrichwith")
modML <- glm(HG ~ NV + PI + EH, family = binomial("probit"), data = endometrial)
theta_mle <- coef(modML)
summary(modML)
##
## Call:
## glm(formula = HG ~ NV + PI + EH, family = binomial("probit"),
## data = endometrial)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.47007 -0.67917 -0.32978 0.00008 2.74898
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 2.18093 0.85732 2.544 0.010963 *
## NV 5.80468 402.23641 0.014 0.988486
## PI -0.01886 0.02360 -0.799 0.424066
## EH -1.52576 0.43308 -3.523 0.000427 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 104.90 on 78 degrees of freedom
## Residual deviance: 56.47 on 75 degrees of freedom
## AIC: 64.47
##
## Number of Fisher Scoring iterations: 17
As is the case for the logistic regression in Heinze and Schemper (2002), the maximum likelihood (ML) estimate of the parameter for NV is actually infinite. The reported, apparently finite value is merely due to false convergence of the iterative estimation procedure. The same is true for the estimated standard error, and, hence the value r round(coef(summary(modML))["NV", "z value"], 3) for the $$z$$-statistic cannot be trusted for inference on the size of the effect for NV.
In categorical-response models like the above, the bias reduction method in Firth (1993) has been found to result in finite estimates even when the ML ones are infinite (see, Heinze and Schemper 2002, for logistic regressions; Kosmidis and Firth 2011, for multinomial regressions; Kosmidis 2014, for cumulative link models).
One of the variants of that bias reduction method is implemented in the brglm R package, which estimates binomial-response GLMs using iterative ML fits on binomial pseudo-data (see, Kosmidis 2007, Chapter 5, for details). The reduced-bias estimates for the probit regression on the endometrial data can be computed as follows.
library("brglm")
## Loading required package: profileModel
## 'brglm' will gradually be superseded by 'brglm2' (https://cran.r-project.org/package=brglm2), which provides utilities for mean and median bias reduction for all GLMs and methods for the detection of infinite estimates in binomial-response models.
modBR <- brglm(HG ~ NV + PI + EH, family = binomial("probit"), data = endometrial)
theta_brglm <- coef(modBR)
summary(modBR)
##
## Call:
## brglm(formula = HG ~ NV + PI + EH, family = binomial("probit"),
## data = endometrial)
##
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 1.91460 0.78877 2.427 0.015210 *
## NV 1.65892 0.74730 2.220 0.026427 *
## PI -0.01520 0.02089 -0.728 0.466793
## EH -1.37988 0.40329 -3.422 0.000623 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 93.983 on 78 degrees of freedom
## Residual deviance: 57.587 on 75 degrees of freedom
## AIC: 65.587
The $$z$$-statistic for NV has value 2.22 when based on the reduced-bias estimates, providing some evidence for the existence of an effect.
In the following, we use enrichwith to implement two variants of the bias reduction method via a unifying quasi Fisher scoring estimation procedure.
# Quasi Fisher scoring for bias reduction
Consider a parametric statistical model $$\mathcal{P}_\theta$$ with unknown parameter $$\theta \in \Re^p$$ and the iteration $\theta^{(k + 1)} := \theta^{(k)} + \left\{i(\theta^{(k)})\right\}^{-1} s(\theta^{(k)}) - c(\theta^{(k)}) b(\theta^{(k)})$ where $$\theta^{(k)}$$ is the value of $$\theta$$ at the $$k$$th iteration, $$s(\theta)$$ is the gradient of the log-likelihood for $$\mathcal{P}_\theta$$, $$i(\theta)$$ is the expected information matrix, and $$b(\theta)$$ is the $$O(n^{-1})$$ term in the expansion of the bias of the ML estimator of $$\theta$$ (see, for example, Cox and Snell 1968).
The above iteration defines a quasi Fisher scoring estimation procedure in general, and reduces to exact Fisher scoring for ML estimation when $$c(\theta^{(k)}) = 0_p$$, where $$0_p$$ is a vector of $$p$$ zeros.
For either $$c(\theta) = I_p$$ or $$c(\theta) = \left\{i(\theta)\right\}^{-1} j(\theta)$$, where $$I_p$$ is the $$p \times p$$ identity matrix and $$j(\theta)$$ is the observed information, $$\theta^{(\infty)}$$ (if it exists) is a reduced-bias estimate, in the sense that it corresponds an estimator with bias of smaller asymptotic order than that of the ML estimator (see, Firth 1993; Kosmidis and Firth 2010). The brglm estimates correspond to $$c(\theta) = I_p$$.
The asymptotic distribution of the reduced-bias estimators is the same to that of the ML estimator (see, Firth 1993 for details). So, the reduced-bias estimates can be readily used to calculate $$z$$-statistics.
# Implementation using enrichwith
For implementing the iteration for bias reduction, we need functions that can compute the gradient of the log-likelihood, the observed and expected information matrix, and $$b(\theta)$$ at arbitrary values of $$\theta$$.
The enrichwith package can produce those functions for any glm object through the auxiliary_functions enrichment option (to see all available enrichment options for glm objects run get_enrichment_options(modML).
library("enrichwith")
enriched_modML <- enrich(modML, with = "auxiliary functions")
Let’s extract the functions from the enriched_modML object.
# Extract the ingredients for the quasi Fisher scoring iteration from the enriched glm object
gradient <- enriched_modML$auxiliary_functions$score # gradient of the log-likelihood
information <- enriched_modML$auxiliary_functions$information # information matrix
bias <- enriched_modML$auxiliary_functions$bias # first-order bias
For the more technically minded, note here that the above functions are specific to modML in the sense that they look into a special environment for necessary objects like the model matrix, the model weights, the response vector, and so on.
This stems from the way enrichwith has been implemented. In particular, if create_enrichwith_skeleton is used, the user/developer can directly implement enrichment options to enrich objects with functions that directly depend on other components in the object to be enriched.
The following code chunk uses enriched_modML to implement the quasi Fisher scoring procedure for the analysis of the endometrial cancer data. For p <- length(theta_mle) , the starting value for the parameter vector is set to theta_current <- rep(0, p) , and the maximum number of iterations to maxit <- 100 . As stopping criterion, we use the absolute change in each parameter value with tolerance epsilon <- 1e-06
# The quasi Fisher scoring iteration using c(theta) = identity
for (k in seq.int(maxit)) {
i_matrix <- information(theta_current, type = "expected")
b_vector <- bias(theta_current)
step <- solve(i_matrix) %*% s_vector - b_vector
theta_current <- theta_current + step
# A stopping criterion
if (all(abs(step) < epsilon)) {
break
}
}
(theta_e <- drop(theta_current))
## (Intercept) NV PI EH
## 1.91460348 1.65892018 -0.01520487 -1.37987837
## attr(,"coefficients")
## [1] 0 0 0 0
## attr(,"dispersion")
## [1] 1
The estimation procedure took 8 iterations to converge, and, as expected, the estimates are numerically the same to the ones that brglm returned.
all.equal(theta_e, theta_brglm, check.attributes = FALSE, tolerance = epsilon)
## [1] TRUE
A set of alternative reduced-bias estimates can be obtained using $$c(\theta) = \left\{i(\theta)\right\}^{-1} j(\theta)$$. Starting again at theta_current <- rep(0, p)
# The quasi Fisher scoring iteration using c(theta) = solve(i(theta)) %*% j(theta)
for (k in seq.int(maxit)) {
i_matrix <- information(theta_current, type = "expected")
j_matrix <- information(theta_current, type = "observed")
b_vector <- bias(theta_current)
step <- solve(i_matrix) %*% (s_vector - j_matrix %*% b_vector)
theta_current <- theta_current + step
# A stopping criterion
if (all(abs(step) < epsilon)) {
break
}
}
(theta_o <- drop(theta_current))
## (Intercept) NV PI EH
## 1.89707065 1.72815655 -0.01471219 -1.37254188
The estimation procedure took 9 iterations to converge.
The ML estimates and the estimates from the two variants of the bias reduction method are
round(data.frame(theta_mle, theta_e, theta_o), 3)
## theta_mle theta_e theta_o
## (Intercept) 2.181 1.915 1.897
## NV 5.805 1.659 1.728
## PI -0.019 -0.015 -0.015
## EH -1.526 -1.380 -1.373
Note that the reduced-bias estimates have shrunk towards zero. This is typical for reduced-bias estimation in binomial-response GLMs (see, for example, Cordeiro and McCullagh 1991, Section 8; Kosmidis 2007, Section 5.2, 2014 for shrinkage in cumulative link models).
The corresponding $$z$$-statistics are
se_theta_mle <- sqrt(diag(solve(information(theta_mle, type = "expected"))))
se_theta_e <- sqrt(diag(solve(information(theta_e, type = "expected"))))
se_theta_o <- sqrt(diag(solve(information(theta_o, type = "expected"))))
round(data.frame(z_mle = theta_mle/se_theta_mle,
z_br_e = theta_e/se_theta_e,
z_br_o = theta_o/se_theta_o), 3)
## z_mle z_br_e z_br_o
## (Intercept) 2.544 2.427 2.407
## NV 0.009 2.220 2.215
## PI -0.799 -0.728 -0.701
## EH -3.523 -3.422 -3.411
The two variants for bias reduction result in slightly different reduced-bias estimates and $$z$$-statistics, though the $$z$$-statistics from both variants provide some evidence for the existence of an effect for NV.
# Notes
A general family of bias reduction methods is described in Kosmidis and Firth (2009).
The quasi Fisher scoring iteration that has been described here is at the core of the brglm2 R package, which provides various bias reduction methods for GLMs.
# References
Agresti, A. 2015. Foundations of Linear and Generalized Linear Models. Wiley Series in Probability and Statistics. Wiley.
Cordeiro, G. M., and P. McCullagh. 1991. “Bias Correction in Generalized Linear Models.” Rssb 53 (3): 629–43.
Cox, D. R., and E. J. Snell. 1968. “A General Definition of Residuals (with Discussion).” Journal of the Royal Statistical Society, Series B: Methodological 30: 248–75.
Firth, D. 1993. “Bias Reduction of Maximum Likelihood Estimates.” Biometrika 80 (1): 27–38.
Heinze, G., and M. Schemper. 2002. “A Solution to the Problem of Separation in Logistic Regression.” Statistics in Medicine 21: 2409–19.
Kosmidis, I. 2007. “Bias Reduction in Exponential Family Nonlinear Models.” PhD thesis, Department of Statistics, University of Warwick. http://www.ikosmidis.com/files/ikosmidis_thesis.pdf.
———. 2014. “Improved Estimation in Cumulative Link Models.” Journal of the Royal Statistical Society, Series B: Methodological 76 (1): 169–96. https://doi.org/10.1111/rssb.12025.
Kosmidis, I., and D. Firth. 2009. “Bias Reduction in Exponential Family Nonlinear Models.” Biometrika 96 (4): 793–804. https://doi.org/10.1093/biomet/asp055.
———. 2010. “A Generic Algorithm for Reducing Bias in Parametric Estimation.” Electronic Journal of Statistics 4: 1097–1112. https://doi.org/10.1214/10-EJS579.
———. 2011. “Multinomial Logit Bias Reduction via the Poisson Log-Linear Model.” Biometrika 98 (3): 755–59.
|
2022-05-28 12:46:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.782896876335144, "perplexity": 5761.321467257969}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663016853.88/warc/CC-MAIN-20220528123744-20220528153744-00652.warc.gz"}
|
http://tex.stackexchange.com/questions/48914/how-do-i-include-pstricks-pictures-in-a-latex-document/48924
|
# How do I include PSTricks pictures in a LaTeX document?
I use LaTeX regularly, and several times I have included .eps images. I am learning how to use PSTricks. I have used it to create .ps and .pdf files with images. There is plenty of free information on the web, but I haven't seen the answer to this question: how do you include the pictures that PSTricks produces in your LaTeX file?
-
If you have image files (EPS or PDF) you can include them with the graphicx package and \includegraphic{<filename>}. latex needs an EPS and pdflatex a PDF file. If you put both in your folder and omit the suffix in the including macro the package decied which file to use automatically.
If you want to include the PSTricks code directly you may use the {pspicture}environment.
-
If you write each PSTricks diagram in a separate TeX input file and compile it with latex-dvips-ps2pdf (much faster) or xelatex (slower) to obtain a PDF output, you might need the PDF output without white spaces around it.
To obtain the tight PDF output, use the following template for each diagram you want to make.
\documentclass[pstricks,border=12pt]{standalone}
\begin{document}
\begin{pspicture}[showgrid=true](-3,-3)(3,3)
% your drawing code goes here!
\end{pspicture}
\end{document}
It will be better if you put all your TeX input files for creating diagrams in a special folder outside other projects you have. This method will make your diagrams consumable to many projects easily. My directory structures look like below.
All my TeX input file for creating diagrams are placed in Diagrams. The PDF outputs associated with the input files are also in Diagrams for sure.
Another project, namely ProjectA, might use the diagrams you have made before. For example, let Main.tex be the input file in ProjectA folder. You can import the diagrams easily as follows.
\documentclass{article}
\usepackage{graphicx}
\graphicspath{{../Diagrams/}}
\begin{document}
\begin{figure}
\centering
\includegraphics[scale=1.5]{Template}
\caption{This is my PSTricks template.}
\label{fig:Template}
\end{figure}
\end{document}
The important points are:
• Load graphicx and set the graphics path. Adjust the scale to meet your preference.
• Compile it with pdflatex or xelatex. You cannot use latex-dvips-ps2pdf unless you have EPS equivalent for each PDF output.
-
In case anyone is interested, I ended up using \documentclass[12pt]{standalone} in the file where the picture was drawn. It worked fine. – bogus Mar 25 '12 at 19:41
@user20520: standalone internally uses preview package. – stalking is prohibited Mar 25 '12 at 19:47
@bogus: I updated the template in my answer, now it uses standalone class you prefer. – stalking is prohibited Sep 7 '13 at 2:57
Say your PSTricks figure file looks like this:
\begin{pspicture}(0,-2.4)(7.72,2.4)
\psframe[linewidth=0.04,dimen=outer](7.72,2.4)(0.0,-2.4)
\end{pspicture}
saved by the name picture_file.tex in the same folder as your main latex file.
In your latex main file, saved by the name main_file.tex, add
\usepackage{pstricks} %for embedding pspicture.
\usepackage{graphicx} %for figure environment.
Now, the picture_file.tex can be included in the main_file.tex as:
%=========================
\begin{figure}[h]
\centering{
\resizebox{0.5\textwidth}{!}{\input{picture_file.tex}}}
Only the main_file.tex needs be compiled.
2. Make sure the \documentclass[...]{...} does not have minimal or letter as the type, in which case, the figure environment would not be recognized.
|
2014-12-18 16:42:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9609676003456116, "perplexity": 3339.784710504595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802767274.159/warc/CC-MAIN-20141217075247-00024-ip-10-231-17-201.ec2.internal.warc.gz"}
|
https://www.semanticscholar.org/paper/Detection-of-Galactic-Center-source-G2-at-3.8-%CE%BCm-Witzel-Ghez/818a5a9fa6561ab7e4727e29a8927e79772d7831
|
Detection of Galactic Center source G2 at 3.8 μm during periapse passage
@article{Witzel2014DetectionOG,
title={Detection of Galactic Center source G2 at 3.8 $\mu$m during periapse passage},
author={Gunther Witzel and Andrea M. Ghez and Mark R. Morris and Breann N. Sitarski and Anna Boehle and Smadar Naoz and Randall D. Campbell and Eric Edward Becklin and Gabriela Canalizo and Samantha Chappell and Tuan Do and Jessica R. Lu and Keith Y. Matthews and Leonhard Meyer and Alan N. Stockton and Peter Wizinowich and Sylvana Yelda},
journal={The Astrophysical Journal},
year={2014},
volume={796}
}
We report new observations of the Galactic Center source G2 from the W. M. Keck Observatory. G2 is a dusty red object associated with gas that shows tidal interactions as it nears closest approach with the Galaxy’s central black hole. Our observations, conducted as G2 passed through periapse, were designed to test the proposal that G2 is a 3 earth mass gas cloud. Such a cloud should be tidally disrupted during periapse passage. The data were obtained using the Keck II laser guide star adaptive… Expand
The Post-periapsis Evolution of Galactic Center Source G1: The Second Case of a Resolved Tidal Interaction with a Supermassive Black Hole
We present new adaptive optics (AO) imaging and spectroscopic measurements of Galactic center source G1 from W. M. Keck Observatory. Our goal is to understand its nature and relationship to G2, whichExpand
The Post-pericenter Evolution of the Galactic Center Source G2
In early 2014, the fast-moving near-infrared source G2 reached its closest approach to the supermassive black hole Sgr. A* in the Galactic center. We report on the evolution of the ionized gaseousExpand
Infrared-excess Source DSO/G2 Near the Galactic Center: Theory vs. Observations
• Physics
• 2015
Based on the monitoring of the Dusty S-cluster Object (DSO/G2) during its closest approach to the Galactic Center supermassive black hole in 2014 and 2015 with ESO VLT/SINFONI, we further explore theExpand
Observations of Sagittarius A* during the pericenter passage of the G2 object with MAGIC
Context. We present the results of a multi-year monitoring campaign of the Galactic center (GC) with the MAGIC telescopes. These observations were primarily motivated by reports that a putative gasExpand
A population of dust-enshrouded objects orbiting the Galactic black hole
Observations of four additional G objects, all lying within 0.04 parsecs of the black hole and forming a class that is probably unique to this environment are reported. Expand
Monitoring the Dusty S-cluster Object (DSO/G2) on its Orbit toward the Galactic Center Black Hole
We analyze and report in detail new near-infrared (1.45-2.45 ?m) observations of the Dusty S-cluster Object (DSO/G2) during its approach to the black hole at the center of the Galaxy that wereExpand
G2 and Sgr A*: A Cosmic Fizzle At The Galactic Center
• Physics
• 2015
We carry out a series of simulations of G2-type clouds interacting with the black hole at the galactic center, to determine why no large changes in the luminosity of Sgr A* were seen, and toExpand
Detection of polarized continuum emission of the Dusty S-cluster Object (DSO/G2)
Abstract A peculiar source in the Galactic center known as the Dusty S-cluster Object (DSO/G2) moves on a highly eccentric orbit around the supermassive black hole with the pericenter passage in theExpand
Monitoring dusty sources in the vicinity of Sagittarius A*
We trace several dusty infrared sources on their orbit around the supermassive black hole (SMBH) SgrA* in the center of our galaxy. We give an overview of known and unknown sources in the directExpand
Polarized near-infrared light of the Dusty S-cluster Object (DSO/G2) at the Galactic center
We investigate an infrared-excess source called G2 or Dusty S-cluster Object (DSO), which moves on a highly eccentric orbit around the Galaxy’s central black hole, Sgr A*. We use, for the first time,Expand
References
SHOWING 1-10 OF 45 REFERENCES
KECK OBSERVATIONS OF THE GALACTIC CENTER SOURCE G2: GAS CLOUD OR STAR?
We present new observations and analysis of G2—the intriguing red emission-line object which is quickly approaching the Galaxy's central black hole. The observations were obtained with the laserExpand
The Keplerian orbit of G2
• L. Meyer, +8 authors E. Becklin
• Physics
• Proceedings of the International Astronomical Union
• 2013
Abstract We give an update of the observations and analysis of G2 – the gaseous red emission-line object that is on a very eccentric orbit around the Galaxy's central black hole and predicted to comeExpand
THE GALACTIC CENTER CLOUD G2 AND ITS GAS STREAMER
We present new, deep near-infrared SINFONI @ VLT integral field spectroscopy of the gas cloud G2 in the Galactic Center, from late 2013 August, 2014 April, and 2014 July. G2 is visible inExpand
The Supermassive Black Hole at the Center of the Milky Way
We report the detection of a variable point source, imaged at L' (3.8 μm) with the Keck II 10 m telescope's adaptive optics system, that is coincident to within 18 mas (1 σ) of the Galaxy's centralExpand
New Observations of the Gas Cloud G2 in the Galactic Center
We present new observations of the recently discovered gas cloud G2 currently falling toward the massive black hole in the Galactic Center. The new data confirm that G2 is on a highly ellipticalExpand
Pericenter passage of the gas cloud G2 in the galactic center
We have further followed the evolution of the orbital and physical properties of G2, the object currently falling toward the massive black hole in the Galactic Center on a near-radial orbit. New,Expand
Near-infrared flares from accreting gas around the supermassive black hole at the Galactic Centre
High-resolution infrared observations of Sagittarius A* reveal ‘quiescent’ emission and several flares, and traces very energetic electrons or moderately hot gas within the innermost accretion region. Expand
A gas cloud on its way towards the supermassive black hole at the Galactic Centre
The presence of a dense gas cloud approximately three times the mass of Earth that is falling into the accretion zone of Sagittarius A*, a compact radio source at the Galactic Centre, is reported. Expand
Physics of the Galactic Center Cloud G2, on Its Way toward the Supermassive Black Hole
We investigate the origin, structure, and evolution of the small gas cloud G2, which is on an orbit almost straight into the Galactic central supermassive black hole (SMBH). G2 is a sensitive probeExpand
High Proper-Motion Stars in the Vicinity of Sagittarius A*: Evidence for a Supermassive Black Hole at the Center of Our Galaxy
• Physics
• 1998
Over a 2 year period we have conducted a di†raction-limited imaging study at 2.2 km of the inner 6A ) 6A of the central stellar cluster of the Galaxy using the W. M. Keck 10 m telescope. The K-bandExpand
|
2021-10-25 12:32:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4950312674045563, "perplexity": 3094.6058266733403}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587711.69/warc/CC-MAIN-20211025123123-20211025153123-00154.warc.gz"}
|
https://www.chemeurope.com/en/encyclopedia/Boltzmann_equation.html
|
My watch list
my.chemeurope.com
# Boltzmann equation
The Boltzmann equation, also often known as the Boltzmann transport equation, devised by Ludwig Boltzmann, describes the statistical distribution of particles in a fluid. It is one of the most important equations of non-equilibrium statistical mechanics, the area of statistical mechanics that deals with systems far from thermodynamic equilibrium; for instance, when there is an applied temperature gradient or electric field. The Boltzmann equation is used to study how a fluid transports physical quantities such as heat and charge, and thus to derive transport properties such as electrical conductivity, Hall conductivity, viscosity, and thermal conductivity.
The Boltzmann equation is an equation for the time t evolution of the distribution (properly a density) function f(x, p, t) in one-particle phase space, where x and p are position and momentum, respectively. The distribution is defined such that
$f(\mathbf{x},\mathbf{p},t)\,d\mathbf{x}\,d\mathbf{p}$ is proportional to the number of particles in the phase-space volume dx dp at time t.
Consider those particles described by f experiencing an external Force F. Then f must satisfy
$f(\mathbf{x}+\frac{\mathbf{p}}{m}dt,\mathbf{p}+\mathbf{F}dt,t+dt)\,d\mathbf{x}\,d\mathbf{p}- f(\mathbf{x},\mathbf{p},t)d\mathbf{x}\,d\mathbf{p}=0$,
that is, the particle density in dx dp does not change even though force F is applied and there are no collisions between particles. However, since collisions do occur, the particle density in the phase-space volume dx dp changes.
$f(\mathbf{x}+\frac{\mathbf{p}}{m}dt,\mathbf{p}+\mathbf{F}dt,t+dt)\,d\mathbf{x}\,d\mathbf{p}- f(\mathbf{x},\mathbf{p},t)d\mathbf{x}\,d\mathbf{p}= \left. \frac{\partial f(\mathbf{x},\mathbf{p},t)}{\partial t} \right|_{\mathrm{coll}}d\mathbf{x}\,d\mathbf{p}\,dt$
Dividing the equation by dx dp dt and taking the limit, we can get the Boltzmann equation
$\frac{\partial f}{\partial t} + \frac{\partial f}{\partial \mathbf{x}} \cdot \frac{\mathbf{p}}{m} + \frac{\partial f}{\partial \mathbf{p}} \cdot \mathbf{F} = \left. \frac{\partial f}{\partial t} \right|_{\mathrm{coll}}.$
F(x, t) is the force field acting on the particles in the fluid, and m is the mass of the particles. The term on the right hand side is added to describe the effect of collisions between particles; if it is zero then the particles do not collide. The collisionless Boltzmann equation is often mistaken called the Liouville equation (the Liouville Equation is a N-particle Equation).
## Stosszahl Ansatz
The above Boltzmann equation is of little practical use as it leaves the collision term unspecified. A key insight applied by Boltzmann was to determine the collision term resulting solely from two-body collisions between particles that are assumed to be uncorrelated prior to the collision. This assumption was referred to by Boltzmann as the 'Stosszahl Ansatz', and is also known as the 'molecular chaos assumption'. Under this assumption the collision term can be written as a momentum-space integral over the product of one-particle distribution functions:
$\left. \frac{\partial f}{\partial t} \right|_{\mathrm{coll}} = \int\!\!\! \int g(\mathbf{p-p'},\mathbf{q}) (f(\mathbf{x},\mathbf{p+q},t) f(\mathbf{x},\mathbf{p'-q},t) - f(\mathbf{x},\mathbf{p},t) f(\mathbf{x},\mathbf{p'},t))\,d\mathbf{p'}\,d\mathbf{q}$
## Extensions and applications
It is also possible to write down relativistic Boltzmann equations for systems in which a number of particle species can collide and produce different species. This is how the formation of the light elements in big bang nucleosynthesis is calculated. The Boltzmann equation is also often used in dynamics, especially galactic dynamics. A galaxy, under certain assumptions, may be approximated as a continuous fluid; its mass distribution is then represented by f; in galaxies, physical collisions between the stars are very rare, and the effect of gravitational collisions can be neglected for times far longer than the age of the universe.
In Hamiltonian mechanics, the Boltzmann equation is often written more generally as
$\hat{\mathbf{L}}[f]=\mathbf{C}[f]$,
where L is the Liouville operator describing the evolution of a phase space volume and C is the collision operator. The non-relativistic form of L is
$\hat{\mathbf{L}}_\mathrm{NR}=\frac{\partial}{\partial t}+\frac{\mathbf{p}}{m}\cdot\nabla_\mathbf{x}+\mathbf{F}\cdot\nabla_\mathbf{p},$
and the generalization to (general) relativity is
$\hat{\mathbf{L}}_\mathrm{GR}=\sum_\alpha p^\alpha\frac{\partial}{\partial x^\alpha}-\sum_{\alpha\beta\gamma}\Gamma^{\alpha}{}_{\beta\gamma}p^\beta p^\gamma\frac{\partial}{\partial p^\alpha},$
where Γ is the Christoffel symbol.
The name Boltzmann equation is also given to this equation:
• Boltzmann Distribution: ${{N_i}\over{N}} = {{g_i e^{-E_i/k_BT}}\over{Z(T)}}$
|
2019-10-22 01:56:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 9, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.938374936580658, "perplexity": 385.93411480129174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987795403.76/warc/CC-MAIN-20191022004128-20191022031628-00232.warc.gz"}
|
https://mohundro.com/blog/2007/06/28/a-simple-function-to-get-a-webclient-with-proxy-in-powershell
|
Published on
# A simple function to get a WebClient (with proxy) in PowerShell
Authors
I was browsing around a few days ago and came across a link to a blog post on the SAPIEN website about searching live.com from PowerShell. Sounds cool, I think, so I download it to try it out only to be thwarted by the proxy at work.
There are a lot of scripts out there that use the System.Net.WebClient and the vast majority don't take proxies into account. To get around this issue, here's a simple script that I wrote to help out:
function Get-ProxyWebClient {
$webclient = New-Object System.Net.WebClient$proxy = New-Object System.Net.WebProxy($global:ProxyUrl,$global:ProxyPort)
$proxy.Credentials = (Get-Credential).GetNetworkCredential()$webclient.Proxy = $proxy return$webclient
}
This script makes the assumption that you've already predefined the $global.ProxyUrl and$global.ProxyPort variables in your profile. It is also nice for me because it prompts me for my credentials instead of having them hard-coded in the script or in my profile.
Now I can also check the weather from PowerShell using the Show-Weather script that the guys at SAPIEN provided in their über-prompt post.
|
2022-07-04 20:47:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48783645033836365, "perplexity": 1813.9525620130462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104496688.78/warc/CC-MAIN-20220704202455-20220704232455-00067.warc.gz"}
|
https://aviation.stackexchange.com/tags/airspeed/hot?filter=all
|
# Tag Info
91
The Gossamer Albatross is a human-powered plane with a top speed of 29 km/h (18mph). It was used to cross the English Channel and seems to meet the criteria of the question.
83
Because wings work on air moving past them, not ground moving below them. Heck, in a 35 knot headwind, the Antonov-2 could be rolling backwards at 2 knots and still take off!
65
I'd like to answer this question by debunking the premise of the question: that most plane crashes happen when planes fall out of the sky, and that it's like rock climbing where the higher you are, the more likely a fall will kill you. While it sounds believable, it's almost entirely false, and since it isn't diving out of the sky that kills you, lowering ...
61
The Antonov AN-2 has no stall speed quoted in the operating manual and can fly under full control at about 30 mph. Thus if the headwind is sufficiently large the aircraft will move backwards with respect to the ground.
61
It would likely create a more deadly situation. In aviation altitude is your friend. Generally speaking altitude in the case of an emergency buys you time to work the problem. Generally you want to be as high as practical for the aircraft in question. Altitude also buys you glide distance to find a suitable landing location in an emergency. Airplanes ...
59
This is because what you are looking at is the IAS indicator (Indicated Air Speed). This represents the amount of relative air which flows over and under the wings of the plane. This is what creates lift and enables the plane to fly. This is why this instrument is so important and belongs to the primary flight instruments. This is not to be confused with GS ...
52
The speed indicator in the cockpit shows indicated airspeed. Indicated airspeed is usually different than GPS speed, due to wind and aerodynamic effects. GPS speed is your speed with respect to the ground. If you are standing on terra firma it reads 0. If it reads 100 knots you will be 100NM away from where you are now in one hour, so long as you keep flying ...
46
The Harrier, Yak-38, Yak-141, XV-15, and V-22 are all fixed wing aircraft. All can hover in mid air, controlled. So they are in controlled flight at 0 velocity. At least the Harrier can even be in controlled flight flying backwards, so with negative velocity. The others may as well, I don't know.
43
It is not a unit. It is just Microsoft trying to be funny. Or to convey an idea of the magnitude. Thanks @Jackie for pointing out that around March 2019, Microsoft has released this calculator as open source under MIT license. It is available in this Github repository. The source code sheds light on the definition of these pseudo units: Description in ./...
43
Because what determines the amount of lift generated is the indicated airspeed, not the ground speed. As usual, it is always easier to think about an extreme case. If you have an aircraft with VR (speed at rotation for takeoff) of 90 knots, and there is an 80 knots head wind, in theory it will rotate with ground speed of 10 knots even though the indicated ...
43
"Speed" is not a singular term in aviation. There are many different ways to measure speed. See for example Why is there a difference between GPS Speed and Indicator speed? Most commercial jets cruise with a true airspeed in the range 400-500 knots and an indicated airspeed in the range 200-300 knots. For the purpose of passenger transportation, ground ...
41
An airplane can slow down and reduce its speed while in flight. The easiest way to do so is to reduce the amount of thrust that the engines are producing. This will produce an almost immediate reduction of the airspeed, especially if the plane is maintaining the same altitude. There are also devices called air brakes and spoilers that can be further used ...
41
You can, but you have to live with the consequences. There are several things that can happen: Depending on the vertical gusts ahead, you might not even get close to v$_{NE}$. There is another speed limit for gusty weather called v$_B$, and exceeding this will run the risk of overstressing the wing structure. Going above v$_B$ will overstress the wings in a ...
35
A propeller accelerates the air of density $\rho$ which is flowing through the propeller disc of diameter $d_P$. This can be idealized as a stream tube going through the propeller disc: The air speed ahead is $v_0 = v_{\infty}$ and the air speed aft of the propeller is $v_1 = v_0 + \Delta v$. The propeller effects a pressure change which sucks in the air ...
33
It's an illusion that the blades appear to be going slowly. It's actually a well known effect called the wagon wheel effect. Essentially the rotor is spinning at close to an even multiple of the camera's framerate divided by the number of rotors. This means that between frames the blades have moved a full quarter rotation (or a multiple of that). Creating ...
33
No because aircraft are categorized by their speed at the runway threshold (1.3 times stall speed). VAT —Speed at threshold used by ICAO (1.3 times stall speed in the landing configuration at maximum certificated landing mass) By knowing the category, ATC is able to use appropriate speeds. The category is not actually listed anywhere, so the controller ...
32
The SR-71 was only capable of Mach 3.3 flight at altitude. You can see the limitations in the pilot's operating handbook(POH) for the aircraft: At higher altitudes the relative speed of sound is lower. If we use this handy NASA calculator in a standard atmosphere the relative speed of sound at 70,000 ft. is 660 MPH making your 2193 MPH equal to Mach 3.321 ...
31
Spoilers have many uses, but first I want to distinguish types of spoilers. Airplanes will typically have ground spoilers and flight spoilers and they work like they sound. Ground spoilers only open up on the ground -- these are usually much more detrimental to lift than the flight spoilers. Flight spoilers open when actuated by the pilots Another type of ...
31
At the critical mach number, some part of the aircraft (usually the wing) will have air flowing over it at a speed in excess of mach 1. If the aircraft is not meant to fly at transonic or supersonic speeds, shock waves will flow over the wing. This can either cause the wing to stall, the control surfaces to become unresponsive, or the plane to go into the ...
30
Your airspeed does not remain constant because of inertia: it takes more time for the airplane to adapt to the new relative wind, compared to the time it takes for the wind to change. Example One: you're flying 80 knots and the headwind is 20 knots. Over a time of 3 minutes, the headwind gradually reduces from 20 knots to 10 knots. Since the change is ...
30
The P&W PT6 comes in many different varieties. The smallest PT have 500hp while the largest have 1700hp. It is not the "same engine" as you state in your question. The standard Caravan has 675hp while the other aircraft you mention have 1,200hp. That alone can account for the major difference in performance. Fixed gear and struts also add to an ...
30
It won't be pleasant. The main result of being exposed outside at altitude, besides the obvious hypothermia, frostbite and hypoxia, will be bruising from the 280-ish knot slipstream (it's the indicated airspeed that matters as far was what you feel, not the true airspeed), and injuries from being flung around by any turbulent flow you are in. Most of your ...
29
Other than the TU-144 and Concorde, the record for the fastest True Airspeed in an airliner probably belongs to a DC-8. Wikipedia Douglas DC-8 On August 21, 1961, a Douglas DC-8 broke the sound barrier at Mach 1.012 (660 mph/1,062 km/h) while in a controlled dive through 41,000 feet (12,497 m) and maintained that speed for 16 seconds. The flight was to ...
28
No, it could not fly much faster with the available energy. Lift is a question of wing area and dynamic pressure. Solar Impulse 2 has 269.5 m² wing area to carry its 2.3 tons of mass. This is a wing loading of just 8.53 kg/m²; much less than even gliders have (they start at around 30 kg/m²). This allows it to fly very slowly; if we assume it ...
28
To reduce damage in case of a bird strike. The restriction is not only for the 737-100 and -200 models, the 737 NG QRH says: WINDOW HEAT OFF In flight: WINDOWS HEAT switch (affected window) ..... OFF Limit airspeed to 250 knots maximum below 10,000 feet. Pull both WINDSHIELD AIR controls. This vents conditioned air to the inside of the windshield for ...
26
It is not only the mass that affects the landing speed. Wing area plays an important role as well. A larger wing can lift more weight at the same speed than a smaller wing. If you compare the wing loading of these aircraft the differences are smaller: A388: Maximum landing weight: 391000 kg Wing area: 845 m2 Wing loading: 463 kg/m2 B744: Maximum ...
25
This answer is written for air transport category aircraft. Introduction During take-off there are three operationally significant speeds that ensure a safe take-off: V1 - the take-off decision speed VR - the rotation speed V2 - the take-off safety speed In addition there are three technically important speeds: VMU the minimum unstick speed VMCG the ...
24
Speed of a plane is actually measured in a number of different ways, and relative to different things. Here is a summary of the different types: Indicated Airspeed (IAS). This is the number shown on the instrument that measures airspeed, and isn't really relative to anything. Rectified Airspeed (RAS) or Calibrated Air Speed (CAS) This is IAS corrected for ...
23
Most modern jets use an Air Data Computer (ADC) to calculate (among other things) Mach Number. Air Data Computer An ADC is simply a computer which accepts measurements of atmospheric data to calculate various flight related data. A typical ADC may be connected to$^1$: Inputs Static System Pressure Pitot Pressure Total Air Temperature (TAT) Outputs (...
23
An (analog) machmeter looks something like this: So it's more like an more complex version of the airspeed indicator, in this case correcting for the altitude in the process. That being said, I found this extract apparently from an FAA publication: Some older mechanical Machmeters not driven from an air data computer use an altitude aneroid inside the ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
2022-01-23 21:10:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49498316645622253, "perplexity": 1615.5974158275835}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304309.59/warc/CC-MAIN-20220123202547-20220123232547-00520.warc.gz"}
|
http://www.dst.unive.it/~claudio/index.php/Blog/Main?setlang=it&page=2
|
Blog
# Calendari
## Seminar of Prof. Peter Filzmoser (TUW)
Announce of Seminar
# Prof. Peter Filzmoser
Department of Statistics and Probability Theory Vienna University of Technology
# Compositional data analysis: Challenges for environmental sciences
13 June, 2012 at 15.30 Aula E, Santa Marta, Venezia
Anyone interested is invited to participate
Abstract:
Many practical data sets in environmental sciences are in fact compositional data because only the ratios between the variables are informative. Compositional data are represented in the Aitchison geometry on the simplex, and for applying statistical methods designed for the Euclidean geometry they need to be transformed first. The isometric logratio transformation has the best geometrical properties, and it avoids the singularity problem introduced by the centered logratio transformation. Since real environmental data also often contain outlying observations, robust statistical procedures need to be employed. We show for different multivariate methods how robustness can be managed for compositional data, provide algorithms for the computation, and apply the methods on real data sets from geochemistry.
The announcement is available here.
## Consultazione Pubblica sul Valore Legale del Titotlo di Studio
Il MIUR ha aperto una consultazione pubblica sul valore legale del titolo di studio a questo indirizzo www.istruzione.it/web/ministero/consultazione-pubblica. Fai sentire anche tu la tua opinione. Passa parola!
## Consultazione Pubblica sul Valore Legale del Titolo di Studio
Il MIUR ha aperto una consultazione pubblica sul valore legale del titolo di studio a questo indirizzo www.istruzione.it/web/ministero/consultazione-pubblica. Fai sentire anche tu la tua opinione. Passa parola!
# International Conference on Robust Statistics 2012
Dear colleagues, We are pleased to announce that the University of Vermont will be hosting the upcoming International Conference on Robust Statistics. The conference will be held from August 5 to August 10, 2012 on the campus of the University of Vermont in Burlington, Vermont, USA. August 5-10 is the week after the Joint Statistical Meetings in San Diego.
The International Conference on Robust Statistics (ICORS) has been an annual international conference since 2001. The aim of the conferences is to bring together researchers interested in robust statistics, data analysis and related areas. This includes theoretical and applied statisticians as well as data analysts from other fields, and leading experts as well as junior researchers and graduate students.
ICORS welcomes contributions to applied statistics as well as theoretical statistics, and in particular new problems related to robust statistics and data analysis. The following areas are expected to be well represented at the conference, but contributed talks on other related topics are also welcomed.
* Concepts and theory of robust statistics
* Asymptotic theory and efficiency
* Novel applications of robust statistical methods
* Robust and nonparametric multivariate statistics
* Robust functional data analysis
* Robust regression, including quantile regression
* Linear and generalized linear models; mixed models
* Biostatistics
* Statistical methods in bioinformatics/genetics
* Statistical computing and graphics and data mining
* Data mining and machine learning
The website for the ICORS 2012 conference can be found at:
For any questions please email to:
icors12@gmail.com
We wish to acknowledge the generous support of the Minerva Research Foundation, and pending support of the National Science Foundation.
# One Year Research Position
At my Department there is an open position for one year research position.
Ca' Foscari University Venezia - Department of Environment, Informatics and Statistics
Research contract "Statistical analysis of historical earthquake catalogues"
Tutor: Mario Romanazzi
Duration: 1 year
Location: Venezia (Italy)
Closing Date of Applications: 31st March 2012
Further details available on line at http://www.unive.it/nqcontent.cfm?a_id=1538
Ca' Foscari University is an equal opportunity employer.
## Seminar of Dr. L. Spezia (UK)
Announce of Seminar
# Dr. Luigi Spezia
Biomathematics and Statistics Scotland, Aberdeen, UK
# MODELLING THE STATES OF SCOTLAND'S RIVERS
January 10th, 2012 at 14.00
Aula B, Santa Marta, Venezia
Anyone interested is invited to participate
Abstract:
Analyses of Scotland's rivers are presented both on a local and on a global scale, under the environmental and the ecological point of view. First, the multivariate time series of nitrogen concentrations are considered by modelling rivers in isolation, then modelling all data simultaneously. Next, the dynamics of the signatures of two stream isotopes is investigated. Finally, the mapping of the species distribution of freshwater pearl mussels in a river is shown. The analyses are performed by means of Bayesian hierarchical models, driven by a hidden Markov chain. The states of the Markov process allows gathering of the observations in a small set of homogeneous groups. The models shown belong to the class of hidden Markov models and Markov switching autoregressive models.
The talk is based on joint work with Christian Birkel (University of Aberdeen), Mark Brewer (Biomathematics& Statistics Scotland, Aberdeen), Susan Cooksley (The James Hutton Institute, Aberdeen), Martyn Futter (Swedish University of Agricultural Sciences, Uppsala), Roberta Paroli (Catholic University, Milan).
The announcement is available here.
## Seminar of Prof. C. Gaetan (DAIS)
Announce of Seminar
# Prof. Carlo Gaetan
Dipartimento di Scienze Ambientali, Informatica e Statistica Ca' Foscari University
# A MODEL FOR TEMPORAL EXTREMES BASED ON LATENT PROCESSES
December 14th, 2011 at 10.30am Aula Consiglio - Ala 2C, S. Giobbe, Venezia
Anyone interested is invited to participate
Abstract:
One approach of modelling extreme data is to consider the distribution of exceedances over a high threshold. Under suitable conditions, this distribution can be approximated by a generalized Pareto distribution (GPD). In recent research on extreme value statistics, there has been an extensive development of threshold methods for time series models. For instance extreme values of univariate time series are modelled by assuming that the time series is Markovian and using bivariate extreme value theory to suggest appropriate models for the transition distributions. Another possible approach for dealing with the dipendence is a hierarchical approach imposing a prior distribution on the parameters. A drawback of the Bayesian approach is that the resulting marginal distributions are not GPD. In this talk we present a new model for the extremal behaviour of a univariate time series based on a latent process, that overcomes this drawback. The extremal properties of the model are illustrated, showing that different choices of the underlying latent process can produce different degrees of asymptotic dependence, from independent extremes to clustering of exceedances. Two analysis of environmental and financial time series are also presented.
(a joint work with Paola Bortot, Università di Bologna)
The announcement is available here
## Seminario del Dott. Juan Francisco Rosco Nieves (University of Extremadura)
Il giorno Martedì 15 Novembre 2011 alle ore 11.15 presso l'Aula Consiglio, Ala 2C secondo piano (ex Dipartimento di Statistica) si terrà un seminario tenuto dal J.F. Rosco Nieves (Università dell'Estremadura) dal titolo
# SKEWNESS - INVARIANT MEASURES
Abstract: The coefficient of kurtosis introduced by Pearson as the standardised fourth
central moment is a characteristic used to describe the shape of a probability distribution. However, since its introduction more than a century ago, numerous interpretations of it have been suggested within the literature. A historical review of these interpretations is made and the measurement of kurtosis in the presence of asymmetry is considered. Blest’s kurtosis measure adjusted for skewness is studied and an alternative coefficient is proposed, both measures also being based on moments. Since the existence of moments of a distribution cannot always be assured, a quantile-based approach is considered. Two forms of kurtosis measures based on quantiles are identified which are invariant to the presence of skewness for certain families of distributions obtained via the transformation of a base symmetric random variable. A very general condition is established which can be used to determine when the two types of kurtosis measures will be skewness invariant, and two wide families of distributions are identified for which the measures are skewness invariant. These two family of distributions are the well-known Johnson’s unbounded family and the more recently proposed sinh-arcsinh family. Also we identify a family of distributions not arising via transformation for which the measures are skewness invariant, namely the Tukey lambda family.
La locandina è disponibile qui.
## Sette Milioni
Secondo le stime pubblicate sul report del United Nations Population Fund entro il 31 Ottobre prossimo raggiungeremo la quota di 7 Milioni. Puoi vedere qui per capire cosa questo voglia dire.
|
2013-12-09 09:57:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3671411871910095, "perplexity": 4403.72921037303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163955638/warc/CC-MAIN-20131204133235-00054-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://id.b-ok.org/book/3345705/198037
|
Utama Algebra with Galois Theory
# Algebra with Galois Theory
The present text was first published in 1947 by the Courant Institute of Mathematical Sciences of New York University. Published under the title Modern Higher Algebra. Galois Theory, it was based on lectures by Emil Artin and written by Albert A. Blank. This volume became one of the most popular in the series of lecture notes published by Courant. Many instructors used the book as a textbook, and it was popular among students as a supplementary text as well as a primary textbook. Because of its popularity, Courant has republished the volume under the new title Algebra with Galois Theory. Titles in this series are co-published with the Courant Institute of Mathematical Sciences at New York University.
Tahun: 2007
Penerbit: American Mathematical Society
Bahasa: english
Halaman: 126 / 137
ISBN 10: 0821841297
ISBN 13: 9780821841297
Series: Courant Lecture Notes
File: PDF, 24.39 MB
## You may be interested in
### Analytic number theory
Tahun: 2004
Bahasa: english
File: DJVU, 3.91 MB
### Galois Theory
Tahun: 1959
Bahasa: english
File: PDF, 1.47 MB
### Algebra
Tahun: 1991
Bahasa: english
File: DJVU, 4.28 MB
### Advanced Modern Algebra, Part 1
Tahun: 2015
Bahasa: english
File: PDF, 57.18 MB
## Most frequently terms
You can write a book review and share your experiences. Other readers will always be interested in your opinion of the books you've read. Whether you've loved the book or not, if you give your honest and detailed thoughts then people will find new books that are right for them.
1
### Animals in the Qur’an
Tahun: 2012
Bahasa: english
File: PDF, 3.64 MB
2
### La acción humana : tratado de economía
Tahun: 2015
Bahasa: spanish
File: EPUB, 1.19 MB
```Algebra with Galois Theory
Courant Lecture Notes
in Mathematics
Executive Editor
Jalal Shata h
Managing Editor
Paul D. Monsour
Assistant Editor
Reeva Goldsmith
Copy Editor
Marc Nirenberg
http://dx.doi.org/10.1090/cln/015
Emil Artin
Notes by Albert A. Blank
15 Algebr
a with Galois Theory
Courant Institute of Mathematical Science s
New York University
New York, New York
American Mathematical Societ y
Providence, Rhode Island
2000 Mathematics Subject
Classification.
Primar
y 1 2 - 0 1 , 1 2F1 0 .
Library o f C o n g r e s s Cataloging-in-Publieatio n D a t a
Artin, Emil , 1 898-1 962 .
Algebra wit h Galoi s theor y / E . Artin , note s b y Alber t A . Blank .
p. cm . — (Couran t lectur e note s ; 1 5 )
ISBN 978-0-821 8-41 29- 7 (alk . paper )
1. Galoi s theory . 2 . Algebra . I . Blank , Alber t A . I L Title .
QA214.A76 200 7
512—dc22 200706079
9
Printed i n th e Unite d State s o f America .
© Th e pape r use d i n thi s boo k i s acid-fre e an d fall s withi n th e guideline s
established t o ensur e permanenc e an d durability .
Visit th e AM S hom e pag e a t http://www.ams.org /
10 9 8 7 6 5 4 1
32
211 10
Contents
Editors' Note
Chapter 1 . Group s
1.1. Th e Concept of a Group
1.2. Subgroup s
Chapter 2. Ring s and Fields
2.1. Linea r Equations in a Field
2.2. Vecto r Spaces
Chapter 3. Polynomials . Factorization into Primes. Ideals.
3.1. Polynomial s over a Field
3.2. Factorizatio n into Primes
3.3. Ideal s
3.4. Greates t Common Divisor
Chapter 4. Solutio n of the General Equation of nth Degre e
Extension Fields. Isomorphisms.
4.1. Congruenc e
4.2. Extensio n Fields
4.3. Isomorphis m
Chapter 5. Galoi s Theory
5.1. Splittin g Fields
5.2. Automorphism s of the Splitting Field
5.3. Th e Characteristic of a Field
5.4. Derivativ e of a Polynomial: Multiple Roots
5.5. Th e Degree of an Extension Field
5.6. Grou p Characters
5.7. Automorphi c Groups of a Field
5.8. Fundamenta l Theorem of Galois Theory
5.9. Finit e Fields
Chapter 6. Polynomial s with Integral Coefficient s
6.1. Irreducibilit; y
6.2. Primitiv e Roots of Unity
Chapter 7. Th e Theory of Equations
7.1. Rule r and Compass Construction s
VI
CONTENTS
7.2. Solutio n of Equations by Radicals 9
4
7.3. Steinitz ' Theore m 0
4
7.4. Tower s ofFields 0
7
7.5. Permutatio n Groups
2
7.6. Abel' s Theorem 2 1
1
7.7. Polynomial s of Prime Degree 2
3
Editors' Note
Beeause what was in 1947 "modern" has now become Standard, and what was
then "higher" has now become foundational, w e have retitled this volume Algebra
with Galois Theory from the original Modern Higher Algebra. Galois Theory.
Jalal Shatah, Executive Editor
Paul Monsour, Managing Editor
August 2007
http://dx.doi.org/10.1090/cln/015/01
CHAPTER 1
Groups
We concern ourselve s with sets G of objects a,b,c,... calle d elements. Th e
sentence "a i s an element of G " will be denoted symbolicall y by a e G . Assum e
an Operatio n calle d "multiplication " whic h assign s t o a n ordere d pai r o f object s
a, b of G another object a • b (or simply ab) the product o f a and b. I t is useful t o
require that G be closed with respect to multiplication, namely:
(1) lfa.be G,thena-Z> e G.
EXAMPLES.
(a) Le t G be the set of positive integers. If subtraction is taken as the "multiplication" in G, then G is certainly not closed, e.g., 3-5 = 3 — 5 = —2 .
If taking the greatest common divisor is our multiplication, the n closur e
is obvious.
(b) Tak e G t o b e th e se t o f function s o f on e variable . I f f(x), g(x) e G
dehne f(x) • g(x) = f[g(x)], e.g. , e x • logx = e logx = x.
EXERCISE 1 . Writ e out the multiplication tabl e and thereby sho w closure fo r
the set of function s
1
f\=x, fi
/3 = 1
h
u = 1 — J C ' fs = X ~ 1
x'
SOLUTION.
h fr
/l
fr / l h fr
fr h / i fr
h h h fr
h U fr fr
h fr fr fr
76
fr fr
th
fr
fr
fr
fr
fr
fr
fr
fr
fr
fr
fr
fr
fr
jl
fr
fr
fr
fr
fr
fr
fr
th
where fi • fj i s listed in the / row and j column .
We make the further requiremen t that multiplication obey the associative law:
(2) I f a, b,c e G , then (ab)c = a(bc). Thi s is a rather strong condition. I t is
not generally satisfied; consider, e.g., subtraction among the integers. For
functions o f on e variable, a s above, it is valid, however . I f f(x), g(x),
h(x) ar e any three functions w e have
(fg)h = f(g(h(x))) =
l
f(gh).
2
1. GROUP S
EXERCISE 2. Dedue e the associative law for fou r element s from (2) , that is,
show that the five possible products o f four element s written i n a given sequenc e
are all equal. Furthermore , attemp t to determine the number of possible product s
of n element s give n i n a linear order . Fo r example, the elements a\, a 2, a^, a^ in
that the order yield the products {aia^ia^), a\{a 2{a^ü6S), etc. Hint: Le t a n b e
the number of products of a\, a 2,..., a n. Fin d a recursion formula fo r a n an d use
the Lagrange generating functio n
f(x) = ot\x + a 2x2 H h
a nxn H .
EXERCISE 3. Th e associative law for n elements states that all possible products of n element s written in a prescribed order, e.g., a\, a 2, . . . , a n, yield the same
result. Prov e the associative la w fo r an y number o f element s usin g onl y (2 ) (th e
associative law for three elements).
PROOF FOR EXERCISE 3: W e assume the validity o f the associative la w fo r
all products of m factors, m <n, an d show that this implies the validity of the law
for n + 1 . Consider the particular product (n + 1 ) Yll={ ak whic h is obtained fro m
the n + 1 element s a\, a 2,..., a n+\ b y successively multiplying on the right, i.e.,
n<
ük = a\,
k=l
n+l /
n
a
a
W k=\Y\ k ]<*n+\k=l V=
l
Let P n+\ b e any product o f the n + 1 elements a\, a 2,..., a n+\ take n i n that
order. Since P n+\ i s the result of at least one multiplication, we may write
Pn+l = P»P£\, \<m<n,
where P™ i s some product o f the elements a\, a 2i..., a m i n that order an d P^\
of the remaining elements a m + i, a m+2,..., a n+\. B y the induction hypothesi s w e
have
k=ß
for any ß, v such that v — ß + 1 <n. Specifically , w e have
/
/m ^
^ n+l ^
m
a
V
/
n
1
\
"+i=(n^)( n *)=n^ \( n ^j-^+ i
\j=l '
^k=m+l
'
= \ '-\
j
=
j k =1
:w+
/
J
( n a J n a k ) *an+i
\ - = l k=m+l
'
nx
n+l
( Y\ k) '
a
k=\ /
each step being a simple application of (2) . D
a
k=\
n+l = Yl ük>
1.1. THE CONCEP T O F A GROUP 3
1.1. Th e Concept of a Group
A set G will be called a group if it satisfies the following conditions :
(1) Closure. There exists an Operation called multiplication which assigns to
any ordered pair a,b e G a produet ab e G.
(2) AssociativeLaw. lfa,b,ceG the
n (ab)c = a(bc).
(3) Identity. There exists an e € G , called the (left) identity, such that ea = a
for all a e G.
(4) Inverse. For every a e G there is an a""1 € G , called the (left) inverse of
a, suc h that a- 1« = e.
Let us examine the produet
(cClylcClaa-1.
On one hand,
[(a - 1 )" 1 ^ - 1 ][aa _ 1 ] = e[aa~ l] = aa~ l,
and on the other
[(fl" 1 )" 1 ][(fl" 1 fl)fl"' 1 ] = [ ( a - 1 ) - " 1 ] ^ " 1 ] = ( a -
1
)-1 «-1 = * .
Consequently,
The existence of the left invers e implies the existence of a right inverse. A similar
result holds for the identity; for consider the produet
aa~la.
First we have
aa~la = {aa~ l)a = ea = a.
But also
aa~la = a(a~ la) = ae,
Consequently,
ae = a ,
and the existence of the right identity implies the existence of a left identity .
EXERCISE 4 . Tw o System s o f postulate s ar e sai d t o b e equivalen t i f eithe r
System can be derived logically from th e other. Sho w that the system (1), (2), (3),
(4) is equivalent to the system in which (3) and (4) are replaced by:
(3r) Ther e is a right identity e e G such that ae = a for all a € G .
(40 T o each a e G there is a right inverse a- 1 e G such that aa~l = e.
Apparently the words right and left need not be included in (3), (4), (30, or (40.
EXERCISE 5. Conside r the postulate system in which (3) and (4) are replaced
by:
(3*) Ther e exists a left identity e € G ; that is, ea = a for all a e G.
(4*) T o eac h a e G ther e i s a right invers e a~ l e G ; tha t is , aa~ l = e.
Determine whether this system of postulates defines a group. If not, give
a counterexample.
4
l.GROUPS
SOLUTION. Fo r an y a e G dehn e multiplicatio n b y ax = x fo r al l x e G.
This Syste m satishe s th e postulates (1 ) , (2) , (3*) , and (4*) . Wha t grou p propert y
does it not satisfy ?
For ordinary numbers, the quotient b -f- a of two numbers can be defined a s the
Solution of the equation ax — b. Consider similar equations for elements of G :
(a) ax = b, (b ) xa = b, (c ) axb = c
If (a) is true for some x9 then
a~lax = a~ lb = ex = x.
Hence, if there is a Solution, it is a~ lb an d it is therefore unique ; a~ lb i s in fact a
Solution. Simila r reasoning shows that (b) possesses the unique Solution x — ba~x
and (c) the unique Solutio n a~ lcb~l. Th e existence of a unique Solutio n for eac h
of the above equations demonstrates a property of the group analogous to division.
Since a~ l i s the Solutio n of the equation xa — e, a~ l i s unique. Similarly , e
is the unique Solutio n of xa = a. W e observe that th e Solutio n o f x{ab) = e is
(ab)~l = b~ xa~l. I n general, the inverse of a product
(a\a2 - - • an)~l = a~ l • • -a^ a^ 1 .
If % = (a" 1 ) - 1 , the n x satisfie s th e equatio n xa~ l = e, whic h ha s th e uniqu e
Solution x = a. Thu s the inverse of the inverse of an dement is the element itself.
6 . Sho w that postulates (3) and (4) may be replaced by
(3 ) I f a, b G G, the equations
EXERCISE
+
xa = b, ay
— b,
possess (not necessarily unique) Solutions x,y e G.
A group that satisfies the commutative law,
(5) lfa.be G , then ab = ba,
is said to be commutative or abelian.
EXERCISE 7. Sho w tha t th e si x function s o f Exercis e 1 form a noncommu tative grou p wit h respec t t o thei r rul e o f multiplication . Determin e th e identit y
element and the inverse to each function .
1.2. Subgroup s
If G i s a group an d 5 i s a subse t o f G tha t i s itsel f a group unde r th e sam e
Operation as G, then S is called a subgroup of G .
EXAMPLE. Tak e G t o b e th e se t o f rationa l number s othe r tha n zer o unde r
ordinary multiplication. G has, e.g., the subgroups
(a) th e positive rational numbers
(b) th e powers of any element
(c) th e set consisting of +1 an d —1
Trivially, (d) the set G itself or (e) the set consisting of the element 1 .
1.2. SUBGROUP S
5
THEOREM 1 . 1 Necessary and sufficient conditions for a subset S of G to be a
subgroup are:
(i) Closure . Ifs\, S2 € S, then s\S2 G S.
(ii) Inverse . IfseS, then s~l e S.
PROOF: Necessity. I f S i s a subgroup, (i ) holds b y definition . Th e identit y
e e S by the uniqueness in G and existence in S by the Solution of the equation
xs = s.
Note that (ii) is similarly established through the equation xs — e.
Sufficiency. I f (i) and (ii) hold, then S is a subgroup. From (ii) ifseS the n s~ l
is an element of S and henee (i) gives e e S. The assoeiative law holds for elements
of S since they are elements of G. The proof of the theorem is complete. D
If S is a subgroup of G and a e G , the coset aS i s defined t o be the set of all
elements a • s, where s e S.
EXAMPLE. Tak e for G th e se t o f al l rational number s exeludin g zer o unde r
ordinary multiplication . Le t S b e th e se t o f al l positiv e element s o f G. Ther e
are only two eosets, 5 an d — S = — IS. Thes e have no elements i n common an d
both set s together cove r G . I f we take instead S = {+1 , — 1} then th e cosets ar e
aS = {+a, —a). Here the same coset is given by +a an d —a. Note again that no
two cosets overlap and that the cosets cover G. These results are valid in general.
Let S be a subgroup of G and take a, b e G.
LEMMA 1.2 If the cosets aS and bS have an element c in common, then aS = bS.
Assume for som e s, s f e S we have c = as = bs f. Therefor e b = as(s f)~l.
From Theorem 1 .1 , sis')"1 = s /f e S and consequently bS = as fFS. Now sf,S = S ,
since if we suppose S to be any group, s any element of 5, we have
sS CS
(read: sS i s a subset of 5, or all elements of sS ar e elements of 5) . Also,
s~ls CS o
rS
CsS.
Therefore
sS = S.
In the above argument we may now write bS = as f/S = aS.
LEMMA 1.3 Every a e G is contained in some coset
a e aS since
e
eS
and hence ae = a e aS. G is covered by the cosets of S,
If G is a finite group, then the number of its elements is called the order of G .
THEOREM 1 . 4 Let G be a finite group of order N and S a subgroup of order n.
The number n of elements in the subgroup is a divisor ofN.
6
1. GROUPS
PROOF: Th e cosets aS hav e the same number of elements as S. For let S consist of the distinct elements s\,S2,... ,s n. aS consis t of as\, a\$2, ..., as n, wher e
^ k.
For otherwise we would have asi = asu and hence s t = Sk, i # k, contrary to the
definition o f the s (.
Consequently, aS consist s o f exactl y n elements . Le t j b e th e numbe r o f
cosets. B y Lemma s 1 . 2 an d 1 . 3 th e cosets cove r G withou t overlapping . I t fol lows that
N = jn. •
Take a e G. W e denote aa b y a 2 or , i n general , w e dehn e al l th e integra l
powers a ß o f a by
aß = aa- • • a (/ x times) for /x > 0 ,
a° = e,
aß = a~ la~l • • • a~~l (—/ x times) for \x < 0.
The set of all powers of a is a group and clearly the smallest group containing a.
The problem of determining the smallest group containing as few as two elements
is already of an entirely different nature . For example, what can be said about
(ab)n = ab • ab • • -ab (n times)?
If multiplicatio n i s commutative suc h products ca n b e handled, bu t this does no t
apply in general.
EXERCISE
8 . Sho w that the powers of elements obey the usual properties of
exponents
aßav =a
(avY = a
ß+
\
Vß
.
The first property implie s th e commutativ e la w fo r multiplicatio n o f power s
of a.
The set S of all powers of a forms a subgroup since S is closed under multiplication and inverses exist (cf. Theore m 1 .1 ) .
Case 1 . Th e power s o f a ar e al l distinct . S i s the n calle d a n infinite cyclic
group.
Case 2. Ther e exis t integer s / an d k with , say , i < k suc h tha t a l = a k.
Multiplying o n bot h side s b y a~ l w e obtai n e = a k~~l. Thus th e se t o f positiv e
integers for which aß = e is not empty. Let d be the smallest such integer
ad — e =>* a qd = e fo r all integers q
(read: "implies " for "=>"). Conversely, if am = e, m is a multiple of d, for we may
write m = qd + r where 0 < r < d
ar =
a
m-qd =
fl
m -qd
=
£
^
7
1.2. SUBGROUP S
But d i s th e smalles t positiv e intege r fo r whic h a d = e. Henc e r mus t b e zer o
whence m = qd. Th e powers
are all distinct for otherwise we would have
al = a k, 0 < i < k < d, o
ra
k l
~ = e\
this equation is impossible for 0 < k — 1 < d. An y other power of a must be equal
to one of these, for example ad = e, ad+l = a , . . . , or , in general,
aqd+r=ar9 0 <
r < d.
Thus there are only d distinct powers of a. S is called a cyclic subgroup of order d
and d is called the period o f a.
THEOREM 1.5 The period ofany element ofafinite group is a divisor ofthe order
ofthe group.
PROOF: Thi s is an immediate consequence of Theorem 1 .4 . Le t G be a finite
group of orde r N an d a an y element o f G. I f d i s the period o f a, w e may writ e
N = dj. Fro m ad = e we have
N
= e.
This Statement for prime N i s equivalent to Fermat's theorem in arithmetic. D
COROLLARY If the order ofG is p, a prime, then G must be cyclic.
PROOF: Th e perio d o f an y elemen t mus t b e a divisor o f p an d i s therefor e
either p o r 1 . The only element of period 1 i s e. Consequently, if a e G and a ^ e
the period of a must be p. D
There i s "essentially " on e cycli c grou p o f orde r n. Phrase d differently , tw o
cyclic group s o f th e sam e order hav e the "sam e structure. " Th e notion o f "sam e
structure" will be examined later in more detail.
EXAMPLES. Le t us determine all possible structures of groups of order 4. The
period of any element must be 1 , 2, or 4. I f there is an element a of period 4 then
e, a, a 2 , a 3 exhaus t the group. O n the other hand, if there is no element of period
4, the n al l element s bu t e mus t hav e th e perio d 2 . Thu s i f e, a, b, c denot e th e
different element s of the group we have a2 = b 2 = c 2 = e. Conside r the element
x = ab. Fro m ax = aab = fcwe hav e clearly x = £ e, x ^ a. Fro m the uniqueness
of the Solution y = e of the equation yb = b it follows tha t x ^ b. Therefor e x
must be c. Th e commutative la w holds in this group, for i f x e G then x = x~ l
and consequently ab = (ab)' 1 = b~ la~l = ba. I t is a simple matter to write out
the multiplication table:
e
a
b
c
e
e
a
b
c
a
a
e
c
b
b
b
c
e
a
c
c
b
a
e
1. GROUPS
We have shown that there are essentially two groups of order 4 and both are commutative.
Groups of orde r 6 are essentially o f two kinds, the cyclic group and the noncommutative grou p give n i n Exercis e 1 . Thi s las t i s th e simples t exampl e o f a
noncommutative group. On e of the unsolved problems of algebra is that of classifying all the groups of order n. There is, of course, always the cyclic group of order
n and for n prime, only the cyclic group. For nonprimes there is no general theory
although a Classification has been achieved for special cases. The table below gives
a summary for the first few cases:
N
4
6
8
9 10 12 14 15
2
2
5
2 2 5 2 1
ß
V
0
1
2
0
1 3 1 0
where /n is the total numbe r an d v the number o f noncommutativ e group s o f Order N.
9 . Th e two noncommutative groups of order 8 are essentially:
(a) Th e symmetries of the Square, i.e., the rotations in space which take the
Square into itself.
(b) Th e group formed by the quaternion units ± 1, ±/, ± j, ±k.
Construct the multiplication tabl e for those two groups an d show that they do not
have the same structure.
EXERCISE
(a) The symmetries of the Square.
3I
f a rotation replaces the vertices (1234) by the
vertices (a\02^3^4), then denote the rotation simply
by ((21 020304) . Th e identity is clearly e = (1 234) .
Denote b y a = (2341 ) th e counterclockwis e ro tation throug h 90° . Le t a 2 = b = (341 2 ) an d
c = a 1 = (41 23) . We have a4 = e. Th e powers of
a form a group S of order 4. If s denotes a rotation
of 1 80 ° abou t th e axi s 1 - 3 w e have s = (1 432) .
4 Th e coset sS i s simply
(1432),
sa = (21 43 ) = t,
saL = (321 4 ) = 1 1 , sa" = (4321 ) = v;
these together with the powers of a exhaust the symmetries of the Square:
'a b c s
e
a
b
c
s
t
u
e
a
b
c
s
t
u
a
b
c
e
t
u
b
c
e
a
u
c
e
a
b
V
V
V
V
s
s
t
s
t
u
V
t
s
V
t
s
u
t
e
c
b
a
u
a
e
c
b
V
uv
u
t
s
V
b
a
e
c
V
u
t
s
c
b
a
e
1.2. SUBGROUP S
9
(b) The quaternion group.
This is obtained a t once by the ordinary rules of multiplication o f the quater nion units
-1
+ 1 +i +j
+k
-J
+ 1 +i +j +k
+i -1 +k -j
+i -k -1 +i
+k +j —i -1
-1 -1 —i -j -k
—i —i +1 -k
+j
+k +1 —i
-j
-j
-k
-k
+i +1
-j
+1
+i
+j
+k
-1
—i
-j
-k
—i
+1
-j
-k
+k +1
+i
-j
+ 1 +i +j
+i -1 +k
+j -k -1
+k +j —i
-k
+j
—i
+1
+k
-j
+i
-1
The two groups do not have the same structure since the group of symmetrie s
has 5 elements of period 2 while the quaternion group has only one such dement .
http://dx.doi.org/10.1090/cln/015/02
CHAPTER 2
Rings and Fields
In the chapter o n groups we have isolated certai n properties o f ordinar y mul tiplication o f numbers an d examined these in some detail. I t has become obviou s
that the notion of group and multiplication in a group is a far more general concept
and has many more applications tha n that of multiplication o f numbers. I t is now
our purpose to dehne System s which will include som e of the ordinary propertie s
of number s (e.g. , addition , multiplication , an d later , division) . A t the sam e tim e
these Systems will remain sufficiently genera l to have wider application. Conside r
first a set for whose elements two Operations called "addition" and "multiplication"
are defined .
EXAMPLE. Th e additio n an d multiplicatio n o f od d an d eve n integer s obey s
the rules Eve n + Od d = Odd , Eve n x Eve n — Even , etc . Th e total behavio r o f
addition and multiplication o f Even and Odd is given in the tables:
+
Even Od d
Even Even Od d
Odd Odd Eve n
Even Od d
Even Even Eve n
Odd Even Od d
X
If "Even" is replaced by the number 0 and "Odd" by the number 1 , these tables are
the same as for ordinary addition and multiplication, together with the special rule
1 + 1 =0 .
Consider a set T whic h is closed with respect to two Operations, addition an d
multiplication. Th e element resulting fro m th e addition o f two elements is called
the sum a + b. We postulate that
(I) Th e elements of T ar e a group under addition.
The identity of this additive group is denoted by 0. The inverse of the element a is
denoted by — a. Accordin g to the customary Convention , the element a + (— b) i s
written a — b. The rules for the use of the minus sign before parentheses are easily
demonstrated:
a-(b +
c-d)=a +
{-[b + c + (-d)]}
= a + [-(-d) + (-c ) + (-b)] = a+d -c-b.
Note tha t th e orde r o f th e element s i n th e parenthesi s ha s bee n reversed . I f th e
commutative law holds the elements may be written in any order.
ii
2. RINGS AN D FIELD S
12
(II) Th e distributive laws.
If a, b,c e T, the n
[a] a(b
+ c ) = ab + ac
[b] (b
+ c)a = ba + ca
Consider the product (a + b){c + d). Fro m II[a] and then II[b] we have
(a + b){c + d) = (a + b)c + (a + b)d = ac + bc + ad + bd
and from II [b] and then II [a]
(a + b)(c + d) = a(c + d) + fc(c + d ) = a c + a J + bc + bd.
Setting the two results equal yields
The distributive laws imply that elements which are products are commutative with
respect t o addition . Thu s onl y i f som e element s ar e not products—and thi s cas e
(I*) Th e elements of T form a commutative group under addition.
Now let us consider some other consequences of (II). We have
ab = a(b + c) = ab + ac
whence aO = 0 for all a.
In a similar way we can show
0a = 0.
The product of any element with zero is zero.
It is also possible to prove the usual rules concerning the minus sign in multiplication
aO = 0
ab + a(—b)
or ab + a(—b) — 0 . Therefore a{~b) = — ab. B y a similar proof
I
{—a)b = —ab.
From the combination of these results we have
{-a){-b) =
-((-a)b) =
-(-(ab)) =
ab.
In som e o f th e literatur e a set T satisfyin g (I* ) an d (II ) i s calle d a ring and
mention is made of "associative rings," i.e., rings which satisfy th e postulate
(III) a,b,c £T = * a(bc) = (ab)c.
We adopt a more customar y usag e an d dehn e a ring to be a set whic h i s a n
"associative ring" in the sense above. A ring, then, is a set, closed with respect to
addition an d multiplication , tha t is a commutative grou p wit h respect to additio n
and obey s th e distributive la w of multiplication ove r additio n an d th e associativ e
law of multiplication.
2.1. LINEA R EQUATIONS IN A FIELD
13
EXAMPLE. Th e integers unde r ordinar y additio n togethe r wit h th e rule tha t
the product of any two elements is zero. An y commutative group can be made to
furnish a ring in this manner.
We are also interested in rings which possess division properties, i.e., inverses
with respect to multiplication. Henc e we introduee the postulate
(IV) Th e set T, excluding zero, is a group under multiplication.
The multiplicative identity is hereafter denote d by 1 . A set F which satisfie s
(I), (II), and (IV) is called a field. Thus a field is a group with respect to addition,
satisfies th e distributive la w of multiplication ove r addition , and , except fo r the
additive identity , i s a group wit h respec t t o multiplication. Clearl y (IV ) implies
(III). Furthermore (I*) holds in this System since the existence of the multiplicative
identity mean s tha t ever y demen t ca n be expressed a s a product. Consequently ,
addition i s commutative a s a consequence o f (II). From (IV ) we also obtai n the
"cancellation law"
a ^ O , b^0=^ab^0
since zero is not an dement of the multiplicative group.
A commutative field is a field which obeys the commutative law of multiplication. I f a field is commutative, it is convenient to adopt the notation for fractions.
We write a/b for b~la where b ^ 0 . Thus a/b = c/d 4 ^ b~la = d"lc<£ > ad —
bc. Fro m this rule immediately follow s
ca a
rt = b'
EXERCISE 1 . Deriv e the usual rule s fo r addition, multiplication , an d taking
the reciprocals of fractions.
2.1. Linea r Equations in a Field
Consider the System of m linear equations in n unknowns
f L i = a\\X\ + ai2*2 H h
a\nXn = b\
L2 — a 2\X\ + a22x2 + •" + a2nxn = b 2
[1]
L/m — &m\X\ ~ r ^m2*2 T " * * * r a mnXn —
un
where the coefficients a^ an d bj ar e elements of a field F. Th e System [1 ] is said
to hav e a Solution in F i f there exis t c\, c2, . . ., cn e F suc h tha t [1 ] is a true
Statement whe n C( i s substituted fo r xi. I f the bj are all zero [1 ] is said to be a
System of homogeneous equations . A System of homogeneous equation s clearl y
has the trivial Solution all xt = 0. Any other Solution is called nontrivial.
THEOREM 2.1 If n > m, the System
[2] Li
= anx\ +
Ö/2* 2
Hh
ainxn = 0 ( / = 1 , 2 , . . ., m)
ofm homogeneous equations in n unknowns always has a nontrivial Solution.
14
2. RINGS AND FIELDS
REMARK. Th e condition that the equations be homogeneous is quite neeessary
since, for example, the equations
+ y + z = 0,
x + y + z = l, x
can have no Solution in F.
Weus e induction onm .
(1) lfm = 0 the theorem certainly holds since we have n > 0 unknowns and
no eonditions on them. We eould take all xl• = 1 .
(2) Assum e the theorem is true for all Systems in which the number of equations is less then m.
PROOF:
Case 1 . Al l a^ = 0. The theorem is true, for we may choose all xt• = 1 .
Case 2. Ther e i s a nonzero coefficient . Withou t los s o f generalit y w e ma y
assume specificall y a\\ ^ 0 sinee altering the order of the x t o r of the equation s
has no effect upo n th e existence o r nonexistenee o f Solutions . W e take a\\ = 1
sinee we may multiply on the left by a^. Le t us examine the system of equations
Li = 0
L2 -a 2\Li =
[3]
0
ümlLl = 0
obtained by "eliminating" the variable x\ fro m the last m — 1 equation s in [2]. Any
Solution o f [2 ] is obviously a Solution o f [3] . Conversel y an y Solutio n o f [3 ] is a
Solution of [2 ] since the Solution must satisfy L\ = 0 . I t suffices t o show that [3 ]
has a nontrivial Solution.
The system of equations
L 2 - Ü2\L\ = 0
[31
Lm Qm\'-
J
\
0
is essentially a system of m — 1 equations in the n — 1 unknowns x 2, x^, ..., x m.
From the induction assumptio n thi s system possesses a nontrivial Solution . Usin g
this Solutio n w e complete th e Solutio n o f [3 ] by substitutin g th e firs t equatio n t o
obtain x\. Th e proof of the theorem is in no way changed when the coefficients ar e
multiplied on the right. D
EXERCISE
2. Prov e by an induction similar to that of Theorem 2.1 :
THEOREM 2.2 A system ofn equations in n unknowns,
L\ = a\\X\ + ai2* 21
H
- ain*n = b\
L2 = 021 * 1 + <222* 2 H h
a 2nXn = b
2
Ln — Q<n\X\ + ^«2* 2 + * * * + ttnnXn = t> n,
2.2. VECTOR SPACES
15
has a Solution for any choice of b\, b 2, • • •, bn e F if and only if the System of
homogeneous equations
[ L\— a\\X\ + fli2*2 H V
L2 = a 2\X\ + a 22x2 H h
a\ nxn = 0
a 2nxn = 0
Ln = a n\X\ + a n2x2 + • • • + a nnxn = 0
has only the trivial Solution.
EXAMPLE. Interpolatio n of polynomials.
The eoefficients o f a polynomial
f{x) = C o + c\x H h
Cn^ix" 1 "1
of degree < n — 1 can be chosen to satisfy th e linear equations in the C[
/ f e ) = ßi
xt ^ Xj for i i=- j
f(xn) = ß n,
where x\, x 2,..., x n, ß\, ß 2,..., ß
from the fact that the system
n
ar e an y preassigne d numbers . Thi s follow s
/(*i) = 0 ,
f(x2) = 0 ,
/(*») = 0,
of homogeneous linear equations has only the trivial Solution since no polynomial
of degree less than n can have n distinct roots.
2.2. Vecto r Space s
A (left) vector space V ove r a field F i s a n additiv e commutativ e group . It s
Clements are called vectors. The identity of this group will be denoted by 0. There
is an Operation which assigns to any a e F, A e V, a product aV = B e V. W e
as£. t u a t tiii a u p e i a t i u n s a u s i ^ t u e p u ö t u i a t e ö .
a(bA) = (ab)A,
(a + b)A -aA + bA,
a(A + B) =aA + aB,
\A = A.
(A)
(B)
(C)
(D)
The last postulate is not a consequence of the other three for we could define a A =
0 for all produets and satisfy (A) , (B), and (C).
EXERCISE
3 . Show : (a ) a0 = 0 , (b ) 0 • a = 0 , (c ) -A = ( - ! ) • A.
16
2. RINGS AND FIELDS
(a) W e have
aO + aO = a(0 + 0) = aO.
Adding — aO on both sides we obtain
a0 = 0 .
(b) Similarly ,
0A + 0 A = ( 0 + 0) A = 0A .
Adding — OA on both sides gives
0A = 0 .
(c) Fro m ( 1 — 1 ) A = O A = 0 we have
lA + ( - l ) A = 0 ,
or (—1)A is the inverse of A, i.e.,
(-1)A = - A .
From these results we can prove that aA = 0 implies either a = 0 or A = 0 .
If a i=- 0 then from a A = 0 we have
a~l(aA) =0 = (a~ la)A = A .
Hence A = 0 .
The fi vector s Ai , A 2 , . . ., A n ar e said to be linearly dependent i f there exist
x\, *2 , • •.> xn € F wit h not all x; = 0 such that
[1] xi
Ai + x 2A2 H h
x„An = 0 .
Take n = 1 . A vector Ai i s said to be linearly dependent if there exists an x 7 ^ 0
in F suc h that x A = 0 , i.e., if A = 0 . I f the vector is not zero it is independent .
Assume that [1 ] holds for nontrivia l X(. The n we have, say, xn ^ 0 . I t is possible
to write
l
An = -x~ lxiAx - x~ 1 x2A2 x~
xn-\An-i.
A sum of the form
c\A\ +c 2A2-i h
is called a linear combination of the vectors A\, A 2,..., A n. Th e Statement that n
vectors are linearly dependent i s equivalent to the Statement that one of them is a
linear combination of the others.
The dimension o f a vector spac e V i s the maximum number of linearly independent vectors in V . I f no such maximum exists the dimension of V is said to be
infinite.
EXAMPLE. Th e polynomials for m a vector spac e over the field of real numbers. I n particular, the polynomials 1 , x,..., x n ar e linearly independent. Clearl y
the dimension of the vector space of all polynomials is infinite.
The definitio n give s n o hin t o f a way t o obtai n th e dimensio n o f an y give n
vector space. In order to attack this problem we introduce
2.2. VECTOR SPACE S
THEOREM 2. 3 Given n vectors A\, A 2j . . . , A n e V and if B\, B 2,..., B
m > n linear combinations ofthe A/ , £/zew £/ze Bj are linearly dependent.
17
m
are
PROOF: W e are given the linear combinations
B\ — a nAi + ai 2A2 H h
ainAw
B2 — a 2\A\ + a 22A2 H h
a^Ä,,
[1]
#m = ^mM l + #m2^ 2 H +
« mwAw.
For the proof of the theorem we must find Xj e F suc h that
[2] x\B\
+ x2B2 H h
x mBm = 0
where not all Xj = 0 . Combining [1 ] and [2] we have
m
7=1
where
[3]
I L\ = X\Ü\\ + x 2a2\ + • • • + x mam\
L2 = Xi<2i 2 + X2«2 2 + • ' * + X mam2
J_jn — X\a\ n -f
- % 2&2n \ ' ' ' ~ r Xm^-mn,'
It suffices t o find nontrivial x; tha t make all L/ = 0 . Since m > n , the system L; =
0 of n equations in m unknowns has a nontrivial Solution according to Theorem 2.1 .
It follows tha t there are je,-, not all of them zero, such that [2 ] holds and therefor e
the theorem is proved. D
COROLLARY If V is a vector space in which all vectors are linear combinations
ofn given vectors, then the dimension ofV is less than or equal to n.
The vector space V i s said to be spanned by the vectors A\, A 2,..., A
if every vector B e V is a linear combination of the A t.
n
€V
THEOREM 2.4 If V is spanned by n linearly independent vectors, then the dimension Ny ofV is precisely n.
By the corollary t o Theorem 2.3 we have Ny < n. Bu t there exist n linearl y
independent vector s (e.g. , A\, A 2 , . . . , A n) i n V an d Ny i s the maximum numbe r
of independent vectors in V. Consequently , Ny > n. Therefor e Ny = n.
THEOREM 2.5 IfVisa vector space offinite dimension n, then there are n linearly
independent vectors in V which span the space.
PROOF: I f n i s the dimension o f V , then V contain s a set of n independen t
vectors; cal l the m A\, A 2,..., A n. Le t B b e an y vecto r o f V. Th e n + 1 vectors B, Ai , A 2 , . . . , A n ar e linearly dependent sinc e n is the maximum number of
independent vectors. Thus there are xt e F , not all zero, such that
x0B + x\A\ H h
x nAn = 0 .
2. RINGS AN D FIELD S
18
It follows that XQ ^ 0 ; for otherwise we would have x\A\ +X2A2 H \-x
n An =0
where not all x\, x 2,..., x n ar e zero. Th e At, however , ar e linearly independent .
Thus the possibility that XQ = 0 is exeluded. From xo ^= 0 we see at once that B is
a linear combination of the A;. We have proved, in fact, tha t any set of n linearl y
independent vectors in V Spans the entire Space. D
If W is a subspace of a finite-dimensional Space V, then obviously the dimension of W is not greater than that of V. More precisely we have the
COROLLARY IfWisa subspace
dimension ofV, then W = V.
of V and the dimension of W is the same as the
PROOF: Le t the dimension of W be n. Then there are n linearly independen t
vectors in W which spa n W. Bu t these must also span V by last Statement in the
proof of the theorem. Therefore W = V. D
Given a field F an d a number n, w e can construct th e vector spac e V n over F
consisting of all ordered n-tuples of elements of F. If
A = (ai,a 2,. . . , a n ) , a
B = (bi,b 2,...,bn), bt
t
e F,
e F,
we dehne
A + B = (a\ + b\, a2 + b 2,..., a
aA = {aa\, aa 2, . . ., aa n).
n
+ b n),
EXERCISE 4. Verif y tha t V n satisfies the postulates for a vector space.
The dimension of the space V n is easily see n to be n. Fro m 0 = (0 , 0? . . . , 0)
it follows that the n vectors
E/i = ( l , 0 , 0 , 0 , . . . , 0 ) ,
t/2 = ( 0 , l , 0 , 0 , . . . , 0 ) ,
C/„ = ( 0 , 0 , . . . , 0 , 0 , 1 ) ,
are linearly independent sinc e
n
Y2CiUi =
( c l> c 2, • . . , Cn )
i=\
is not zero unless all the ct are zero. Furthermore , the n vectors span V n since any
vector (c\, c2,,.., c n) € V n can be written as a linear combination YH=i ci Ui• Th e
result follows from Theorem 2.4.
EXERCISE 5. Sho w that any vector field of finite dimension n over F i s isomorphic t o V n. B y "V is isomorphic t o V„ " we mean tha t V ha s essentially the
same structur e a s V n. I n other words , to each elemen t o f one space ther e corre sponds an element of the other which behaves in exactly the same manner unde r
2.2. VECTO R SPACE S
19
the Operations among vectors. Thi s concept will be dealt with later in a more precise manner.
In V n, consider the equations
[1] x\A\
+ x 2A2 H h
xnAn = B.
Setting the components on one side equal to the corresponding components on the
other, w e obtain n linea r equation s i n n unknown s a s in Theore m 2.2 . Equatio n
[1] has a Solutio n fo r al l B e V n i f an d onl y i f th e A / ar e linearl y independen t
and therefore spa n V n. But this is equivalent to the assertion that the homogeneous
equation
x\A\ + x 2A2 H h
xnAn = 0
has only the trivial Solution, all Xf = 0 . I n terms of the components this is exactly
the Statement of Theorem 2.2.
http://dx.doi.org/10.1090/cln/015/03
CHAPTER 3
Polynomials. Factorization into Primes. Ideals.
In the following section s we shall devote considerable attentio n t o the theory
which has arisen from th e attempts o f algebraists t o solve the general equation of
nth degre e
anxn + a n-\xn~l + h
ÜQ = 0, a n ^ 0 .
This is the central problem of algebra and it was principally to handle this problem
that moder n algebrai c method s wer e developed . Wha t i s mean t b y a Solution t o
such an equation? In analysis a Solution is a method by which one can approximate
as closely as one likes to a number that satisfies th e equation. I n algebra, however,
the emphasi s i s o n th e natur e an d behavio r o f th e Solution . I t i s important , fo r
example, to know whethe r o r not an equation i s solvabl e in radicals. I n analysis,
this question is not necessarily relevant.
3.1. Polynomial s over a Field
Take for the domain of our discussion a commutative field F} A power series
over F is a sequence of elements of F which obeys certain rules of computation. A
sequence of elements of F i s simply a correspondence which associates with each
nonnegative integer n exactly one element c n of F. W e denote a power series by
oo
2
n
Co + c\x + c 2x H h
cnx H —
5Z CyxV '
v=0
This notation is nothing more than a way of writing such a correspondence; it is not
to be interpreted a s a sum, no meaning is to be attached to the x 's o r their indices,
and c vxv i s not to be considered a s a product. A polynomial i s a power serie s all
of whos e element s fro m a certain elemen t o n ar e zero . A polynomia l coul d b e
defined a s an ordered n-tuple of elements of F , bu t this would involve difficultie s
in framing rule s of computation, which we avoid by handling power series as one
may see by attempting to specialize the following rule s to suit this definition :
The sum of two power series is defined by
00
OO
v
OO
b %v
J2®vx +Y^ v =
v=0 v=0
X^
+b
^xV-
v=0
From this definitio n i t follows a t once that th e power serie s for m a commutativ e
group under addition with the zero element YlT=o ® ' %vAll fields are assumed hereafter t o be commutative unless the contrary is stated.
21
22 3
. POLYNOMIALS. FACTORIZATION INT O PRIMES . IDEALS.
EXERCISE 1 . Sho w that the polynomials are a subgroup of the group of power
The product of two power series is defined by
00
00
7 CyX
7
OO
— 7 C
@>ßX
v=0 ß=Q
nX
n=0
with
n
&n = /
J
Cv&ß
=
/
CyCln—v
J
v/tx> 0
By proving the distributive law of multiplication ove r addition and the associative
law o f multiplication , w e now sho w tha t th e se t o f powe r serie s ove r F form s a
ring.
The distributiv e la w follow s fro m th e linearit y o f th e produc t an d fro m th e
distributive law for the field elements. We prove this in general. Le t {a n} and {b n}
be two sequences of elements in F. Defin e the product {a n} -{bn} = d n to be linear
in the a's an d b's bu t otherwise arbitrary . Thu s d n i s of the form d n = Ha^üibj
with ciij e F. Consequently ,
kn}[{^} + {b n}] = {d n} with dn = Y^OtijCiiüj + bj) = Y^OiijCiüj + OtijCibj
or
{dn} = {c n} • {a n} + {c n} • {b n}.
EXAMPLES. Th e product of vectors in physics.
The scalar product is a • b = a\b\ + 0^ 2 + #3^ 3 and is therefore distributive .
The vector product a x b has components of the form ±(a tbj — ajb t) an d hence is
distributive.
The associative law follows immediately fro m
n—O \i6+v= w
=
/
E( E <vVpJ*
m
.
Since the result is symmetrical, it is independent of the placing of the parentheses.
This completes the proof that the power series forms a ring.
It is a simple matter to enlarge this ring to a field. First we note that the ring
already has a multiplicative identity, namely
1 + 0 - J C + 0-JC
2
+
-.-.
To obtain inverses with respect to multiplication, w e have only to include all elements of the form
VJ a nxn, m
> 0.
3.1. POLYNOMIALS OVER A FIELD 2
3
Elements of the form
+00
2_]anxn
—00
th
cannot be included, for the n coefficient o f a product would be written
+00
v-\-ß—n v=—oo
This expression, however, is meaningless sinc e the result of an infinite numbe r of
Operations (in this case additions) is not defined.
EXERCISE
2. Prov e that the product of two polynomials is a polynomial.
It follows fro m th e closure of the set of polynomials wit h respect to multiplication and from th e proof o f Exercise 1 that the polynomials ar e a subring of the
ring of power series. Th e multiplicative identity l + 0 - x + 0-Jt 2 + -- - i s also a
polynomial. This suggests
EXERCISE
3 . Sho w how the ring of polynomials may be enlarged to a field.
A polynomial is completely described if its nonzero coefficients ar e given. This
suggests the introduction of a finitenotation for a polynomial which omits all terms
with zero coefficients. W e denote the polynomial
oo n
y ^ akX k wit h ak = 0 for k > n by Y ^ a^xk {a^
^0)
where w e adopt th e Convention tha t al l terms wit h a^ = 0 are omitted fro m th e
barred symbol. In order to include the exceptional case we define
Ö = 0 + 0-J C + 0-JC
2
+ ---.
We have the particular cases:
ä = a + 0'X + 0'X2 + '-- ,
0-x2-\ .
x = 0+l'X +
It is easy to show that computation wit h the barred Symbol s gives the same result
as computation with the polynomials. For this purpose it is sufficient t o prove
nn
^akxk =
Y^ä
k(x)
k
.
k=0 k=0
We use induction on n. The Statement is certainly true for n = 0 since
o
k=0
If it is true for n, it must be true forn + 1 .
Case 1 . a n+ i = 0 . The assertion is trivially true.
24
3. POLYNOMIALS. FACTORIZATIO N INTO PRIMES. IDEALS.
Case 2. a n+\ ^ 0 . We have
w+1 n
n+\
k
2_]akX = 2~] akxk + a n+\xn+l =
Yjä^"(i) ^ + a n+\(x)n(x)
w+1
= J2äk(xf.
Since computation with the barred symbols is essentially the same as eomputation
with the polynomials, the bar may be omitted withou t danger of confusion. Thu s
we have created new symbols for the polynomials fo r whic h the signs of additio n
and multiplication have meaning.
A polynomial ao + a\x + • • • + a nxn ca n be used to dehne the function f(x)
which assigns to any c e F an d f(c) e F where f(c) — ao + a\c + • • • + a ncn.
EXERCISE
4. I f f(x), g(x) ar e polynomials and c e F, sho w that
f(x) + g(x) = h(x) = » f(c) + g(c) = h(c)
and
f(x) • g(x) = h(x) => f(c) . g(c) = h(c).
The degree of a polynomial is the highest index attache d to a nonzero coeffi cient. If the polynomial is zero, it possesses no degree in the sense of this definition.
To avoid the necessity of discussing special cases, however, the zero polynomial is
assumed to have any negative degree.
EXERCISE 5 . Give n two nonzero polynomials f(x) o f degree m an d g(x) o f
degree n, show that f(x)+g(x) ha s the degree max(m, n) if m ^ n and f(x) -g(x)
has the degree m + n.
EXERCISE 6 . Prov e the long division property for polynomials. That is, given
two polynomials f(x) an d g(x) ^ 0 , sho w tha t ther e ar e polynomials q{x) an d
r(x) suchtha t
[1] f(x)=q(x)g(x)
+
r(x)
where the degree of r(x) i s less than the degree of g(x).
PROOF: W
e consider two cases:
Case 1 . Ther e is a q(x) wit h f(x) = q(x)g(x). Consequently , r(x) = 0 and
the Statement is proved. In this case we say f(x) i s divisible by g(x).
Case 2. N o such q(x) exists . I n that event consider the set of polynomials of
the form
[2] f(x)-q(x)g(x).
In this set there must be a polynomial o f least degree; call it r(x). Th e degree of
r(x) i s less than the degree of g(x). Fo r suppose the degree of g(x) i s m an d the
25
3.2. FACTORIZATION INTO PRIMES
degree of rix) i s n > m , i.e.,
am x m , a
g(x) = a 0 + ai x H h
rix) = b 0 + bix + "- + b nx\ b
m7
n
^ 0,
i=. 0.
Then we may define a polynomial
rx(x) = r(x) - (b
n m
n/am)x ~ g(x)
of degree < n — 1 . But from
r{x) = f(x)-q(x)g(x)
we have
r i W = fix) - [q(x) + (b n/am)xn-m]g(x)
or r\{x) i s of the form [2 ] and has a degree less than that of rix). However , r(x)
was supposed to be the polynomial o f type [2 ] of least degree. Consequently , th e
degree of r (x) mus t be less than that of g(x).
We observe first that the result of long division is unique. For suppose we have
two representations
f{x) =qi(x)g(x) +
ri(x)
f(x) =q2(x)g(x) + r 2(x)
where the degrees of r\ (x) an d r2(x) ar e less than that of g(x). Thi s implies
[qi(x) - q 2(x)]g(x) + [ri(x) - r 2(x)] = 0.
Consequently, q\ (x) —q2(x) = 0 since two polynomials of different degree s cannot
be equal. Thus r\ (x) — r 2(x) = 0 and the proof is complete. D
An immediate consequence of the long division theorem is the familiär remainder theorem. Let g(x) = x — a in [1]. Thus
f{x) =q(x)(x -a)
+ c.
Hence
f(a)=c o
r f(x)
=q(x)(x -
a) + f{a).
COROLLARY The equation f(x) = 0 has the Solution x = a ifand only if f(x) is
divisible by x — a.
3.2. Factorizatio n into Primes
A polynomial f(x) ove r a field F i s said to btfactored i f it can be written as
the product of polynomials of positive degree:
f(x) = gix)-hix)--zix).
The polynomials g(x), A(JC), . . ., z(x) ar e calledfactors of fix). W e shall consider
two factorizations identica l i f on e can be obtained fro m th e other by rearrangin g
the factors an d multiplying eac h by some element of the field. If there are no two
polynomials o f positive degree which have the product fix), the n fix) i s said to
be irreducible in F.
3. POLYNOMIALS. FACTORIZATION INT O PRIMES . IDEALS.
26
For th e purpos e o f investigatin g th e Solution s o f equation s f(x) = 0 , i t i s
sufficient t o conside r irreducibl e polynomials . For , i f f(x) — g(x) • h(x) an d
f(a) = g(a) • h(a) = 0 , then either g(a) = 0 or h(a) = 0 . The polynomials have
the importan t propert y tha t ever y polynomia l possesse s a "unique " factorizatio n
into irreducible polynomials, where by "unique" we mean that any two factoriza tions of the same polynomial into irreducible factors are identical. The similarity of
this result and the theorem of unique factorization int o primes for integers is quite
striking. We are led to examine the properties common to the polynomials and the
integers in order to uncover the general principle of which those are special cases.
We note at once that the polynomials and the integers are both commutative rings,
with an identity element for multiplication, for which the law ab = 0 => a = 0 or
b = 0 holds. Thes e conditions are not enough to guarantee a unique factorizatio n
into primes. A s a counterexample, consider the numbers a + b\f^ 3 wher e a an d
b are integers. Clearly this is a ring of the given type. Yet we have
4 = 2.2 = (1 + - V ^ Xl - v^ä) :
unique factorization doe s not hold in this ring.
EXERCISE 7. Prov e that both factorizations o f 4 in the ring of a + b^/—3 are
factorizations int o primes.
Actually, i t is the existence o f lon g divisio n whic h guarantee s uniqu e factor ization int o primes i n the specia l case s o f th e polynomials an d the integers. Th e
long division theorem, however, involves the notion of "magnitude." In the case of
polynomials i t is the degree; in the case of integer s i t is the absolute value. Thi s
notion o f magnitud e i s not necessary, however , a s we shal l show . Wha t propert y
of the ring is it that guarantees the unique factorization theore m and is implied by
long division in these special cases?
3.3. Ideal s
Consider a ring R. A subset of R is called an ideal 21 if
(a) 2 1 is a group with respect to addition
„.
(b)
a e 2t ]
.
nr
b€R\=>abe*.
THEOREM 3.1 In the ring of integers there are no other ideals than those consisting ofthe multiples of a given integer and the set consisting ofzero ahne.
PROOF: Le t 21 be an ideal in the ring of integers.
Case 1 . 2 1 consists of zero alone.
Case 2. Ther e is a nonzero a e 21 . If a < Othen — l-a = — a > Oand — a e 21.
Thus if an ideal contains nonzero elements it also contains positive elements. From
the set of positive integers in 21 take the least and call it d. B y (b) every multiple of
d i s an element of 21 . We prove that 21 is precisely th e set of multiples of d. Tak e
any a e 2t . By the division algorithm we have
a = qd + r , 0
< r < d.
3.4. GREATEST COMMO N DIVISO R
27
But a e 2 t = ^r = a — qd e %l. Sinc e d i s the smallest positive integer in 21 and
0 < r < d, i t follows tha t r = 0 . Consequently , a = g<i . Thus, any element of 2t
is a multiple ofd. D
The same theorem holds for polynomials an d its proof uses the division algorithm for polynomials in a similar way. This property of the integers does not hold
for rings in which the unique factorization propert y does not hold, e.g., the ring of
numbers a + &>/— 3 where a, b are integers.
EXERCISE 8. Sho w that the subset of elements for which a + b is even for m
an idea l i n th e rin g o f a + b V—3. Prov e tha t thi s idea l doe s no t consis t o f th e
multiples of any one element. (Se e Exercise 7.)
We make the definition :
An idea l i s calle d a principal ideal if i t is th e se t of al l multiples o f a given
element d of the ring.
Both for integers and for polynomials, where factorization i s unique, the only
ideals ar e principal. I n on e case wher e there i s no unique factorizatio n w e hav e
shown that this result does not hold. W e shall prove that the unique factorizatio n
theorem is a consequence of the following postulates:
(1) Multiplicatio n is commutative.
(2) Ther e is a multiplicative identity l e R.
(3) ab = 0 =» either a = 0 or b = 0.
(4) Ever y ideal in R is principal.
3.4. Greates t Common Divisor
Let R be a ring satisfying postulate s (l)-(4). Assum e a,b e R an d ab ^ 0 . If
there is a c e R suc h that a • c = b, we say, variously, "b is a multiple of a," "b is
divisible by a," an d "a is a divisor of bV L We write a\b (read : "a divides fe"). The
divisors of 1 ar e called the units of the ring.
EXAMPLES. I n the ring of integers the units are ± 1. I f R i s the ring of polynomials over F its units are all a e F,a ^ 0 . The ring of Gaussian integers a + bi,
where a an d b ar e integers, possesses th e units ± 1 , ±i. I t is interesting tha t thi s
is a principal ideal ring. Th e primes of the subring of ordinary integer s ar e prime
in thi s rin g onl y i f the y ar e o f th e for m 4 n — 1 . Al l other s ar e no t prime , e.g. ,
5 = ( 1 + 2/)( l — 2i). Thi s is a consequence of the theorem that all primes of the
form 4n + l ca n be represented as the sum of two Squares.
If a\b and b\c, then a\c
f \ = » 3a, ß e R suchtha t
b\c '
aoi — b
bß = c.
(Read: "there is (are)" or "there exists(s)" for "3.") Consequently, aaß = c or a\c.
2
For rings containing element s a ^ 0 , b ^ 0 such that a • b = 0 is possible, we call a an d b
"divisors of zero."
3. POLYNOMIALS. FACTORIZATION INT O PRIMES. IDEALS.
28
If a\b, a\c then a\(b + c)
a\b
3s, t e R suchtha t \
a\c
1
at = c.
Therefore a(s + t) = b + c or a\(b + c).
\fa\b an d b\a, the elements a and b have the same division properties, that is,
a\c <\$> b\c an d c\a 4 > c\b.
PROOF:
b\a ,
. c\a
.
,
a\c a\b
Sinee a an d b appear symmetrically the proof is complete. D
If a\b and b\a, then a and b are said to be equivalent (with respect to division).
Two elements are equivalent if and only if they differ b y a unit factor.
PROOF: Le t a, b e R be equivalent, i.e., a\b an d b\a. Thi s means that there
are elements £ , rj i n R suc h that a = sb an d b = rja. Therefor e a = srja. Thi s
implies a(£?? — 1 ) = 0 . Sinc e a / 0, th e use of postulate (3) gives srj = 1 . Thu s
e| 1,7/| 1. Conversely, if er\ — 1 an d b = sa, the n & is equivalent to a. Fro m b = sa
we already have a|£. Multiplyin g by rj we obtain & = rjea = a or fc|a. D
Now suppose a 1? a 2,..., a
n
€ /? . Consider the set
2l = a i # + a 2 # + hfl«/?
3
consisting of elements of the form
[1] a\X\
—
a2%2 + fl3*3 H H
Ö«X« ,
where JCI , * 2 , . . ., x n e R. 2 1 is an ideal. T o prove this we have only to show that
21 is an additive group (i.e., is closed under addition and subtraction) and is closed
under multiplication by elements of R. Ou r result is immediate since
and
nn
n
i=\ i=l
i=l
/n \
n
where x/, >v, z € iv . 2 1 is a principal ideal by postulate (4) applied to /?; therefore
2t consists of the multiples of a Single element d. W e now write
% = dR = a tR + a 2R H h
a„£;
that is,
I a i s a multiple of <i, and
I a is expressible in the form [1].
3
The sum of tw o set s denoted b y S + T i s the set of elements s + t wher e s e S, t e T. Th e
union (o r logical sum) of the two sets is denoted differently b y S U T.
3.4. GREATEST COMMO N DIVISO R
29
Furthermore, d € 21 since, by postulate (2), 1 e R and hence 1 • d e 21 . Also
#1, a2, • . ., an e 21 ; for we may take, say, x\ = 1 , X{ — 0 (/ > 1 ) in [1 ] above.
Consequently, there are x\, x 2 , . . . , x „ € J R such that
öf = ö l xi + a 2 x 2 H h
a Mxw
where d|aj ( i = 1 , 2 , . . ., n). Thus d is called a common divisor of the a*.
Let S be any common divisor of the at, i.e., <5|ai, <5|a2 ,..., 5|a„. I t follows for
any choice of the xt tha t
S\a\X\ + a 2 x 2 + • • • + a nxn.
Hence S is a divisor of all elements of 2t. Consequently, S\d. Conversely, since d\ai
(i = 1 , 2 , . . ., «) , 8\d => 5|a,- . Thus, the common divisors of a? an d the common
divisors of d ar e the same. An y element having this property is called a greatest
common divisor of the a\. Th e greatest common divisor s of the at are equivalent
under division . Fo r if d an d d' ar e greatest commo n divisor s o f the at we have
d\d' and d'\d. Fo r this reason any greatest common divisor of the at i s called the
greatest commo n divisor . Equivalen t element s wil l not usually b e distinguished;
there is no danger o f confusion sinc e the behavior o f an element wit h respect to
division is exactly the same as that of any of its equivalents.
A linear diophantine equation in R is an equation of the form
a\X\ + a 2 x 2 + • • • + a nxn = b.
Such an equation can have a Solution if and only if b is a multiple of d, the greatest
common diviso r o f the a?. Thi s is a direct consequenc e o f dR = a\R + a 27? +
• + a nR.
EXAMPLE. Th e equation
32x + 74y-~ lSz = b
obviously has no Solution in integers if b is odd. On the other hand it has Solutions
for all even b.
The element s a\, a2 , . . . , a n o f R ar e said to be relatively prime i f 1 is their
greatest commo n divisor . Thus , th e integers 6 , 1 0 , 1 5 are said t o be relatively
prime. If the ai are relatively prime the diophantine equation
a\x + a2*2 H h
anxn = 1
has a Solution. A n element p i s said to be prime i f it has no divisors othe r tha n
itself and 1 an d if it does not divide 1 . According to this definition the element 1 i s
not a prime.
THEOREM 3.2 Ifa prime p e R divides aproduct ab, then it divides at hast one
ofthefactors, i.e.,
p\ab) .
.
p\a J
{Read: "p does not divide a "for p\a.)
30
3. POLYNOMIALS. FACTORIZATION INT O PRIMES . IDEALS.
EXAMPLES. Thi s theore m i s tru e i n genera l onl y fo r principa l idea l rings .
Consider, e.g. , 2 • 2 = ( 1 + >/—3)( 1 — V—3) i n the ring of numbers a + by/^3
where a and b are integers. Again , in the ring consisting of the even integers 6 is
prime and 1 8 is prime, yet we have 6-6 = 2 - 18.
PROOF: Th e greatest common divisor of p an d a i s 1 since p\a an d the divisors of p ar e only p an d 1 . Thu s p an d a are relatively prim e and the equation
px + ay = 1 ha s a Solution JC, y e R. Multiplyin g both sides of the equation by b,
we obtain
b = pbx + aby.
Since p\ab, th e right side is divisible by p. Henc e p\b. D
COROLLARY If p\a\a 2- —a
n
then p divides at least one ifthe a\.
PROOF: Thi s theore m i s tru e fo r n = 1 . W e us e induction . Assum e th e
theorem is true for n and suppose
p\a\a2ai-->anan+\.
By Theorem 3. 2 either p\a n+\ o r p\a\a 2 * * • an. I n the former cas e the theorem is
proved. In the latter case the theorem follows b y the induction assumption. D
Suppose an element possesses a factorization int o primes. Two such factoriza tions are said to be identical if the primes of one can be paired off with equivalent
primes in the other. Thus identical factorizations ar e the same except for order and
multiplication by the units. If all possible factorizations of an element are identical,
the element is said to possess a unique factorization int o primes.
THEOREM 3.3 If an element possesses a factorization into primes, the factorization is unique.
PROOF: Assum e two factorization s
p\P2'"Pr = qiq2'-q sFirst w e hav e r = s. Fo r eac h p t divide s a q k an d n o q k possesse s mor e tha n
one pt a s a divisor. Therefor e s > r . Similarl y r > s. Therefor e r = s. No w
Pilqiqi'' * <?r- B y the corollary t o Theorem 3.2 , p\ i s a divisor o f on e of the q t\
P\ \q\ say. Since q\ ha s only itself and 1 a s divisors and p\ \ 1 i t follows that p\ an d
qi ar e equivalent, i.e., p = sq\. Consequently , we may write
sqiPi'" p
r
=q\qi--q
r
or
(ep2)--Pr =q2'-q
The theorem follows by induction. D
r.
It is conceivable that there are elements which possess no decomposition int o
primes. In other words, an element might be factored i n such a way that nonprimes
are include d i n th e factorizatio n n o matte r ho w fa r th e proces s i s carried . Fo r
integers an d polynomials there is no such danger sinc e the number of elements in
3.4. GREATEST COMMON DIVISOR
31
the produet is limited by the "magnitude" of the element being factored. However ,
the result is true in general and therefore ever y elemen t o f R possesses a unique
factorization int o primes.
LEMMA Let a\, 0,2, . .. G R be a sequence ofnonzero elements such that a; + i|a;
for all i. Then all the at from a given element ort are equivalent.
PROOF: Le t 21 be the set of all multiples of the <z/. 21 is an ideal; for take any
a,b G 21 . We have
a G 2 t O a = ajC,
b G 21 O b = ajd.
Assume i > j, say. The n a r |a ; . Therefor e Bs e R with b = a ts. Hene e a ± b =
ai (c ± s); 2t is closed with respect to multiplication. Furthermore , 21 is closed with
respeet to multiplication by elements of R since for r G R, a • r = a t(cr). 2 t is a
principal ideal by postulate (4) and hence there is a d e R such that 21 = dR. Thu s
d • 1 G 2t and d is in 21. Therefore there is an an which divides d. Consequently ,
&n, ß „ + i , *W2 > . . . \d.
But ai\ai = > al• e 2 1 => a t G dR. Henc e d\a n, a„+i, ««+2, • • • • We have proved
that all the a\ for / > n are equivalent to d. D
THEOREM 3.4 Every a G /? w either zero, a unit, a prime, or a produet of primes.
PROOF: Suppos e a is none of these, i.e., a ^ 0 , af 1, and a is neither prime
nor a produet o f primes. Sinc e a is not prime i t can be expressed a s a produet
bc = a wher e neithe r b nor c is equivalent t o a. Clearl y i ^ 0 , c / 0 . I f c
and & wer e each eithe r a unit, a prime, o r a produet o f primes, the n a would be
in one of these categories. Thi s possibility is mied out . I t follows tha t one of the
divisors, say b, has the same property as a. But this reasoning could be carried out
indefinitely t o give a sequence of elements satisfying th e hypothesis of the lemma
but for which the terms do not eventually becom e equivalent. Thi s indirect proof
establishes the theorem. D
We have proved that every element can be factored uniquel y into primes. Suppose a has the factorizatio n
a = pip2-- p r
where the pt ma y be the same or distinet. I t is possible that the same prime and
its equivalent s ma y appear mor e tha n onc e i n this expression . I f all equivalent
elements are taken together we may write
Vi Vi
y
cn
a = ep l1 p2~ "• p s% Vi > 0,
where the pi are now essentially distinet, i.e., Pi\pj, i # 7 . Clearly v\ + V2~\ h
vs = r. Any element of the form
d = p?lpZ2-'P?°, 0 < f t < v
f,
is obviousl y a divisor o f a. Conversely , i f d\a it must be of this for m sinc e the
factorization o f d can contain no prime to a higher degree than its degree in a. W e
32 3
. POLYNOMIALS. FACTORIZATION INT O PRIMES . IDEALS.
may no w find an expression fo r th e greatest commo n diviso r o f tw o elements i n
terms of their factorizations. Suppos e a,b e R with the factorization s
a = p\ lp? •••?/,
b
= p?p?--.p?',
in whic h w e understan d tha t /x& , Vk ar e no t bot h zero . Thu s ever y prim e whic h
appears i n eithe r factorizatio n appear s i n both , i f onl y nominally . Th e greates t
common divisor of a and b is then
d = Px P
2
•
••Pr
where a,- = min(v (-, /x;). In a similar way we write the least common multiple
D = pfpf-p^
where ßt = max(v t, ß t).
http://dx.doi.org/10.1090/cln/015/04
CHAPTER 4
Solution of the General Equation of nih Degree . Residue
Classes. Extension Fields. Isomorphisms.
4.1. Congruenc e
Consider th e notatio n a = b. Th e sig n o f equalit y mean s tha t a an d b ar e
merely two ways of writing the same element. In other words, the symbols a and b
are interchangeable in any discussion. We have already considered relations which
are somewhat like equality i n this respect. Fo r example, in the preceding sectio n
"a is equivalent to fc" means that a and b are interchangeable in any discussion of
divisibility properties. Let us investigate relations of this kind in somewhat greater
generality.
Assume we are given a set S of elements a,b,c,... .
A relatio n
a= b
(read: "a congruent fc")between two elements of S is called congruence (or equivalence or similarity) if it satisfies the postulates
(A) a
(B) a
(C) a
= a (reflexivity)
,
= b => > b = a (symmetry)
,
= b, b = c => a = c (transitivity) .
ExAMPLES. A relation nee d not satisf y an y o f th e postulates. Fo r instance ,
let S be the set of human beings with the relation "a loves bV Ever y day "a love s
a" i s violated by some suicide. Furthermore , " a love s fc" is nonsymmetric a s any
reader of novels can teil. True, an argument can be made in favor of the transitivity
of thi s relation unde r th e principle o f "Lov e me , lov e my dog"—bu t th e logic i s
dubious. Fo r a se t o f peopl e gathere d i n a pitch dar k room a t a seanc e we hav e
the relation "a can see b" which vacuously satisfie s th e last two postulates, but not
the first. A more orthodox example is the relationship "a approximates fc" among
the real numbers. If we understand this to mean that the difference betwee n a and
b lie s within som e give n limi t o f error , w e se e then thi s relation i s reflexive an d
Symmetrie but not transitive. A relation which violates only the Symmetrie law is
"a < b" i n th e se t o f integers . W e have show n b y th e las t thre e example s tha t
the postulates o f a congruence relatio n ar e independent; i.e. , n o postulate ca n b e
derived logically from the other two.
By means o f the congruence relation th e elements o f S ca n be classified int o
nonoverlapping "species. " For define S a a s the set of all s e S such that s = a. I f
Sa and Sb overlap at all they are completely identical. For suppose 3c e S such that
33
34
4. RESIDUE CLASSES , EXTENSION FIELDS , AND ISOMORPHISM S
c e S a, c e Sb- Then c = a , c = b. B y postulates (B ) and (C), a = b. I f d e S a
then d = a an d hence d = b. Conversely , an y element i n Sb is in S a. Thu s the
classes S a d o not overlap. Conversely , a covering of S by nonoverlapping subset s
furnishes a congruence relation; namely, a = b if a and b are in the same subset.
Let jf ? be a commutative rin g an d assum e a congruence relatio n i n R whic h
satisfies postulates (A), (B), (C) and is preserved by the Operations in the ring, i.e.,
™u
(D) a
J __ v a + b = c + d
= c, b = d =\$> , j
a • d = c • d.
EXAMPLE. Le t R be the set of integers with the added relation: Tw o integers
are congruent if their difference i s even. Thi s is clearly a congruence of the above
type. By means of this congraence we divide R into two classes, the even numbers
and th e od d numbers . Not e tha t thes e tw o classe s ar e th e element s o f th e rin g
whose Operations are defined by the tables at the beginning of Chapter 2.
A congruence relation which satisfies (D) will always dehne a ring in the same
manner as in the above example. To prove this result, we first dehne
\$a+b = S a + Sb
and
Sa-b = S a ' Sb',
i.e., S a+b is the set of elements c + d where c = a,d = b.
EXERCISE 1 . Sho w that this definition i s consistent with the former definitio n
of Sa+b a s the set of all elements congruent to a + b.
LEMMA The sets S x, x e R,form a commutative ring.
PROOF: Firs t w e sho w tha t the y constitut e a commutativ e additiv e group .
Closure is obvious. The associative law is trivial:
(Sa + Sb) + S c — S(a+b)+c = S a+(b+C) = S a + (Sb + S c).
There is an identity element:
Sa + So = S a.
To each element there is an inverse:
Sa + S- a = So.
The commutativ e la w i s obvious . Next , fo r multiplication , th e distributiv e la w
holds by
Sax^b i
S c) =
S a(b+c) =
Sab+ac
— S ab + S
ac
= ö
a
' Ob H ~ o a • ö c.
The associative and commutative laws for multiplication clearly hold. D
Consider the set 2t = So of all a = 0.
35
4.1.CONGRUENCE
(1) 2 1 is closed with respect to addition and subtraction. Fo r
a, & e 2 1 =^ a = 0 , b = 0=>a + b = 0.
Furthermore, we have a = b, but
a = b, —b = —b = > a — b = 0 .
Hence a,be%L=>a±be\$l.
(2) 2 t is closed with respect to multiplication by elements of R
Consequently, 2t is an ideal. Thus we have shown that a congmence relation which
is preserved under the Operations in R defines a n ideal, the set of elements a = 0 .
Conversely, let 2t be an ideal in R. Us e 21 to dehne a new congmence relation:
a = b means a — b e 2t .
EXAMPLE. I
f 2t is not an ideal, a congmence cannot be defined in this manner.
Suppose, for example , that 2t is the set of od d integers; then , by this mle a = £ a.
That th e ne w relatio n i s a congmence follow s fro m th e singl e fac t tha t 2 t i s a n
(1) Wehav e
0 e 2 t =>• a = a.
(2) a = b ==» a — b G 2 1 = ^ -{a - b) e 2t . Hence
b — a e% o
rb
= a.
(3)
a =
1
=> a -b, b
b= c J
- c e2l .
Therefore a — c e 2 t o r a = e . W e show furthe r tha t thi s congmenc e
satisfies (D).
(4)
a= b
a — b, c —
J G 2 1.
c= ä
Using the group property we have
a + c-{b +
d)e% o
ra
+ c = b + d.
Using closure under multiplication by elements of R, we obtain
(a - b)c, b(c - d) e 2 1 =* (a - b)c + b(c - d) e 21.
Consequently,
ac — bd G 2 1 o r ac
= bd.
36
4. RESIDUE CLASSES, EXTENSION FIELDS, AN D ISOMORPHISMS
By means of this new congruence relation we may now define a n ideal So , the set
of all elements a = 0. But
aeSoOa =
0<^a-0 =
ae^i.
Clearly, the speeification o f a eongraenee relation of this type and the specificatio n
of an ideal are completely equivalent .
The congruence defined i n R by means of the ideal 2t is denoted by
a = b (modSl).
The classes S ö, S&,... are called the residue classes (mod 21). In a principal ideal
ring, 21 consists of the multiples of one element d. I n that case we use the notation
(mod d) instea d of (mod 21).
ExAMPLES. Conside r the congruence defined in the set of integers by the ideal
consisting of the multiples of 7
a = b(modl) =»7|(a-fe)
.
Thus a = b means a = Im + b: A n intege r i s congruent t o its remainder afte r
division b y 7 . Th e ring o f integer s i s spli t thereby int o the seve n residue classe s
So, S\,..., Sg . These classes are the elements of a commutative ring. We have, for
example, S2 + S 4 = SO , S2 • S4 = S\ 9 S3 + S 5 = S\. I t is convenient to omit the S' s
and denote the elements o f the ring by the subscript s 0, 1 , . . . , 6 alone. Th e ring
contains a multiplicative identity 1 . We further not e that all nonzero elements have
inverses:
dement 0 1 2 3
45
6
inverse
1 45
23
6
Thus the residue classes (mo d 7 ) form a field. Linear equations may be solved in
the usual way; if 3x — 4, then x = 3 - 1 4 = 5- 4 = 6 . Quadrati c equations can be
solved by completing the Square; thus
x2 + x + l=0= (x
+ lj) + ^ = ( j t + 4 ) 2 - l = 0 ,
and we obtain the Solutions
JC
=— 4 ± 1 o r x = 4 , x = 2 .
Not all equations of degree higher than one have Solutions; e.g., consider x2 — 3 =
0.
If a n integer m i s not prime, then the ring of integers (mo d m) i s not a field.
For we have divisors a • b = m with a,b 7 ^ m and hence we have divisors of zero.
EXAMPLE. Th e integers (mo d 12) do not form a field since we have 3-4 = 0 ,
for example. Thus there are divisors of zero; these elements do not have inverses.
THEOREM 4.1 Let R be aprincipal ideal ring as defined by postulates (l)-(4 ) in
Chapter 3. If p e R is a prime, the ring ofthe residue classes (mod p) is a field.
37
4.2. EXTENSION FIELD S
PROOF: Th e ring R (mo d p) ha s a unit dement (i.e. , multiplicative identity)
S\. I t is sufficient t o show closure under division; S a ^ SQ implies that S a ha s an
inverse. I f S a ^ So then a e S a =\$> a ^ 0 (mod p). Henc e p\a. Sinc e the only
divisors of p ar e p an d 1 , the greatest common divisor of p an d a is 1 . Therefor e
we can find x, y e R suc h tha t ax + py = 1 . I t follow s tha t ax = l(mo d p).
Consequently, S aSx — S\. D
4.2. Extensio n Fields
Consider the field F o f integers (mo d 7) and construct a table of their Squares
X
1
x
0 1 2 3 4 5 6
0 1 4 2 2 4 1
2
The equation JC = 3 has no Solution in this field. What, then, does it mean to solve
the equation? I n order to answer our question, conside r th e field of real numbers
and the equation x 2 + 1 = 0 , which has no Solution in this field. In order to solve
x2 + 1 = Ow e construc t th e large r field of number s a + bi, wit h a , b real an d
i2 — i. Th e same construction whic h leads from th e real numbers to the complex
numbers allow s us to "solve" the most general equation wit h real coefficients. I n
our example, the equation x2 — 3 = 0 has a Solution in the field of numbers a+b V3,
where a and b are integers (mo d 7).
EXERCISE 2. Verif y tha t the set of numbers a + b«/3, wher e a, b are integers
(mod 7) , actually is a field.
An extension field of a field F is a field E suc h that E D F an d the Operations
for whic h E i s a field coincide in F wit h those alread y defined . F i s then calle d
a ground field with respect to E. T o solve an equation f(x) = 0 where / ( x ) i s a
polynomial over F mean s to find an extension field of F which contains an dement
a suc h that f(a) = 0, Th e dement a i s then called a root of f(x). Le t p(x) b e
an irreducible polynomia l ove r F. I f p(x) ha s degree 1 it has a root in F . I f th e
degree of p(x) i s higher than 1 , p(x) canno t have a root in F; sinc e
a e F , /?(a
) = 0 =£• (x — a)\p(x),
so that p(x) woul d be reducible.
THEOREM 4.2 Denote by F(x) the ring of polynomialsover F. If p(x) e F(x) is
irreducible, then there is an extension field E of F which is " essentially" the ring
ofresidue classes E = F(x) (mo d p(x)). We further assert that p(x) has a root
in E.
PROOF: Th e ring F(x)(mod p(x)) i s a field F b y Theore m 4.1 . I f </>(x) e
F(x) w e may writ e (ß(x) = c\$ + c\x + • • • + c nxn. Th e residue class S<f,( X) € F
may be written
SCo+c\x-\ \-c
nx
n=
*^c o ~f " ^c\S x +
* ** + o
CnSx.
The residue classes can therefore be described in terms of only two types, Sa where
a e F , an d the class S x. Al l others ca n be obtained fro m thes e by addition s an d
multiplications.
38 4
. RESIDUE CLASSES , EXTENSIO N FIELDS , AND ISOMORPHISM S
Let us consider F = F (mo d p(x)), th e set of the S a, a e F. S a = Sb means
a = b (mod p(x)) o r p(x)\(a — b). It follows that a — b = 0ora = b. Thus every
element o f F contain s onl y on e element o f F. Furthermore , ever y elemen t o f F
belongs to an element of F. Th e elements of F can be paired off with the elements
of F. Moreover , we have
^a I " ^b — ^a+bi ^a
' ^b — ^a-bi
so that the computation wit h the elements o f F i s in no way different fro m com putation with the elements of F. W e have shown that F i s isomorphic to F. 1 No w
consider the equation
p(x) = a o + a\x + • • • + a nxn = 0.
The corresponding equation with coefficients i n F is
Sa0 + Sa j X + • • • + S ünX = SoThis possesses the Solution X = S x i n F, for we have
\*0 "I " ^ Ö I ^ X + * * * + b
anöx
=
o
fl0 + aiJC _|
\-a
nx
n
— Sp(jc ) = oo -
We have obtained a n extension field E o f F , an d shown that it is isomorphi c
to F . I n order to obtain a n extension field E o f F w e have only t o replace thos e
elements o f F whic h ar e i n F b y th e correspondin g element s o f F . W e defin e
addition an d multiplication fo r F a s follows: wheneve r S a occur s in the tables of
addition or multiplication fo r F replac e it by a. Fo r example, S a + S x = S a+X in
F lead s to a + S x = 5 a + x i n F. A field is defined whe n we are given its elements
and rule s o f Operation . Eve n thoug h th e element s o f F hav e a mixe d nature —
some of them are classes of polynomials, others elements of F—it is nonetheless a
perfectly goo d field. By constructing this field F w e have proved the theorem. D
The introduction o f complex numbers into analysis proceeds in the manner of
this theore m a s a n extensio n o f th e real number s tha t contain s a root o f x 2 + 1 .
Long divisio n b y x 2 + 1 is s o simpl e tha t on e ma y immediatel y writ e dow n th e
residue classes.
EXERCISE 3 . Solv e th e equatio n x 2 + 1 = 0 by extendin g th e field of rea l
numbers.
We now develo p a more careful descriptio n o f th e elements o f F . Th e mos t
general residue dass i n F i s S^x)* </>(*) € F(x). Sinc e d[p(x)] = n (read : "th e
degree of p(x)" fo r "3[p(x)]"), we may assume 3[0(x)] < n. Fo r we can express
any polynomial </>(x) i n the form
0 0 ) = q(x)p(x) + r{x), d[r(x)]
Hence (f>(x) = r(x) (mo d p(x)) o r S^^ =
0(x), i/f(x) e F(x) wit h
3[0W], 3W(x)]
See Section 4.3.
< n.
S r(X). O n th e othe r hand , suppos e
< n.
4.2. EXTENSION FIELDS
39
If 50^ ) = S^x), i t follows tha t p(x)\[(p(x) — \/r(x)] and hence (p(x) — \//(x) = 0
or 4>(x) — \jf{x).
EXAMPLE. a + bi = c + di => a = c, b — d. Th e sum S«^) + S ^ ) =
S(p(x)+i/f(x) is at once in the prescribed form sinc e d[(j)(x) + xl/(x)] < n.
For the product, however, the result is not so simple. We may write
(j)(x)\jf(x) = q(x)p(x) + r(x).
This yields
S(f)(x) ' S\ff(x) = S
r(x).
Only in those cases where 9[000 • if(x)] < n is S^).^*) immediately an element
of the prescribed form .
We have shown that the elements of E are expressible in the form
^c 0 +cix+---+c n _ix n - 1 =
c
0+
C
\SX +
' ' ' + C n-\SX
Two such elements are equal if and only if corresponding coefficients ar e equal.
EXAMPLE. Le t F b e the field of integer s (mo d 7). Th e equation p(x) =
x3 — x + 2 = 0 has no Solution in F. Consequently , sinc e p(x) i s of third degree
it is irreducible; for any factorization o f p(x) woul d have to contain a linear factor.
(On the other hand a fourth-degree polynomia l might have two quadratic factors. )
The extension field E consists of all elements a + ba + ca 2 where a,b,c e F and
p(a) = a 3 - a - 2 = 0.
We have a3 — a + 2, a4 — a 2 + 2a, a5 = a 3 + 2a 2 = 2a 2 + a + 2 , . .. . In this
manner any power of a—and henc e any polynomial in a—can b e reduced to one
of degree not greater than two.
Let F be a field, E a n extension o f F. Suppos e ot e E. W e distinguish two
cases:
Case 1 . Ther e is no nonzero polynomial over F which has a as a root.
EXAMPLE. Tak e F to be the field o f rational numbers. Th e real number e =
2.718 . .. i s the root of no polynomial with rational coefficients. I f a is an element
of this type it is said to be transcendental with respect to F.
Case 2. I f ot is not transcendental, ther e is a polynomial f(x) G F(x) suc h
that f(a) — 0. We say, then, that a is algebraic with respect to F. Amon g all the
polynomials whic h hav e the root a, ther e is one of least positive degree . Denot e
this b y p(x). Sinc e p(a) = 0 any multiple o f p(x) i s also a polynomial wit h
the root a. Conversely , suppos e f(x) e F(x) an d f(a) = 0 . W e can find q(x),
r{x) e F(x) suchtha t
f(x) =q(x)p(x) +r(x),
where d[r(x)] < d[p(x)]. Substitutin g a for x, we obtain
f(a) =q(a)p(a)
+r(a)
,
40 4
. RESIDUE CLASSES , EXTENSION FIELDS , AND ISOMORPHISM S
whence r(a) = 0 . Bu t p(x) wa s assume d t o b e a polynomia l o f lowes t positiv e
degree whic h ha s th e roo t a. Therefor e r(x) = 0 o r f(x) = q(x)p(x). W e hav e
proved
LEMMA 4. 3 The polynomials for which a is a root are the multiples of the polynomial p(x) of lowest degree.
Th e number \[l i s a root o f th e quadrati c polynomia l x 2 — 2 ove r
the field o f rationa l numbers . I t canno t b e a roo t o f a lower-degre e polynomia l
since i t i s irrational . Consequently , an y polynomia l whic h ha s y/l a s a root mus t
be a multiple o f x 2 — 2 .
EXAMPLE.
LEMMA 4. 4 The polynomial p(x) is irreducible in F. For otherwise p(x) =
a(x) - b(x) where d[a(x)], d[b(x)] > 0 .
It follows tha t p(cc) = a(a) • b(a) = 0 , whenc e eithe r a(ct) = 0 o r fc(a) = 0 .
This contradict s th e assumption tha t p(x) i s a polynomial o f least degree .
LEMMA 4. 5 The only irreducible polynomials which possess the root a are the
polynomials c • p(x), c e F .
Hence an y polynomia l o f lowes t degre e i s equivalen t t o p{x). Therefore , ac cording t o th e Conventio n o f th e las t chapter , w e sa y p{x) i s the polynomia l o f
lowest degree .
Suppose F i s a ground field an d p(x) a n irreducible polynomia l ove r F. Le t E
be a n extensio n o f F whic h contain s a root a o f p{x). B y Lemm a 4. 5 p{x) i s th e
polynomial o f leas t degree for whic h p(a) = 0 . However , E ma y certainl y includ e
elements whic h ar e no t necessar y fo r th e Solutio n o f p{x) — 0 . Fo r example , i f
F i s th e field o f rationa l number s an d ou r proble m i s t o solv e x 2 — 2 = 0 , i t i s
not necessar y t o exten d F a s fa r a s th e rea l numbers . Wha t i s th e smalles t field
between F an d E whic h contain s al
The required field certainl y mus t contai n ever y elemen t o f the for m
c nan
(p(a) = co + C\OL H h
where 0 ( x ) e F(JC) . Le t u s determin e ho w thes e particula r elements , th e polyno mials i n ot over F, equate , add , an d multiply .
Suppose </)(a) = \j/(a). The n (f>(x) = ^(x) ha s th e root or, whence
(Lemma 4.3) , o r
(f){x) = y/(x) (mo d p(x)).
Conversely, i f </)(x) — \jr{x) = p(x)q(x), the
(j)(a)-i/(a) =
n
0.
Thus w e hav e prove d
(p(a) = \//(a) O <f>(x) = xj/(x) (mo d p(x)).
The rule s fo r additio n an d multiplicatio n o f thes e element s ar e obviousl y th e
same as for polynomials. Thu s we see that the set consisting of the elements 4> («) i s
isomorphic t o the se t of the residue classe s (mo d p{x)) o f the polynomials ove r F.
4.3. ISOMORPHISM
41
4.3. Isomorphis m
The notion of isomorphism has already been touched upon here and there in the
text. We have mentioned an "essential sameness" of two mathematical Systems . It
has been implied that two Systems which are isomorphic differ in no important way;
Operations on the elements of one are "the same as" Operations on the elements of
the other. Th e purpose of this section is to replace this descriptive terminology by
a precise formulation .
Implicit i n the idea o f th e "essentia l sameness " of tw o set s is the knowledg e
that each element of one set has an "image" in the other. Specifically, conside r two
sets S and T. Th e set S is mapped into the set T if to each s e S there corresponds
ar e T, th e image of s. Th e Statements "S i s mapped into T" an d "f i s the image
of s" are denoted by
S — > T an d s — > t,
respectively. A mapping is nothing more than a single-valued functio n wit h arguments in the set S and values in the set T. W e could have written t = f(s) instea d
of s - > t.
If ever y elemen t o f T i s a n imag e fo r som e elemen t o f S , w e sa y tha t S i s
mapped onto T an d write S — > T.
EXAMPLE. Fo r the set S take a group G. Le t the elements of T b e the cosets
of som e subgroup H c G. B y means of the mapping f(x)=xH,G i
s mapped
onto the cosets of H. Fo r an isomorphism we require more, as this example shows.
Different element s of S should have distinct images. Thus if s\ ^ S2
From a mapping 5 — > T of this kind we can derive an inverse mapping T — > S.
For any t e T ther e is a Single s e S suc h that s— > t. Fo r the inverse mappin g
take t -+ s. Th e mapping has furnished a method of pairing off th e elements of S
and T. I n other words there is a one-to-one correspondence between the elements
of S an d th e element s o f T. I n thi s cas e w e sa y tha t th e mappin g i s 1 - 1 (read :
"one-to-one" for "1 -1 ". )
EXAMPLES. Th e ordinary Photographie imag e of a three-dimensional objee t
does not provide a 1 - 1 mapping , if , say , the objeet i s transparent. Tak e p(x), a n
irreducible polynomial over a field F, and let a be one of its roots in some extension
field E D F. Fo r S take the se t consisting o f al l the elements o f E whic h deriv e
from additio n an d multiplicatio n o f a wit h th e element s o f F\ S i s th e se t o f al l
polynomials (p(a) over F. Fo r T take the set of residue classes T^ X) wher e \j/{x) e
T^x) mean s i/(x) = (f>(x) (mo d p(x)). W e have shown that
(j)(a) = \l/(a) => 4>{x) = yjr{x) (mo d p{x)).
From this Statement it is easy to see that there is a 1-1 correspondence between the
elements of S and T, namely , </>(cc) ^ > T^x). Furthermore , we have
42 4
. RESIDUE CLASSES, EXTENSION FIELDS, AND ISOMORPHISM S
and a simila r resul t hold s fo r multiplication . Thu s th e sum (or product) o f the
images of two elements is the image of the sum (or product) of the elements. This
is what is meant by the essential sameness of two fields.
What do we mean when we say that two mathematical System s have the same
structure? Before we answer this question it is necessary to specify wha t we mean
by a mathematical System . A mathematical System is concerned with fundamenta l
elements of various classes Si, S2,... . Everything eise is defined in terms of these
elements. I n analysis , th e fundamental element s ar e real numbers ; i n geometry,
points, lines , planes, Fo
r simplicity, le t us assume tha t the fundamental ele ments are all of one dass S. Relations are defined for the elements of S. A relation
R(x\, X2, .. • , xn) i s a Statement involving the elements x\, x 2,..., x n. W e do not
mean to imply by this notation tha t the number of elements in a relation is finite.
For example, the Statement that a sequence of real numbers has a limit is a perfectly
good relation.
To write a relation for specific elements does not mean that it is true.
EXAMPLES. I f S i s the set of integer s an d R(x\,x 2) mean s x\ = x 2 the n
R(5, 5 ) is true bu t R(5, 7) is not. Labe l th e vertices o f a Square in accord wit h
the diagra m o n page 8 . Defin e R(x, y) t o mean tha t th e vertices x, y ar e adja cent. /?(1 , 3) i s false , bu t /?(1 , 4) i s true . A n Operatio n ca n be considere d a s
a relatio n connectin g thre e elements . Fo r example, th e Operation o f multiplica tion can be considered completely in terms of the relation R(a,b,c) whic h means
a-b = c. The relations of a mathematical System are defined by their special properties. Thes e ma y be clumsy t o write down. Fo r example, th e special propertie s
of multiplication in a group are given by the postulates on page 3. Using the notation abov e we see that the third postulate gives the property R(e, a, a) is true. A
mathematical System, then, consists of elements and relations defined among these
elements.
Two mathematical System s S and T ar e said to be isomorphic if there is a 1-1
correspondence betwee n the elements an d relations o f S and T suc h that truth of
a relatio n i n one System implie s trut h o f the corresponding relatio n i n the other
System, and falsity o f a relation in one System implies falsity o f the corresponding
relation in the other.
EXAMPLES. Fo r both S and T take the set of real numbers. Let R(x, y) be the
relation x < y for S. For the corresponding relation R r{x', y f) i n T take x' > y''. S
can be mapped isomorphically on T by the transformation x' — — x.
EXERCISE 4. Le t S be the set of vertices of the cube with the relation R(x, y),
x an d y hav e a n edge in common. Sho w tha t thi s i s isomorphic t o the set T of
faces of the octahedron where the relation R'(x', y r) means that the faces x', y' are
adjacent. (Se e Figure 4.1.) Label the faces of the octahedron accordingly .
For the projective plan e take the relation R(P,l) t o mean that the point P is
on the line l. T o set up an isomorphism between two planes, use central projection
from a point outsid e both. R(P,l) <& R(P',l f). Fo r ordinary Euclidea n plane s
only parallel projection wil l give an isomorphism.
4.3. ISOMORPHIS M
43
FIGURE4.1
An automorphism is an isomorphism of S with itself. Hence an automorphism
is a 1 -1 mapping of a System onto itself which preserves the validity of the relation s
among its elements. Ever y Syste m has at least one automorphism—the identity —
the mapping which takes each element into itself.
EXAMPLES. Conside r the automorphisms o f the cube which preserve the validity o f th e relatio n R(x, y) : x, y ar e adjacent . Th e 90 ° rotatio n whic h take s
the vertices 1 ,2,3,4,5,6,7, 8 into the vertices 2,6,7,3,1,5,8,4 i s just such an automorphism. We denote it by
1 23 4 5 6 7 8
26 7 3 1 58 4
or simply by (26731584). 2 Thes e automorphisms ar e called the symmetries of the
cube and, as the nomenclature suggests, they are strongly connected with its regulär
geometric properties.
EXERCISE 5 . Determin e th e 4 8 automorphism s o f th e cube . I n th e se t o f
integers let R(x, y) mea n x < y. Th e translations
a— • a + n
are all possible automorphisms of this System. In the same set, we define R(a,b,c)
to mean a is between b and c.
EXERCISE 6 . Fin d al l the automorphism s o f the integers whic h preserve th e
validity of the relation R(a,b,c).
In addition to these examples, we introduce two important illustrations o f numerical Systems:
(1) Th e only automorphisms of the integers which preserve addition are identity and the mapping x — • — x.
PROOF: Pu t 0' = a. The n 0 + 0 = 0 =>• a + a = a, and consequently, a = 0.
Furthermore, either l ' = l o r l ' = —1 . Fo r put V = b. It follows tha t
2' = ( 1 + 1 ) ' = b + b = 2b, y
= ( 1 + 2Y = b + 2b = 3b, etc .
Hence every integer i s a multiple o f b. Therefor e b can only be one of the units,
either+1 o r - 1. D
Compare the notation o n page 8.
44
4. RESIDUE CLASSES, EXTENSION FIELDS, AN D ISOMORPHISMS
(2) Th e field of real numbers possesses no automorphisms other than identity.
PROOF: Th e elements 0 and 1 remain fixed in the automorphisms of any field.
For clearly 0' = 0 and therefore V = a ^ 0 . Consequently ,
l-l = l=^ö- a = a, a
^ 0 =>- a = 1 .
It follows that the integers remain fixed. For
2 = 1 + 1 -* 1 + 1 = 2 , 3
= 2 + l - > 2 + l = 3 , etc. ,
and furthermor e
n + (-w ) = 0 ,
v
As an immediate consequenc e th e rational number s mus t als o remain fixed: Th e
number x — m/n satisfie s the relation nx = m. Pu t x' — y
nf = n
m —m
=> ny — m.
The Solution of this equation is unique. Consequently, x map s onto itself. D
The ordering of the real numbers is not changed by automorphism, i.e.,
a < b => a! < b\
In derivin g thi s resul t w e may no t use limitin g relation s sinc e w e ar e concerne d
only wit h the field relations, additio n an d multiplication, an d it is only these that
need be preserved. W e assert that limiting relations i n a field of numbers ar e not
always preserved by automorphism. I n proof of this assertion we offer th e following:
EXAMPLE. Le t F be the field of numbers a + b\flL where a and b are rational.
We use the result of
EXERCISE7. Sho w that the mapping a+b^/2 - > a—b«j2is a n automorphism
of F (i.e. , show that the mapping preserves addition and multiplication properties).
This automorphism leave s the rational elements fixed. W e can therefore approxi mate +Jl as closely a s we please by fixed elements 1 , 1 .4 , 1 .41 , 1 .41 4 , Eve
n
so, A/ 2 is not fixed. We see that continuity properties ar e not preserved in this automorphism o f the field. Nevertheless , i t is not difficult t o prove the contrary fo r
the field of real numbers.
It is sufficient t o show that a > 0 => a! > 0 . To this end we use: a > 0 O 3b
such that b 2 = a. Henc e
a > 0 =^ a = b • b => o! = b' • b' =» a! > 0.
Consequently,
c> d = * c + (-d) > 0 => c + {-\)d' > 0 =» c 7 ></ 7.
Thus, in any automorphism of the field of real numbers the order of the elements is
preserved. An y real number is uniquely define d b y its inequalities with respect to
45
4.3. ISOMORPHISM
the rational numbers (Dedekind cut). Since the rational numbers remain fixed, it is
clear that each element can go only into itself. The only automorphism is identity.
The field of complex numbers, on the other hand, has at least one nonidentical
automorphism, a + bi o a — bi. I n fact, the set of automorphisms of the complex
number field has the cardinal number
22*0.
Let p{x) b e an irreducible polynomial ove r a field F, an d E b e an extensio n
field of F whic h contains a root a of p(x). W e denote the smallest extension field
between E an d F by F(a), E c F(a) c F. I t has been demonstrated (p . 41) that
F(a) i s isomorphic to the field of residue classes of polynomials mo d p(x) unde r
the mappin g (j)(a) <- > 0(JC) . I f d[p(x)] = n + 1 , it i s unnecessar y t o conside r
polynomials of degree greater than n (by the proof on p. 38). Thus any element of
F(a) ca n be written in the form
Co + c\ot H V
c n an
where Co , c i , . . ., c n e F. Th e su m of two suc h elements i s a t once o f th e sam e
form. Th e product can be handled by the method of the followin g
EXAMPLE. Th e polynomial p(x) = x 5 — x — 1 is irreducible ove r the field
R o f rational numbers. Henc e if a i s a root of p(x), al l elements of R(a) ma y be
written as above:
Co + c\(x + - • • 4- C4G? 4.
The product o f tw o suc h elements i s a polynomial i n a o f degree < 8 . I t can be
reduced to the prescribed form by means of the rules
a5=:l+a, a
6
= a + a 2, a
7
= a 2 + a 3, a
8
=a3+a4.
This method is applicable to any irreducible polynomial.
The onl y difficult y i n th e demonstratio n o f th e field properties o f F(a) lie s
in writing the quotient ^(a)/x//(a), x//(a) 7 ^ 0, as a polynomial o f the prescribed
form. Pu t
(f)(a)/\lf(a) = Co + C\OL H h
c nan,
where the coefficients co , c\, . .., c
n
ar e to be determined. We write
(j)(a) = xl/(a)(c 0 + C\OL H h
c nan).
If the right side is reduced according to the rules for multiplication of two elements,
we obtain
<p(a) = L 0 + L\OL H h
L nan,
where Lo , L i , . . ., L n ar e linear combination s o f th e c t. Equatin g correspondin g
coefficients w e obtai n n linea r equation s i n n unknowns . Thi s Syste m o f linea r
equations always has a Solution since the System of homogeneous equations given
by <j>(a) = 0 ha s only the trivial Solution
co = c\ = • • • = c n = 0 .
We have obtained a method o f handling Operation s o n the elements o f F(a).
This treatment is based on the assumption tha t a i s the root of a given irreducibl e
46
4. RESIDUE CLASSES, EXTENSION FIELDS, AN D ISOMORPHISMS
polynomial. A s yet w e have hardly an y criteria fo r determinin g whethe r a given
polynomial is irreducible. We will formulate suc h criteria later.
8 . Sho w tha t x 5 — x — 1 is irreducibl e ove r th e field of rationa l
numbers. This result can be derived fro m
EXERCISE
9 . Prov e tha t a polynomial wit h intege r coefficient s possesse s a
factorization int o polynomials with integer coefficients provide d it can be factored
at all.
EXERCISE
We have considered several specific example s of isomorphism between fields.
Let u s no w analyz e thi s Situatio n i n complet e abstrac t generality . Assum e tw o
fields, F an d F , t o be given together wit h a n isomorphism whic h assign s to any
a e F a n ä e F. Thu s
a + b = c<&ä + b = c, a
-b = c <S> a - b — c.
Therefore
0 + 0 = 0=^ 0 + 0 = Ö.
Hence Ö is the zero element of F. I n the same way we see that 1 i s the unit element
of F. Fro m these results w e shall see that subtraction an d division properties ar e
also preserved. We have
a
+ (-a)=0, a
+ (~a) = Ö,
whence
—a = (— a).
Consequently,
(
a\ -
,
a
The isomorphis m betwee n F an d F ca n b e extende d i n a very natura l wa y
to the rings F(x) an d F(x) o f polynomials, ove r the respective fields. Give n the
polynomial
f(x) = ÜQ + a\x H h
a nxn,
define
f(x) = äo + ä\x + • • • + a nxn.
Thus we have provided a 1 - 1 correspondenc e betwee n th e elements o f F{x) an d
F(x). I t is an easy matter to show that the isomorphism between F an d F extend s
to F(x) an d F(x), i.e. , to show that
f(x) + g(x)**f(x) +
g(x),
f(x) -g(x) ** f(x) - g(x).
The usual properties of polynomials all go over in this way; e.g., a polynomial has
the same degree as its image. Th e irreducibility o f polynomials i s preserved. Fo r
suppose p(x) i s irreducible and p(x) i s not. Then we would have
p(x) = ä(x) • b{x) => > p{x) = a(x) • b(x),
47
4.3. ISOMORPHISM
which contradicts the hypothesis. W e have shown that F an d F behav e in exactly
the sam e way. Th e differenc e betwee n F an d F i s like a difference o f color— a
very unessential distinction for fields.
THEOREM 4.6 Let p(x) be an irreducible polynomial over F, and p(x) the corresponding polynomial in the isomorphic field F. Let a and ä, respectively, be
roots (obtained by any means whatsoever) ofthese polynomials. The isomorphism
between F and F can then be "extended" to the fields F(a) and F(ä). (The mapping of F(a) on F(ä) is called an extension ofthe mapping ofthe groundfields if
it contains the given correspondence between the elements of F and F.)
PROOF:
Th e elements of F(a) ar e polynomials
9 = c 0 + C\OL H h
c
n-ia
n
~\
where n = d[p(x)]. Ma p these elements onto the corresponding elements of F(ä) ,
6 = co + c\ä -\ h
c n-iän~l.
0\ + 62 = 0\ + 62.
For multiplication, however , w e may have to reduce the degree. Pu t 0\ = 0i(a) ,
02 = <fo(a) wher e 3[0i(x)], dlfaix)] < n:
0i(*) • <h(x) = q(x)p(x) + r(x), d[r(x)]
< n,
whence
01 (a) -02(a ) = r(a).
We must prove
01 (ä) -0 2 (ä) =f(x).
Since F(x) an d F(x) ar e isomorphic, we have
01 CO • 02(^) = q(x)p(x) + r(x) = 4>i(x) • 0 2 (x) = q(x)p(x) + r(x).
Putting x — ä w e obtain
0i (ä) -0 2 (ä) = r(ä) ,
and the proof is complete. D
EXAMPLES. Le t us consider isomorphisms of the field R o f rational numbers
with itself. Th e only automorphism o f R i s identity. I t follows tha t any extension
of thi s mappin g b y mean s o f a root o f th e irreducible polynomia l p(x) leave s R
fixed.
(a) Tak e p(x) = x 2 — 2 . Thi s polynomial ha s the roots y/2 an d — \fl i n the
field of real numbers. R(\fl) consist s of elements of the form a + bojl, R(—V2)
of element s o f th e for m a — b*j2, a,b e R. Clearl y bot h extension s giv e th e
same field. B y Theore m 4. 6 thi s prove s a n automorphis m o f R(V2). W e hav e
demonstrated th e resul t o f Exercis e 7 . However , th e metho d doe s no t generall y
give an automorphism. Conside r the example:
48 4
. RESIDUE CLASSES , EXTENSION FELDS , AN D ISOMORPHISM S
(b) p(x) = x 3 — 2 . Le t a = \/ 2 b e the real cube root of 2, and ä on e of the
complex roots. R{y/l) consist s only of real elements but R(ä) contain s element s
which are complex. The fields R(ä) an d R(y/2) ar e isomorphic but clearly not the
same.
(c) Th e points of the complex plane corresponding to the nth root s of unity are
the vertices of a regulär n-sided polygon. This fact allows us to handle polygons in
a very convenient manner.
Consider, for example, the regulär polygon of 1 7 sides.
8
It
s vertices are given by the roots of the polynomial x17 — 1 ,
C
\ 8 whic h is reducible in R ( 1 is clearly a root). Factorin g out
jex
— 1 we obtai n th e polynomia l p(x) = 1 + x + x 2 +
/l •
• • + x 1 5 + x 1 6 . Assum e fo r th e present , withou t proof ,
that thi s polynomia l i s irreducible . Le t s b e th e comple x
number corresponding t o the first Vertex counterclockwis e
from 1 . Th e successive vertices are given by e, s2,..., e 1 6, s 1 1 = 1 . Th e first 1 6
powers o f s ar e obviousl y th e roots o f p(x). No w conside r th e extensio n fields
R(e) an d R(e 3). R(s 3) contain s e since (s 3)6 = s ls = s. Henc e #(£ 3 ) D R(s).
But R(s) D R(s 3) an d therefore R(s) = R(s 3). Th e isomorphism i s actually a n
automorphism which maps 6 e R(s) ont o 9 e R(e) wher e if
9 = Co + C\B + C 2S2 H h
Ci
1 5
58
then
9 = c 0 + c xs3 + c 2s6 + • • • + c l5en,
where each power of s abov e the 1 5 th is reduced by using s 1 1 — 1 and p(s) = 0 ;
for example,
(s 3 ) 11 = (s) 33 = (s) 1 6 = _ ( i + s + . . . + £ 1 5 ).
If, instea d o f £ 3, w e tak e 6: u wher e v ^ 0 (mo d 1 7 ) w e ca n find an x suc h tha t
vx = = 1 (mod 1 7) . Henc e (s y)* = s . Therefore , b y the same reasoning a s above,
each value of v gives a different automorphis m of the field R(s). I t will be shown
that the nature of these 1 6 automorphisms permits us to see that the polygon of 1 7
sides possesse s a construction i n th e Euclidean sense . B y th e sam e methods w e
will be able to see that no construction exists for the 1 3-side d polygon.
http://dx.doi.org/10.1090/cln/015/05
CHAPTER 5
Galois Theory
5.1. Splittin g Fields
Let f(x) b e any polynomial over a field F.
THEOREM 5.1 There is an extension field E D F such that f(x) is the product of
linear factors in E. It is then said that fix) split s in the field E.
PROOF: Th e polynomia l fix) possesse s a unique factorizatio n int o irredu cible factors in F. Thu s we may write
f(x) = c(x - ot\)(x - a 2) • • • (x - a r)pi(x)p2(x) •
• • p 5(x)
where a\, a 2, . . . , ot r ar e the roots o f f{x) i n F an d p\(x), p 2ix), • • •, p s(x) a r e
the irreducible factors o f degree higher than 1 .
If s = 0 then f(x) split s i n F an d w e need g o no further. Otherwis e solv e
px (x) = 0 in any extension field. Le t a be a root of p\ (x). I n the field F(a), p\ ix)
has a linear factor ,
p\(x) = (x -a)q(x).
Now take F(a) a s the ground field and factor f(x) i n F(a). Th e new factorizatio n
possesses a t least on e additiona l linea r factor , namel y x — a, an d perhaps more .
If f{x) split s i n JF(QO , F(a ) i s the desired extensio n field . I f no t we may repea t
the argument and obtain an extension field of Fia) i n which fix) ha s at least one
additional factor. Clearly , the process terminates, and we arrive in a finite numbe r
of steps at a field E i n which fix) split s into linear factors. D
An extension field E D F which is obtained by this method is called a Splitting
field o f fix) ove r F.
THEOREM 5.2 Let fix) be any polynomial in F and Q any extension field Q D F,
in which fix) can be split into linear factors,
fix) = c(x - <xi)(x - a 2) • • • ix - a n).
The smallest field in Qfor which fix) splits is the field E obtained by the method
of Theorem 5.1.
PROOF: I f there is a field £ , Q D E D F, i n which fix) splits , then E mus t
contain th e elements a\, a 2, . . . , a n. Sinc e E contain s F, i t contains al l possibl e
49
50
5. GALOIS THEOR Y
combinations o f sums and products of the a t wit h the elements of F\ i.e. , E con tains all polynomials i n the a*. 1 I f this set of polynomials is a field—and w e shall
prove that it is—it is certainly the smallest Splitting field of f(x) betwee n Q and F.
Consider the set of all polynomials
0(ai,a 2 , ...,«* )
with coefficients i n F. W e now prove that this is the field E o f the previous theorem. Since ot\ i s algebraic over F the polynomials (j>{ot\) over F form a field F(pt\).
Furthermore, sinc e F C F(a\), a 2 i s algebraic in F(pt\). (I t satisfies th e equation
f(x) = 0 over F(qt\).) Therefore , th e set of al l polynomials i n a 2 whos e coeffi cients are elements of F(ct\) —that is , polynomials in ct\—form a field F(ct\, a 2).
It follows by induction that
E = F(ai,a 2, . . . , a
w),
the set of all polynomials in the a t. Not e that this field is, in fact, th e same as the
field obtained in Theorem 5.1 . D
The field E i s called the Splitting field of f(x) betwee n F an d Q.
EXAMPLE. Le
t F b e the field R of rational numbers, Q the field of real num-
bers. Take
(x 2 - 2)(x 2 - 3 ) = (J C + V2)(x - y/2)(x + V3)(x - V3) .
f(x) =
Clearly, E = / f (V2, A/3 ) an d hence consists of elements
(a + bVl) + (c + dV2)V3 = a + bV2 + cV?> + dV6,
a, b, c,d e R.
The dimension o f th e vector spac e E o f th e polynomials 2 in the GL[ ove r F i s
called the degree of E ove r F. Thu s the degree of 7?(\/2, V3) i s at most 4.
EXERCISE 1 . Sho w that the degree of R(Vl, y/3) ove r R is exactly 4.
THEOREM Let /(JC) be a polynomial over F. Any two Splitting fieldsoff(x) over
F are isomorphic.
We shall prove this result in the more general form :
THEOREM 5.3 If f(x) is any polynomial over F and f(x) is the corresponding
polynomial over an isomorphic field F, and if E is the Splitting field of f(x), E of
f{x), then the isomorphism between F and F can be extended to E and E.
PROOF: Writ e the factorization o f f(x) int o irreducible factors ove r F:
f(x) = c(x - a\)(x -a 2)--'(x- a
r)pi(x)p2(x)
•
• • p s(x)
A polynomial i n two variables JC , y is a polynomial i n y whos e coefficients ar e polynomials i n
x. A polynomial i n the n + 1 variable s x\, x 2,..., x n+\ i s a polynomial i n xn±\ whos e coefficient s
are polynomials i n the n variables x\, x 2,..., x n.
Cf. exampl e on p. 16.
5.2. AUTOMORPHISMS O F THE SPLITTIN G FIEL D
51
where the pi (x) ar e the irreducible factors o f degree higher than 1 . Since F an d F
are isomorphic this gives the factorizatio n
f(x) = c(x - ä x)(x -ä 2)'-(x- ä
r)pl(x)p2(x)
•
• • p s(x)
of f(x) int o irreducible polynomials over F. Le t n be the degree of f(x) an d r the
number of linear terms in the factorization. I n the proof o f the theorem we use an
induction in the following form :
If a theorem i s true for n an d if the truth o f the theorem for r + 1 implies its
truth for r, then it is true for all r < n.
If r = n, th e polynomial f(x) split s into linear factors i n F. Moreover , f(x)
splits in F i n exactly the same way. Consequently , E — F an d E — F. W e have
established the first step in the induction.
Assume that the theorem has been proved for polynomials having at least r + 1
linear factors , r < n. Suppos e no w tha t f(x) ha s r linea r factor s i n F. Sinc e
P\{x) split s in E, ß\(x) i n E, the y have roots ot r+\ e E, a r+\ e E, respectively .
Construct th e extension fields F(a r+\) an d F(ot r+\) an d extend th e isomorphis m
of F an d F to these fields by means of the transformation a r+\ <+ öT+ T (Theorem
4.6, Chapte r 4). Sinc e the isomorphism o f F(a r+\) an d F(ö^T ) contain s tha t of
F an d F, th e mapping f(x) ^ > f(x) i s retained. F(a r+\) an d F(ct^\) ar e now
taken a s the ground fields. W e again factor f(x) an d f(x) bu t now we obtain a t
|
2020-03-29 09:49:24
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8734453916549683, "perplexity": 4355.62848014678}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370494064.21/warc/CC-MAIN-20200329074745-20200329104745-00399.warc.gz"}
|
https://www.ijcai.org/Proceedings/2018/565
|
# Extracting Action Sequences from Texts Based on Deep Reinforcement Learning
## Wenfeng Feng, Hankz Hankui Zhuo, Subbarao Kambhampati
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 4064-4070. https://doi.org/10.24963/ijcai.2018/565
Extracting action sequences from texts is challenging, as it requires commonsense inferences based on world knowledge. Although there has been work on extracting action scripts, instructions, navigation actions, etc., they require either the set of candidate actions be provided in advance, or action descriptions are restricted to a specific form, e.g., description templates. In this paper we aim to extract action sequences from texts in \emph{free} natural language, i.e., without any restricted templates, provided the set of actions is unknown. We propose to extract action sequences from texts based on the deep reinforcement learning framework. Specifically, we view selecting'' or eliminating'' words from texts as actions'', and texts associated with actions as states''. We build Q-networks to learn policies of extracting actions and extract plans from the labeled texts. We demonstrate the effectiveness of our approach on several datasets with comparison to state-of-the-art approaches.
Keywords:
Natural Language Processing: Natural Language Processing
Planning and Scheduling: Activity and Plan Recognition
Planning and Scheduling: Planning with Incomplete information
Machine Learning: Deep Learning
|
2020-02-19 18:56:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6029726266860962, "perplexity": 4185.034742984217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144167.31/warc/CC-MAIN-20200219184416-20200219214416-00256.warc.gz"}
|
https://labs.tib.eu/arxiv/?author=A.C.%20Robin
|
• ### Wrinkles in the Gaia data unveil a dynamically young and perturbed Milky Way disk(1804.10196)
April 26, 2018 astro-ph.GA
Most of the stars in our Galaxy including our Sun, move in a disk-like component and give the Milky Way its characteristic appearance on the night sky. As in all fields in science, motions can be used to reveal the underlying forces, and in the case of disk stars they provide important diagnostics on the structure and history of the Galaxy. But because of the challenges involved in measuring stellar motions, samples have so far remained limited in their number of stars, precision and spatial extent. This has changed dramatically with the second Data Release of the Gaia mission which has just become available. Here we report that the phase space distribution of stars in the disk of the Milky Way is full of substructure with a variety of morphologies, most of which have never been observed before. This includes shapes such as arches and shells in velocity space, and snail shells and ridges when spatial and velocity coordinates are combined. The nature of these substructures implies that the disk is phase mixing from an out of equilibrium state, and that the Galactic bar and/or spiral structure are strongly affecting the orbits of disk stars. Our analysis of the features leads us to infer that the disk was perturbed between 300 and 900 Myr ago, which matches current estimations of the previous pericentric passage of the Sagittarius dwarf galaxy. The Gaia data challenge the most basic premise of stellar dynamics of dynamical equilibrium, and show that modelling the Galactic disk as a time-independent axisymmetric component is definitively incorrect. These findings mark the start of a new era when, by modelling the richness of phase space substructures, we can determine the gravitational potential of the Galaxy, its time evolution and the characteristics of the perturbers that have most influenced our home in the Universe.
• We highlight the power of the Gaia DR2 in studying many fine structures of the Hertzsprung-Russell diagram (HRD). Gaia allows us to present many different HRDs, depending in particular on stellar population selections. We do not aim here for completeness in terms of types of stars or stellar evolutionary aspects. Instead, we have chosen several illustrative examples. We describe some of the selections that can be made in Gaia DR2 to highlight the main structures of the Gaia HRDs. We select both field and cluster (open and globular) stars, compare the observations with previous classifications and with stellar evolutionary tracks, and we present variations of the Gaia HRD with age, metallicity, and kinematics. Late stages of stellar evolution such as hot subdwarfs, post-AGB stars, planetary nebulae, and white dwarfs are also analysed, as well as low-mass brown dwarf objects. The Gaia HRDs are unprecedented in both precision and coverage of the various Milky Way stellar populations and stellar evolutionary phases. Many fine structures of the HRDs are presented. The clear split of the white dwarf sequence into hydrogen and helium white dwarfs is presented for the first time in an HRD. The relation between kinematics and the HRD is nicely illustrated. Two different populations in a classical kinematic selection of the halo are unambiguously identified in the HRD. Membership and mean parameters for a selected list of open clusters are provided. They allow drawing very detailed cluster sequences, highlighting fine structures, and providing extremely precise empirical isochrones that will lead to more insight in stellar physics. Gaia DR2 demonstrates the potential of combining precise astrometry and photometry for large samples for studies in stellar evolution and stellar population and opens an entire new area for HRD-based studies.
• ### Gaia Data Release 2: Mapping the Milky Way disc kinematics(1804.09380)
April 25, 2018 astro-ph.GA
To illustrate the potential of GDR2, we provide a first look at the kinematics of the Milky Way disc, within a radius of several kiloparsecs around the Sun. We benefit for the first time from a sample of 6.4 million F-G-K stars with full 6D phase-space coordinates, precise parallaxes, and precise Galactic cylindrical velocities . From this sample, we extracted a sub-sample of 3.2 million giant stars to map the velocity field of the Galactic disc from $\sim$5~kpc to $\sim$13~kpc from the Galactic centre and up to 2~kpc above and below the plane. We also study the distribution of 0.3 million solar neighbourhood stars ($r < 200$~pc), with median velocity uncertainties of 0.4~km/s, in velocity space and use the full sample to examine how the over-densities evolve in more distant regions. GDR2 allows us to draw 3D maps of the Galactocentric median velocities and velocity dispersions with unprecedented accuracy, precision, and spatial resolution. The maps show the complexity and richness of the velocity field of the galactic disc. We observe streaming motions in all the components of the velocities as well as patterns in the velocity dispersions. For example, we confirm the previously reported negative and positive galactocentric radial velocity gradients in the inner and outer disc, respectively. Here, we see them as part of a non-axisymmetric kinematic oscillation, and we map its azimuthal and vertical behaviour. We also witness a new global arrangement of stars in the velocity plane of the solar neighbourhood and in distant regions in which stars are organised in thin substructures with the shape of circular arches that are oriented approximately along the horizontal direction in the $U-V$ plane. Moreover, in distant regions, we see variations in the velocity substructures more clearly than ever before, in particular, variations in the velocity of the Hercules stream. (abridged)
• Parallaxes for 331 classical Cepheids, 31 Type II Cepheids and 364 RR Lyrae stars in common between Gaia and the Hipparcos and Tycho-2 catalogues are published in Gaia Data Release 1 (DR1) as part of the Tycho-Gaia Astrometric Solution (TGAS). In order to test these first parallax measurements of the primary standard candles of the cosmological distance ladder, that involve astrometry collected by Gaia during the initial 14 months of science operation, we compared them with literature estimates and derived new period-luminosity ($PL$), period-Wesenheit ($PW$) relations for classical and Type II Cepheids and infrared $PL$, $PL$-metallicity ($PLZ$) and optical luminosity-metallicity ($M_V$-[Fe/H]) relations for the RR Lyrae stars, with zero points based on TGAS. The new relations were computed using multi-band ($V,I,J,K_{\mathrm{s}},W_{1}$) photometry and spectroscopic metal abundances available in the literature, and applying three alternative approaches: (i) by linear least squares fitting the absolute magnitudes inferred from direct transformation of the TGAS parallaxes, (ii) by adopting astrometric-based luminosities, and (iii) using a Bayesian fitting approach. TGAS parallaxes bring a significant added value to the previous Hipparcos estimates. The relations presented in this paper represent first Gaia-calibrated relations and form a "work-in-progress" milestone report in the wait for Gaia-only parallaxes of which a first solution will become available with Gaia's Data Release 2 (DR2) in 2018.
• Context. The first Gaia Data Release contains the Tycho-Gaia Astrometric Solution (TGAS). This is a subset of about 2 million stars for which, besides the position and photometry, the proper motion and parallax are calculated using Hipparcos and Tycho-2 positions in 1991.25 as prior information. Aims. We investigate the scientific potential and limitations of the TGAS component by means of the astrometric data for open clusters. Methods. Mean cluster parallax and proper motion values are derived taking into account the error correlations within the astrometric solutions for individual stars, an estimate of the internal velocity dispersion in the cluster, and, where relevant, the effects of the depth of the cluster along the line of sight. Internal consistency of the TGAS data is assessed. Results. Values given for standard uncertainties are still inaccurate and may lead to unrealistic unit-weight standard deviations of least squares solutions for cluster parameters. Reconstructed mean cluster parallax and proper motion values are generally in very good agreement with earlier Hipparcos-based determination, although the Gaia mean parallax for the Pleiades is a significant exception. We have no current explanation for that discrepancy. Most clusters are observed to extend to nearly 15 pc from the cluster centre, and it will be up to future Gaia releases to establish whether those potential cluster-member stars are still dynamically bound to the clusters. Conclusions. The Gaia DR1 provides the means to examine open clusters far beyond their more easily visible cores, and can provide membership assessments based on proper motions and parallaxes. A combined HR diagram shows the same features as observed before using the Hipparcos data, with clearly increased luminosities for older A and F dwarfs.
• ### Population synthesis to constrain Galactic and Stellar Physics- I- Determining age and mass of thin-disc red-giant stars(1702.01769)
Feb. 6, 2017 astro-ph.GA, astro-ph.SR
The cornerstone mission Gaia, together with complementary surveys, will revolutionize our understanding of the formation and history of our Galaxy, providing accurate stellar masses, radii, ages, distances, as well as chemical properties for a very large sample of stars across different Galactic stellar populations.Using an improved population synthesis approach and new stellar evolution models we attempt to evaluate the possibility of deriving ages and masses of clump stars from their chemical properties.A new version of the Besancon Galaxy model (BGM) uses new stellar evolutionary tracks computed with STAREVOL.These provide chemical and seismic properties from the PMS to the early-AGB.For the first time, the BGM can explore the effects of an extra-mixing occurring in red-giant stars.In particular we focus on the effects of thermohaline instability on chemical properties as well as on the determination of stellar ages and masses using the surface [C/N] abundance ratio.The impact of extra-mixing on 3He, 12C/13C, N, and [C/N] abundances along the giant branch is quantified.We underline the crucial contribution of asteroseismology to discriminate between evolutionary states of field giants belonging to the Galactic disc.The inclusion of thermohaline instability has a significant impact on 12C/13C,3He as well as on the [C/N] values.We show the efficiency of thermohaline mixing at different metallicities and its influence on the determined stellar mass and age from the observed [C/N] ratio.We then propose simple relations to determine ages and masses from chemical abundances according to these models.We emphasize the usefulness of population synthesis tools to test stellar models and transport processes inside stars.We show that transport processes occurring in red-giant stars should be taken into account in the determination of ages for future Galactic archaeology studies.(abridged)
• ### Gaia Data Release 1: Catalogue validation(1701.00292)
Before the publication of the Gaia Catalogue, the contents of the first data release have undergone multiple dedicated validation tests. These tests aim at analysing in-depth the Catalogue content to detect anomalies, individual problems in specific objects or in overall statistical properties, either to filter them before the public release, or to describe the different caveats of the release for an optimal exploitation of the data. Dedicated methods using either Gaia internal data, external catalogues or models have been developed for the validation processes. They are testing normal stars as well as various populations like open or globular clusters, double stars, variable stars, quasars. Properties of coverage, accuracy and precision of the data are provided by the numerous tests presented here and jointly analysed to assess the data release content. This independent validation confirms the quality of the published data, Gaia DR1 being the most precise all-sky astrometric and photometric catalogue to-date. However, several limitations in terms of completeness, astrometric and photometric quality are identified and described. Figures describing the relevant properties of the release are shown and the testing activities carried out validating the user interfaces are also described. A particular emphasis is made on the statistical use of the data in scientific exploitation.
• ### The microlensing rate and distribution of free-floating planets towards the Galactic bulge(1606.06945)
Aug. 16, 2016 astro-ph.EP
Ground-based optical microlensing surveys have provided tantalising, if inconclusive, evidence for a significant population of free-floating planets (FFPs). Both ground and space-based facilities are being used and developed which will be able to probe the distrubution of FFPs with much better sensitivity. It is vital also to develop a high-precision microlensing simulation framework to evaluate the completeness of such surveys. We present the first signal-to-noise limited calculations of the FFP microlensing rate using the Besancon Galactic model. The microlensing distribution towards the Galactic centre is simulated for wide-area ground-based optical surveys such as OGLE or MOA, a wide-area ground-based near-IR survey, and a targeted space-based near-IR survey which could be undertaken with Euclid or WFIRST. We present a calculation framework for the computation of the optical and near-infrared microlensing rate and optical depth for simulated stellar catalogues which are signal-to-noise limited, and take account of extinction, unresolved stellar background light and finite source size effects, which can be significant for FFPs. We find that the global ground-based I-band yield over a central 200 deg^2 region covering the Galactic centre ranges from 20 Earth-mass FFPs year^-1 up to 3,500 year^-1 for Jupiter FFPs in the limit of 100% detection efficiency, and almost an order of magnitude larger for a K-band survey. For ground-based surveys we find that the inclusion of finite source and the unresolved background reveals a mass-dependent variation in the spatial distribution of FFPs. For a space-based H-band covering 2 deg^2, the yield depends on the target field but maximizes close to the Galactic centre with around 76 Earth through to 1,700 Jupiter FFPs year^-1. For near-IR space-based surveys the spatial distribution of FFPs is found to be largely insensitive to the FFP mass scale.
• ### Chemical tagging with APOGEE: Discovery of a large population of N-rich stars in the inner Galaxy(1606.05651)
June 17, 2016 astro-ph.GA
Formation of globular clusters (GCs), the Galactic bulge, or galaxy bulges in general, are important unsolved problems in Galactic astronomy. Homogeneous infrared observations of large samples of stars belonging to GCs and the Galactic bulge field are one of the best ways to study these problems. We report the discovery by APOGEE of a population of field stars in the inner Galaxy with abundances of N, C, and Al that are typically found in GC stars. The newly discovered stars have high [N/Fe], which is correlated with [Al/Fe] and anti-correlated with [C/Fe]. They are homogeneously distributed across, and kinematically indistinguishable from, other field stars in the same volume. Their metallicity distribution is seemingly unimodal, peaking at [Fe/H]~-1, thus being in disagreement with that of the Galactic GC system. Our results can be understood in terms of different scenarios. N-rich stars could be former members of dissolved GCs, in which case the mass in destroyed GCs exceeds that of the surviving GC system by a factor of ~8. In that scenario, the total mass contained in so-called "first-generation" stars cannot be larger than that in "second-generation" stars by more than a factor of ~9 and was certainly smaller. Conversely, our results may imply the absence of a mandatory genetic link between "second generation" stars and GCs. Last, but not least, N-rich stars could be the oldest stars in the Galaxy, the by-products of chemical enrichment by the first stellar generations formed in the heart of the Galaxy.
• The Apache Point Observatory Galactic Evolution Experiment (APOGEE), one of the programs in the Sloan Digital Sky Survey III (SDSS-III), has now completed its systematic, homogeneous spectroscopic survey sampling all major populations of the Milky Way. After a three year observing campaign on the Sloan 2.5-m Telescope, APOGEE has collected a half million high resolution (R~22,500), high S/N (>100), infrared (1.51-1.70 microns) spectra for 146,000 stars, with time series information via repeat visits to most of these stars. This paper describes the motivations for the survey and its overall design---hardware, field placement, target selection, operations---and gives an overview of these aspects as well as the data reduction, analysis and products. An index is also given to the complement of technical papers that describe various critical survey components in detail. Finally, we discuss the achieved survey performance and illustrate the variety of potential uses of the data products by way of a number of science demonstrations, which span from time series analysis of stellar spectral variations and radial velocity variations from stellar companions, to spatial maps of kinematics, metallicity and abundance patterns across the Galaxy and as a function of age, to new views of the interstellar medium, the chemistry of star clusters, and the discovery of rare stellar species. As part of SDSS-III Data Release 12, all of the APOGEE data products are now publicly available.
• ### Quasi integral of motion for axisymmetric potentials(1508.01682)
Aug. 7, 2015 astro-ph.GA
We present an estimate of the third integral of motion for axisymmetric three-dimensional potentials. This estimate is based on a Staeckel approximation and is explicitly written as a function of the potential. We tested this scheme for the Besancon Galactic model and two other disc-halo models and find that orbits of disc stars have an accurately conserved third quasi integral. The accuracy ranges from of 0.1% to 1% for heights varying from z = 0~kpc to z= 6 kpc and Galactocentric radii R from 5 to 15kpc. We also tested the usefulness of this quasi integral in analytic distribution functions of disc stellar populations: we show that the distribution function remains approximately stationary and that it allows to recover the potential and forces by applying Jeans equations to its moments.
• ### ExELS: an exoplanet legacy science proposal for the ESA Euclid mission. II. Hot exoplanets and sub-stellar systems(1410.0363)
Oct. 1, 2014 astro-ph.SR, astro-ph.EP
The Exoplanet Euclid Legacy Survey (ExELS) proposes to determine the frequency of cold exoplanets down to Earth mass from host separations of ~1 AU out to the free-floating regime by detecting microlensing events in Galactic Bulge. We show that ExELS can also detect large numbers of hot, transiting exoplanets in the same population. The combined microlensing+transit survey would allow the first self-consistent estimate of the relative frequencies of hot and cold sub-stellar companions, reducing biases in comparing "near-field" radial velocity and transiting exoplanets with "far-field" microlensing exoplanets. The age of the Bulge and its spread in metallicity further allows ExELS to better constrain both the variation of companion frequency with metallicity and statistically explore the strength of star-planet tides. We conservatively estimate that ExELS will detect ~4100 sub-stellar objects, with sensitivity typically reaching down to Neptune-mass planets. Of these, ~600 will be detectable in both Euclid's VIS (optical) channel and NISP H-band imager, with ~90% of detections being hot Jupiters. Likely scenarios predict a range of 2900-7000 for VIS and 400-1600 for H-band. Twice as many can be expected in VIS if the cadence can be increased to match the 20-minute H-band cadence. The separation of planets from brown dwarfs via Doppler boosting or ellipsoidal variability will be possible in a handful of cases. Radial velocity confirmation should be possible in some cases, using 30-metre-class telescopes. We expect secondary eclipses, and reflection and emission from planets to be detectable in up to ~100 systems in both VIS and NISP-H. Transits of ~500 planetary-radius companions will be characterised with two-colour photometry and ~40 with four-colour photometry (VIS,YJH), and the albedo of (and emission from) a large sample of hot Jupiters in the H-band can be explored statistically.
• ### Constraining the thick disc formation scenario of the Milky Way(1406.5384)
July 30, 2014 astro-ph.GA
We study the shape of the thick disc using photometric data at high and intermediate latitudes from SDSS and 2MASS surveys. We use the population synthesis approach using an ABC-MCMC method to characterize the thick disc shape, scale height, scale length, local density and flare, and we investigate the extend of the thick disc formation period by simulating several formation episodes. We find that the vertical variation in density is not exponential, but much closer to an hyperbolic secant squared. Assuming a single formation epoch, the thick disc is better fitted with a sech2 scale height of 470 pc and a scale length of 2.3 kpc. However if one simulates two successive formation episodes, mimicking an extended formation period, the older episode has a higher scale height and a larger scale length than the younger episode, indicating a contraction during the collapse phase. The scale height decreases from 800 pc to 340 pc, and the scale length from 3.2 kpc to 2 kpc. The star formation increases from the old episode to the young one. During the fitting process, the halo parameters are also determined. The constraint on the halo shows that a transition between a inner and outer halo, if exists, cannot be at a distance of less than about 30 kpc, which is the limit of our investigation using turnoff halo stars. Finally, we show that extrapolating the thick disc towards the bulge region explains well the stellar populations observed there, that there is no longer need to invoke a classical bulge. To explain these results, the most probable scenario for the thick disc is that it formed while the Galaxy was gravitationally collapsing from well mixed gas-rich giant clumps sustained by high turbulence which awhile prevent a thin disc to form, as proposed by Bournaud et al. (2009). This scenario explains the observations in the thick disc region as well as in the bulge region. (abridged)
• ### Overview and stellar statistics of the expected Gaia Catalogue using the Gaia Object Generator(1404.5861)
April 23, 2014 astro-ph.GA, astro-ph.IM
Aims: An effort has been undertaken to simulate the expected Gaia Catalogue, including the effect of observational errors. A statistical analysis of this simulated Gaia data is performed in order to better understand what can be obtained from the Gaia astrometric mission. This catalogue is used in order to investigate the potential yield in astrometric, photometric and spectroscopic information, and the extent and effect of observational errors on the true Gaia Catalogue. This article is a follow-up to Robin et. al. (2012), where the expected Gaia Catalogue content was reviewed but without the simulation of observational errors. Methods: The Gaia Object Generator (GOG) catalogue is analysed using the Gaia Analysis Tool (GAT), producing a number of statistics on the catalogue. Results: A simulated catalogue of one billion objects is presented, with detailed information on the 523 million individual single stars it contains. Detailed information is provided for the expected errors in parallax, position, proper motion, radial velocity, photometry in the four Gaia bands, and physical parameter determination including temperature, metallicity and line of sight extinction.
• ### Gaia photometry for white dwarfs(1403.6045)
March 24, 2014 astro-ph.SR, astro-ph.IM
Context. White dwarfs can be used to study the structure and evolution of the Galaxy by analysing their luminosity function and initial mass function. Among them, the very cool white dwarfs provide the information for the early ages of each population. Because white dwarfs are intrinsically faint only the nearby (about 20 pc) sample is reasonably complete. The Gaia space mission will drastically increase the sample of known white dwarfs through its 5-6 years survey of the whole sky up to magnitude V = 20-25. Aims. We provide a characterisation of Gaia photometry for white dwarfs to better prepare for the analysis of the scientific output of the mission. Transformations between some of the most common photometric systems and Gaia passbands are derived. We also give estimates of the number of white dwarfs of the different galactic populations that will be observed. Methods. Using synthetic spectral energy distributions and the most recent Gaia transmission curves, we computed colours of three different types of white dwarfs (pure hydrogen, pure helium, and mixed composition with H/He= 0.1). With these colours we derived transformations to other common photometric systems (Johnson-Cousins, Sloan Digital Sky Survey, and 2MASS). We also present numbers of white dwarfs predicted to be observed by Gaia. Results. We provide relationships and colour-colour diagrams among different photometric systems to allow the prediction and/or study of the Gaia white dwarf colours. We also include estimates of the number of sources expected in every galactic population and with a maximum parallax error. Gaia will increase the sample of known white dwarfs tenfold to about 200 000. Gaia will be able to observe thousands of very cool white dwarfs for the first time, which will greatly improve our understanding of these stars and early phases of star formation in our Galaxy.
• ### Metallicity and kinematics of the bar in-situ(1401.1925)
Jan. 9, 2014 astro-ph.GA
Constraints on the Galactic bulge/bar structure and formation history from stellar kinematics and metallicities mainly come from relatively high-latitude fields (|b|>4) where a complex mix of stellar population is seen. We aim here to constrain the formation history of the Galactic bar by studying the radial velocity and metallicity distributions of stars in-situ (|b|<1). We observed red clump stars in four fields along the bar's major axis (l=10,6,-6 and b=0 plus a field at l=0,b=1) with low-resolution spectroscopy from VLT/FLAMES, observing around the CaII triplet. We developed robust methods for extracting radial velocity and metallicity estimates from these low signal-to-noise spectra. We derived distance probability distributions using Bayesian methods rigorously handling the extinction law. We present radial velocities and metallicity distributions, as well as radial velocity trends with distance. We observe an increase in the radial velocity dispersion near the Galactic plane. We detect the streaming motion of the stars induced by the bar in fields at l=+/-6, the highest velocity components of this bar stream being metal-rich ([Fe/H]~0.2 dex). Our data is consistent with a bar inclined at 26+/-3 from the Sun-Galactic centre line. We observe a significant fraction of metal-poor stars, in particular in the field at l=0,b=1. We confirm the flattening of the metallicity gradient along the minor axis when getting closer to the plane, with a hint that it could actually be inverted. Our stellar kinematics corresponds to the expected behaviour of a bar issued from the secular evolution of the Galactic disc. The mix of several populations, seen further away from the plane, is also seen in the bar in-situ since our metallicity distributions highlight a different spatial distribution between metal-poor and metal-rich stars, the more metal-poor stars being more centrally concentrated.
• ### A Sub-Earth-Mass Moon Orbiting a Gas Giant Primary or a High Velocity Planetary System in the Galactic Bulge(1312.3951)
Dec. 13, 2013 astro-ph.EP
We present the first microlensing candidate for a free-floating exoplanet-exomoon system, MOA-2011-BLG-262, with a primary lens mass of M_host ~ 4 Jupiter masses hosting a sub-Earth mass moon. The data are well fit by this exomoon model, but an alternate star+planet model fits the data almost as well. Nevertheless, these results indicate the potential of microlensing to detect exomoons, albeit ones that are different from the giant planet moons in our solar system. The argument for an exomoon hinges on the system being relatively close to the Sun. The data constrain the product M pi_rel, where M is the lens system mass and pi_rel is the lens-source relative parallax. If the lens system is nearby (large pi_rel), then M is small (a few Jupiter masses) and the companion is a sub-Earth-mass exomoon. The best-fit solution has a large lens-source relative proper motion, mu_rel = 19.6 +- 1.6 mas/yr, which would rule out a distant lens system unless the source star has an unusually high proper motion. However, data from the OGLE collaboration nearly rule out a high source proper motion, so the exoplanet+exomoon model is the favored interpretation for the best fit model. However, the alternate solution has a lower proper motion, which is compatible with a distant (so stellar) host. A Bayesian analysis does not favor the exoplanet+exomoon interpretation, so Occam's razor favors a lens system in the bulge with host and companion masses of M_host = 0.12 (+0.19 -0.06) M_solar and m_comp = 18 (+28 -100 M_earth, at a projected separation of a_perp ~ 0.84 AU. The existence of this degeneracy is an unlucky accident, so current microlensing experiments are in principle sensitive to exomoons. In some circumstances, it will be possible to definitively establish the low mass of such lens systems through the microlensing parallax effect. Future experiments will be sensitive to less extreme exomoons.
• ### CFBDS J111807-064016: A new L/T transition brown dwarf in a binary system(1309.1250)
Sept. 5, 2013 astro-ph.SR
Stellar-substellar binary systems are quite rare, and provide interesting benchmarks. They constrain the complex physics of substellar atmospheres, because several physical parameters of the substellar secondary can be fixed from the much better characterized main sequence primary. We report the discovery of CFBDS J111807-064016, a T2 brown dwarf companion to 2MASS J111806.99-064007.8, a low-mass M4.5-M5 star. The brown-dwarf was identified from the Canada France Brown Dwarf Survey. At a distance of 50-120 pc, the 7.7 arcsec angular separation corresponds to projected separations of 390-900 AU. The primary displays no Halpha emission, placing a lower limit on the age of the system of about 6 Gyr. The kinematics is also consistent with membership in the old thin disc. We obtained near-infrared spectra, which together with recent atmosphere models allow us determine the effective temperature and gravity of both components. From these parameters and the age constraint, evolutionary models estimate masses of 0.10 to 0.15 Msol for the M dwarf, and 0.06 to 0.07 Msol for the T dwarf. This system is a particularly valuable benchmark because the brown dwarf is an early T: the cloud-clearing that occurs at the L/T transition is very sensitive to gravity, metallicity, and detailed dust properties, and produces a large scatter in the colours. This T2 dwarf, with its metallicity measured from the primary and its mass and gravity much better constrained than those of younger early-Ts, will anchor our understanding of the colours of L/T transition brown dwarfs. It is also one of the most massive T dwarfs, just below the hydrogen-burning limit, and all this makes it a prime probe a brown dwarf atmosphere and evolution models.
• ### Three dimensional interstellar extinction map towards the Galactic Bulge(1211.3092)
Nov. 13, 2012 astro-ph.GA
Studies of the properties of the inner Galactic Bulge depend strongly on the assumptions about the interstellar extinction. Most of the extinction maps available in the literature lack the information about the distance. We combine the observations with the Besancon model of the Galaxy to investigate the variations of extinction along different lines of sight towards the inner Galactic bulge as a function of distance. In addition we study the variations in the extinction law in the Bulge. We construct color-magnitude diagrams with the following sets of colors: H-Ks and J-Ks from the VVV catalogue as well as Ks-[3.6], Ks-[4.5], Ks-[5.8] and Ks-[8.0] from GLIMPSE-II catalogue matched with 2MASS. Using the newly derived temperature-color relation for M giants that match better the observed color-magnitude diagrams we then use the distance-color relations to derive the extinction as a function of distance. The observed colors are shifted to match the intrinsic colors in the Besan\c{c}on model, as a function of distance, iteratively thereby creating an extinction map with three dimensions: two spatial and one distance dimension along each line of sight towards the bulge. Colour excess maps are presented at a resolution of 15' x 15' for 6 different combinations of colors, in distance bins of 1 kpc. The high resolution and depth of the photometry allows to derive extinction maps to 10 kpc distance and up to 35 magnitudes of extinction in Av (3.5 mag in Aks). Integrated maps show the same dust features and consistent values with the other 2D maps. Starting from the color excess between the observations and the model we investigate the extinction law in near-infrared and its variation along different lines of sight.
• ### The Milky Way's external disc constrained by 2MASS star counts(0812.3739)
Dec. 19, 2008 astro-ph
Context. Thanks to recent large scale surveys in the near infrared such as 2MASS, the galactic plane that most suffers from extinction is revealed and its overall structure can be studied. Aims. This work aims at constraining the structure of the Milky Way external disc as seen in 2MASS data, and in particular the warp. Methods. We use the Two Micron All Sky Survey (hereafter 2MASS) along with the Stellar Population Synthesis Model of the Galaxy, developed in Besancon, to constrain the external disc parameters such as its scale length, its cutoff radius, and the slope of the warp. In order to properly interpret the observations, the simulated stars are reddened using a three dimensional extinction map. The shape of the stellar warp is then compared with previous results and with similar structures in gas and dust. Results. We find new constraints on the stellar disc, which is shown to be asymmetrical, similar to observations of HI. The positive longitude side is found to be easily modelled with a S shape warp but with a slope significantly smaller than the slope seen in the HI warp. At negative longitudes, the disc presents peculiarities which are not well reproduced by any simple model. Finally, comparing with the warp seen in the dust, it seems to follow a slope intermediate between the gas and the stars.
• ### The large scale dust lanes of the Galactic bar(0711.2471)
Nov. 15, 2007 astro-ph
(abridged) By comparing the distribution of dust and gas in the central regions of the Galaxy, we aim to obtain new insights into the properties of the offset dust lanes leading the bar's major axis in the Milky Way. On the one hand, the molecular emission of the dust lanes is extracted from the observed CO l-b-V distribution according to the interpretation of a dynamical model. On the other hand, a three dimensional extinction map of the Galactic central region constructed from near-infrared observations is used as a tracer of the dust itself and clearly reveals dust lanes in its face-on projection. Comparison of the position of both independent detections of the dust lanes is performed in the (l, b) plane. These two completely independent methods are used to provide a coherent picture of the dust lanes in the Milky Way bar. In both the gas and dust distributions, the dust lanes are found to be out of the Galactic plane, appearing at negative latitudes for l > 0 deg and at positive latitudes for l < 0 deg. However, even though there is substantial overlap between the two components, they are offset from one another with the dust appearing to lie closer to the b = 0 deg plane. Two scenarios are proposed to explain the observed offset. The first involves grain destruction by the bar shock and reformation downstream. Due to the decrease in velocity caused by the shock, this occurs at lower z. The second assumes that the gas and dust remain on a common tilted plane, but that the molecular gas decouples from the Milky Way's magnetic field, itself strong enough to resist the shear of the bar's shock. The diffuse gas and dust remain coupled to the field and are carried further downstream. This second scenario has recently been suggested in order to explain observations of the barred galaxy NGC 1097.
• ### Loss of mass and stability of galaxies in MOND(0706.3703)
June 27, 2007 astro-ph
The self-binding energy and stability of a galaxy in MOND-based gravity are curiously decreasing functions of its center of mass acceleration towards neighbouring mass concentrations. A tentative indication of this breaking of the Strong Equivalence Principle in field galaxies is the RAVE-observed escape speed in the Milky Way. Another consequence is that satellites of field galaxies will move on nearly Keplerian orbits at large radii (100 - 500 kpc), with a declining speed below the asymptotically constant naive MOND prediction. But consequences of an environment-sensitive gravity are even more severe in clusters, where member galaxies accelerate fast: no more Dark-Halo-like potential is present to support galaxies, meaning that extended axisymmetric disks of gas and stars are likely unstable. These predicted reappearance of asymptotic Keplerian velocity curves and disappearance of "stereotypic galaxies" in clusters are falsifiable with targeted surveys.
• ### The stellar content of the COSMOS field as derived from morphological and SED based gtar/galaxy Separation(astro-ph/0612349)
Dec. 14, 2006 astro-ph
We report on the stellar content of the COSMOS two degree field, as derived from a rigorous star/galaxy separation approach developed for using stellar sources to define the point spread function variation map used in a study of weak galaxy lensing. The catalog obtained in one filter from the ACS (Advanced Camera for Survey on the Hubble Space Telescope) is cross-identified with ground based multi-wavelength catalogs. The classification is reliable to magnitude $F_{814W}=24$ and the sample is complete even fainter. We construct a color-magnitude diagram and color histograms and compare them with predictions of a standard model of population synthesis. We find features corresponding to the halo subdwarf main sequence turnoff, the thick disk, and the thin disk. This data set provides constraints on the thick disk and spheroid density laws and on the IMF at low mass. We find no evidence of a sharp spheroid edge out to this distance. We identify a blue population of white dwarfs with counts that agree with model predictions. We find a hint for a possible slight stellar overdensity at about 22-34 kpc but the data are not strong enough at present to claim detection of a stream feature in the halo (abridged).
• ### Optical spectroscopy of high proper motion stars: new M dwarfs within 10 pc and the closest pair of subdwarfs(astro-ph/0609433)
Sept. 15, 2006 astro-ph
We present spectra of 59 nearby stars candidates, M dwarfs and white dwarfs, previously identified using high proper motion catalogues and the DENIS database. We review the existing spectral classification schemes and spectroscopic parallax calibrations in the near-infrared $J$-band and derive spectral types and distances of the nearby candidates. 42 stars have spectroscopic distances smaller than 25 pc, three of them being white dwarfs. Two targets lie within 10 pc, one M8 star at 10.0 pc (APMPM J0103-3738), and one M4 star at 8.3 pc (LP 225-57). One star, LHS 73, is found to be among the few subdwarfs lying within 20 pc. Furthermore, together with LHS 72, it probably belongs to the closest pair of subdwarfs we know.
• ### Modelling the Galactic Interstellar Extinction Distribution in Three Dimensions(astro-ph/0604427)
April 20, 2006 astro-ph
The Two Micron All Sky Survey, along with the Stellar Population Synthesis Model of the Galaxy, developed in Besancon, is used to calculate the extinction distribution along different lines of sight. By combining many lines of sight, the large scale distribution of interstellar material can be deduced. The Galaxy model is used to provide the intrinsic colour of stars and their probable distances, so that the near infrared colour excess, and hence the extinction, may be calculated and its distance evaluated. Such a technique is dependent on the model used, however we are able to show that moderate changes in the model parameters result in insignificant changes in the predicted extinction. This technique has now been applied to over 64000 lines of sight, each separated by 15 arcmin, in the inner Galaxy (|l|<=100 deg, |b|<=10 deg). Using our extinction map, we have derived the main characteristics of the large scale structure of the dust distribution: scale height and warp of the ISM disc as well as the angle of the dust in the Galactic Bar. This resulting extinction map will be useful for studies of the inner Galaxy and its stellar populations.
|
2021-03-02 22:59:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.609748899936676, "perplexity": 1976.6193567665055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364932.30/warc/CC-MAIN-20210302221633-20210303011633-00179.warc.gz"}
|
https://www.shaalaa.com/textbook-solutions/c/frank-solutions-class-9-math-icse-chapter-4-expansions_2013
|
# Frank solutions for Class 9 Maths ICSE chapter 4 - Expansions [Latest edition]
## Chapter 4: Expansions
Exercise 4.1Exercise 4.2
Exercise 4.1
### Frank solutions for Class 9 Maths ICSE Chapter 4 Expansions Exercise 4.1
Exercise 4.1 | Q 1.1
Expand the following:
(a + 4) (a + 7)
Exercise 4.1 | Q 1.2
Expand the following:
(m + 8) (m - 7)
Exercise 4.1 | Q 1.3
Expand the following:
(x - 5) (x - 4)
Exercise 4.1 | Q 1.4
Expand the following:
(3x + 4) (2x - 1)
Exercise 4.1 | Q 1.5
Expand the following:
(2x - 5) (2x + 5) (2x- 3)
Exercise 4.1 | Q 2.1
Expand the following:
(a + 3b)2
Exercise 4.1 | Q 2.2
Expand the following:
(2p - 3q)2
Exercise 4.1 | Q 2.3
Expand the following:
(2"a" + 1/(2"a"))^2
Exercise 4.1 | Q 2.4
Expand the following:
(x - 3y - 2z)2
Exercise 4.1 | Q 3.1
Find the squares of the following:
9m - 2n
Exercise 4.1 | Q 3.2
Find the squares of the following:
3p - 4q2
Exercise 4.1 | Q 3.3
Find the squares of the following:
(7x)/(9y) - (9y)/(7x)
Exercise 4.1 | Q 3.4
Find the squares of the following:
(2a + 3b - 4c)
Exercise 4.1 | Q 4.1
Simplify by using formula :
(5x - 9) (5x + 9)
Exercise 4.1 | Q 4.2
Simplify by using formula :
(2x + 3y) (2x - 3y)
Exercise 4.1 | Q 4.3
Simplify by using formula :
(a + b - c) (a - b + c)
Exercise 4.1 | Q 4.4
Simplify by using formula :
(x + y - 3) (x + y + 3)
Exercise 4.1 | Q 4.5
Simplify by using formula :
(1 + a) (1 - a) (1 + a2)
Exercise 4.1 | Q 4.6
Simplify by using formula :
("a" + 2/"a" - 1) ("a" - 2/"a" - 1)
Exercise 4.1 | Q 5.1
Evaluate the following without multiplying:
(95)2
Exercise 4.1 | Q 5.2
Evaluate the following without multiplying:
(103)2
Exercise 4.1 | Q 5.3
Evaluate the following without multiplying:
(999)2
Exercise 4.1 | Q 5.4
Evaluate the following without multiplying:
(1005)2
Exercise 4.1 | Q 6.1
Evaluate, using (a + b)(a - b)= a2 - b2.
399 x 401
Exercise 4.1 | Q 6.2
Evaluate, using (a + b)(a - b)= a2 - b2.
999 x 1001
Exercise 4.1 | Q 6.3
Evaluate, using (a + b)(a - b)= a2 - b2.
4.9 x 5.1
Exercise 4.1 | Q 6.4
Evaluate, using (a + b)(a - b)= a2 - b2.
15.9 x 16.1
Exercise 4.1 | Q 7
If a - b = 10 and ab = 11; find a + b.
Exercise 4.1 | Q 8.1
If x + y = 9, xy = 20
find: x - y
Exercise 4.1 | Q 8.2
If x + y = 9, xy = 20
find: x2 - y2.
Exercise 4.1 | Q 9.1
If "a" + 1/"a" = 6;find "a" - 1/"a"
Exercise 4.1 | Q 9.2
If "a" + 1/"a" = 6;find "a"^2 - 1/"a"^2
Exercise 4.1 | Q 10.1
If "a" - 1/"a" = 10; find "a" + 1/"a"
Exercise 4.1 | Q 10.2
If "a" - 1/"a" = 10; find "a"^2 - 1/"a"^2
Exercise 4.1 | Q 11.1
If x + (1)/x = 3; find x^2 + (1)/x^2
Exercise 4.1 | Q 11.2
If x + (1)/x = 3; find x^4 + (1)/x^4
Exercise 4.1 | Q 12.1
If p + q = 8 and p - q = 4, find:
pq
Exercise 4.1 | Q 12.2
If p + q = 8 and p - q = 4, find:
p2 + q2
Exercise 4.1 | Q 13.1
If m - n = 0.9 and mn = 0.36, find:
m + n
Exercise 4.1 | Q 13.2
If m - n = 0.9 and mn = 0.36, find:
m2 - n2.
Exercise 4.1 | Q 14.1
If x + y = 1 and xy = -12; find:
x - y
Exercise 4.1 | Q 14.2
If x + y = 1 and xy = -12; find:
x2 - y2.
Exercise 4.1 | Q 15.1
If "a"^2 - 7"a" + 1 = 0 and a = ≠ 0, find :
"a" + (1)/"a"
Exercise 4.1 | Q 15.2
If "a"^2 - 7"a" + 1 = 0 and a = ≠ 0, find :
"a"^2 + (1)/"a"^2
Exercise 4.1 | Q 16.1
If a2 - 3a - 1 = 0 and a ≠ 0, find : "a" - (1)/"a"
Exercise 4.1 | Q 16.2
If a2 - 3a - 1 = 0 and a ≠ 0, find : "a" + (1)/"a"
Exercise 4.1 | Q 16.3
If a2 - 3a - 1 = 0 and a ≠ 0, find : "a"^2 - (1)/"a"^2
Exercise 4.1 | Q 17
If 2x + 3y = 10 and xy = 5; find the value of 4x2 + 9y2
Exercise 4.1 | Q 18
If x + y + z = 12 and xy + yz + zx = 27; find x2 + y2 + z2.
Exercise 4.1 | Q 19
If a2 + b2 + c2 = 41 and a + b + c = 9; find ab + bc + ca.
Exercise 4.1 | Q 20
If p2 + q2 + r2 = 82 and pq + qr + pr = 18; find p + q + r.
Exercise 4.1 | Q 21
If x + y + z = p and xy + yz + zx = q; find x2 + y2 + z2.
Exercise 4.2
### Frank solutions for Class 9 Maths ICSE Chapter 4 Expansions Exercise 4.2
Exercise 4.2 | Q 1.1
Find the cube of: 2a - 5b
Exercise 4.2 | Q 1.2
Find the cube of: 4x + 7y
Exercise 4.2 | Q 1.3
Find the cube of: 3"a" + (1)/(3"a")
Exercise 4.2 | Q 1.4
Find the cube of: 4"p" - (1)/"p"
Exercise 4.2 | Q 1.5
Find the cube of: (2"m")/(3"n") + (3"n")/(2"m")
Exercise 4.2 | Q 1.6
Find the cube of: "a" - (1)/"a" + "b"
Exercise 4.2 | Q 2
If 5x + (1)/(5x) = 7; find the value of 125x^3 + (1)/(125x^3).
Exercise 4.2 | Q 3
If 3x - (1)/(3x) = 9; find the value of 27x^3 - (1)/(27x^3).
Exercise 4.2 | Q 4
If x + (1)/x = 5, find the value of x^2 + (1)/x^2, x^3 + (1)/x^3 and x^4 + (1)/x^4.
Exercise 4.2 | Q 5
If "a" - (1)/"a" = 7, find "a"^2 + (1)/"a"^2 , "a"^2 - (1)/"a"^2 and "a"^3 - (1)/"a"^3
Exercise 4.2 | Q 6.1
If "a"^2 + (1)/"a"^2 = 14; find the value of "a" + (1)/"a"
Exercise 4.2 | Q 6.2
If "a"^2 + (1)/"a"^2 = 14; find the value of "a"^3 + (1)/"a"^3
Exercise 4.2 | Q 7
If "m"^2 + (1)/"m"^2 = 51; find the value of "m"^3 - (1)/"m"^3
Exercise 4.2 | Q 8
If 9"a"^2 + (1)/(9"a"^2) = 23; find the value of 27"a"^3 + (1)/(27"a"^3)
Exercise 4.2 | Q 9.1
If x^2 + (1)/x^2 = 18; find : x - (1)/x
Exercise 4.2 | Q 9.2
If x^2 + (1)/x^2 = 18; find : x^3 - (1)/x^3
Exercise 4.2 | Q 10.1
If "p" + (1)/"p" = 6; find : "p"^2 + (1)/"p"^2
Exercise 4.2 | Q 10.2
If "p" + (1)/"p" = 6; find : "p"^4 + (1)/"p"^4
Exercise 4.2 | Q 10.3
If "p" + (1)/"p" = 6; find : "p"^3 + (1)/"p"^3
Exercise 4.2 | Q 11.1
If "r" - (1)/"r" = 4; find: "r"^2 + (1)/"r"^2
Exercise 4.2 | Q 11.2
If "r" - (1)/"r" = 4; find : "r"^4 + (1)/"r"^4
Exercise 4.2 | Q 11.3
If "r" - (1)/"r" = 4; find : "r"^3 - (1)/"r"^3
Exercise 4.2 | Q 12
If "a" + (1)/"a" = 2, then show that "a"^2 + (1)/"a"^2 = "a"^3 + (1)/"a"^3 = "a"^4 + (1)/"a"^4
Exercise 4.2 | Q 13
If x + (1)/x = "p", x - (1)/x = "q"; find the relation between p and q.
Exercise 4.2 | Q 14
If "a" + (1)/"a" = "p"; then show that "a"^3 + (1)/"a"^3 = "p"("p"^2 - 3)
Exercise 4.2 | Q 15
If ("a" + 1/"a")^2 = 3; then show that "a"^3 + (1)/"a"^3 = 0
Exercise 4.2 | Q 16
If a + b + c = 0; then show that a3 + b3 + c3 = 3abc.
Exercise 4.2 | Q 17
If a + 2b + c = 0; then show that a3 + 8b3 + c3 = 6abc
Exercise 4.2 | Q 18
If x3 + y3 = 9 and x + y = 3, find xy.
Exercise 4.2 | Q 19
If a + b = 5 and ab = 2, find a+ b3.
Exercise 4.2 | Q 20
If p - q = -1 and pq = -12, find p3 - q3
Exercise 4.2 | Q 21
If m - n = -2 and m3 - n3 = -26, find mn.
Exercise 4.2 | Q 22
If 2a - 3b = 10 and ab = 16; find the value of 8a3 - 27b3.
Exercise 4.2 | Q 23
If x + 2y = 5, then show that x3 + 8y3 + 30xy = 125.
Exercise 4.2 | Q 24.01
Simplify:
(4x + 5y)2 + (4x - 5y)2
Exercise 4.2 | Q 24.02
Simplify:
(7a +5b)2 - (7a - 5b)2
Exercise 4.2 | Q 24.03
Simplify:
(a + b)3 + (a - b)3
Exercise 4.2 | Q 24.04
Simplify:
("a" - 1/"a")^2 + ("a" + 1/"a")^2
Exercise 4.2 | Q 24.05
Simplify:
(x + y - z)2 + (x - y + z)2
Exercise 4.2 | Q 24.06
Simplify:
("a" + 1/"a")^3 - ("a" - 1/"a")^3
Exercise 4.2 | Q 24.07
Simplify:
(2x + y)(4x2 - 2xy + y2)
Exercise 4.2 | Q 24.08
Simplify:
(x - 1/x)(x^2 + 1 + 1/x^2)
Exercise 4.2 | Q 24.09
Simplify:
(x + 2y + 3z)(x2 + 4y2 + 9z2 - 2xy - 6yz - 3zx)
Exercise 4.2 | Q 24.1
Simplify:
(1 + x)(1 - x)(1 - x + x2)(1 + x + x2)
Exercise 4.2 | Q 24.11
Simplify:
(3a + 2b - c)(9a2 + 4b2 + c2 - 6ab + 2bc +3ca)
Exercise 4.2 | Q 24.12
Simplify:
(3x + 5y + 2z)(3x - 5y + 2z)
Exercise 4.2 | Q 24.13
Simplify:
(2x - 4y + 7)(2x + 4y + 7)
Exercise 4.2 | Q 24.14
Simplify:
(3a - 7b + 3)(3a - 7b + 5)
Exercise 4.2 | Q 25.1
Evaluate the following :
(3.29)3 + (6.71)3
Exercise 4.2 | Q 25.2
Evaluate the following :
(5.45)3 + (3.55)3
Exercise 4.2 | Q 25.3
Evaluate the following :
(8.12)3 - (3.12)3
Exercise 4.2 | Q 25.4
Evaluate the following :
7.16 x 7.16 + 2.16 x 7.16 + 2.16 x 2.16
Exercise 4.2 | Q 25.5
Evaluate the following :
1.81 x 1.81 - 1.81 x 2.19 + 2.19 x 2.19
## Chapter 4: Expansions
Exercise 4.1Exercise 4.2
## Frank solutions for Class 9 Maths ICSE chapter 4 - Expansions
Frank solutions for Class 9 Maths ICSE chapter 4 (Expansions) include all questions with solution and detail explanation. This will clear students doubts about any question and improve application skills while preparing for board exams. The detailed, step-by-step solutions will help you understand the concepts better and clear your confusions, if any. Shaalaa.com has the CISCE Class 9 Maths ICSE solutions in a manner that help students grasp basic concepts better and faster.
Further, we at Shaalaa.com provide such solutions so that students can prepare for written exams. Frank textbook solutions can be a core help for self-study and acts as a perfect self-help guidance for students.
Concepts covered in Class 9 Maths ICSE chapter 4 Expansions are Algebraic Identities, Expansion of Formula, Special Product, Methods of Solving Simultaneous Linear Equations by Cross Multiplication Method, Expansion of (a + b)3.
Using Frank Class 9 solutions Expansions exercise by students are an easy way to prepare for the exams, as they involve solutions arranged chapter-wise also page wise. The questions involved in Frank Solutions are important questions that can be asked in the final exam. Maximum students of CISCE Class 9 prefer Frank Textbook Solutions to score more in exam.
Get the free view of chapter 4 Expansions Class 9 extra questions for Class 9 Maths ICSE and can use Shaalaa.com to keep it handy for your exam preparation
|
2022-05-17 10:27:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46183550357818604, "perplexity": 6019.932413562342}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517245.1/warc/CC-MAIN-20220517095022-20220517125022-00179.warc.gz"}
|
https://www.numerade.com/questions/a-spaceship-travels-at-constant-velocity-from-earth-to-a-point-710-ly-away-as-measured-in-earths-res/
|
Our Discord hit 10K members! 🎉 Meet students and ask top educators your questions.Join Here!
# A spaceship travels at constant velocity from Earth to a point 710 ly away as measured in Earth's rest frame. The ship's speed relative to Earth is $0.9999 c .$ A passenger is 20 yr old when departing from Earth. (a) How old is the passenger when the ship reaches its destination, as measured by the ship's clock? (b) If the spaceship sends a radio signal back to Earth as soon as it reaches its destination, in what year, by Earth's calendar, does the signal reach Earth? The spaceship left Earth in the year 2000.
## (a) $A g e=30 \mathrm{yr}$(b) Not yet, will arrive on 3420
Gravitation
### Discussion
You must be signed in to discuss.
##### Andy C.
University of Michigan - Ann Arbor
##### Farnaz M.
Simon Fraser University
##### Aspen F.
University of Sheffield
##### Jared E.
University of Winnipeg
Lectures
Join Bootcamp
### Video Transcript
the question. I was asking us how old the passengers be as it was measured by the ship's clock. Well, we know this kind of question is dealing with the time dilation musicality because who, grandma times Delta teaser. Okay, come on. The Lawrence factor, which is he was won over square one, minus the square overseas. Where? Okay. And since the question was asking us the time measured by the A spaceship clock, which means just asking dalati zero and gelati year means the time on earth. Okay, so we have done that. He zero is equal to, uh she goes down. That's he over gun. Okay, which is equal to square root one minus the square was C square times duality. Okay, but we know that me the speed of a spaceship he's given as zero point ni my ni. I see. Okay. And Donald t, this game has ah, memory. The distance on a spaceship to a point is, uh, 710 light years. So light years is the distant that light travel warriors. So she's a 710 Linus. Name is the time. Takes a light to travel. There is 710 years. Okay. And therefore we re plugging. The values will have. Don t zero is equal to square one minus Joe boy. 99 My night C square over C square. Okay. And times the time from a measure from Earth, which is 700 attain years. Okay. And you should give us Donna T zero is about 10 years. And remember, the passenger was 20 years old. So that means during his trip during his trip, eventually, he would become 30 years old on a spaceship. So it's 20 years plus 10 years, which is 30 years. Yeah, so? So he'll be 30 years old. Simple question. Be I was asking us will be the year on earth calendar when a signal was, uh, send back was sending back to Earth. Well, we know he takes 710 years. Well, I to travel at a points. So in order to travel back, it took another seven, 110 years. Okay, so the total denial, because his 710 years times two okay, which is 14 20 years. Oh, yeah. This is a 1420 years and sees the Earth with at 2000. You're the year 2000 Amy's way when the signal rich fact to the earth will be 2000 years less 14 20 years, which is equal to 34 20 years. Okay, so we way rich back the years on counter will be 34 20. Okay? The reason why he took 7 10 years to a trial back and forces because the signal is traveling a speed of lights, OK, and these are my answers for these questions. Thank you.
Georgia State University
#### Topics
Gravitation
##### Andy C.
University of Michigan - Ann Arbor
##### Farnaz M.
Simon Fraser University
##### Aspen F.
University of Sheffield
##### Jared E.
University of Winnipeg
Lectures
Join Bootcamp
|
2021-05-09 14:06:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5919941067695618, "perplexity": 2364.1562263729666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988986.98/warc/CC-MAIN-20210509122756-20210509152756-00304.warc.gz"}
|
http://physics.stackexchange.com/questions/2865/wormholes-time-machines-for-experts-in-gr-maths
|
# Wormholes & Time Machines - for *experts* in GR/maths
EDIT: 21 Jan - Response to the Lubos Expansion appended [in progress, not yet complete]
EDIT: 23 Jan - Visser's calculations appended
EDIT: 26 Jan - Peter Shor's thought experiments rebutted
Summary to Date (26 Jan)
The question is: are the Morris, Thorne, Yurtsever (MTY) and Visser mechanisms for converting a wormhole into a time machine valid? The objection to the former is that the "motion" of a wormhole mouth is treated in an inadmissable manner by the former, and that the valid mathematical treatment of the latter is subsequently misapplied to a case in which a sufficient (and probably necessary) condition does not apply (existence of a temporal discontinuity). It is maintained that extant thought experiments lead to incorrect conclusions because in the former case correct treatment introduces factors that break inertial equivalence between an unaccelerated rocket and co-moving wormhole mouth, and in the latter case especially do not respect the distinction between temporal coordinate values and spacetime separations.
Given that the detailed treatment of the Visser case is reproduced below, a valid argument in favour of a wormhole time machine must show how an interval $ds^2=0$ (the condition for a Closed Timelike Curve) obtains in the absence of a temporal discontinuity.
In considering the MTY (1988) paper, careful consideration should be given to whether the authors actually transport a wormhole mouth or just the coordinate frame that is convenient for describing a wormhole mouth if one happened to exist there.
Issues concerning quantum effects, energy conditions, and whether any created time-machine could persist etc. are off topic; the question is solely about the validity of the reasoning and maths concerning time-machine creation from a wormhole.
The original postings in chronological order are below.
Over at Cosmic Variance Sean Carroll recommended this place for the quality of the contributions so I thought I would try my unanswered question here; it is most definitely for experts.
The question is simple, and will be stated first, but I'll supplement the question with specific issues concerning standard explanations and why I am unable to reconcile them with what seem to be other important considerations. In other words, I'm not denying the conclusions I'm just saying "I don't get it," and would someone please set me straight by e.g. showing me where the apparent counter-arguments/reanalyses break down.
The question is: How can a wormhole be converted into a time machine?
Supplementary stuff.
I have no problem with time-travel per se, especially in the context of GR. Godel universes, van Stockum machines, the possibilities of self-consistent histories, etc. etc. are all perfectly acceptable. The question relates specifically to the application of SR and GR to wormholes and the creation of time-differences between the mouths - leading to time-machines, as put forward in (A) the seminal Morris, Thorne, Yurtsever paper (Wormholes, Time Machines, and the Weak Energy Condition, 1987) and explained at some length in (B) Matt Visser's Book (Lorentzian Wormholes From Einstein To Hawking, Springer-Verlag, 1996).
A -- Context. MTY explore the case of an idealised wormhole in which one mouth undertakes a round trip journey (i.e. undergoes accelerated motion, per the standard "Twins Paradox" SR example).
What is unclear to me is how MTY's conclusions are justified given that the moving wormhole mouth is treated as moving against a Minkowskian background: specifically, can someone explain how the wormhole motion is valid as a diffeomorphism, which my limited understanding suggests is the only permitted type of manifold transformation in relativity.
Elaborating... wormhole construction is generally described as taking an underlying manifold, excising two spherical regions and identifying the surfaces of those two regions. In the MTY case, if the background space is Minkowski space and remains undistorted, then at times t, and t' the wormhole mouth undergoing "motion" seems to identify different sets of points (i.e. different spheres have to be excised from the underlying manifold) and so there is no single manifold, no diffeomorphism. [Loose physical analogue: bend a piece of paper into a capital Omega shape and let the "heels" touch... while maintaining contact between them the paper can be slid and the point of contact "moves" but different sets of points are in contact]
I'm happy with everything else in the paper but this one point, which seems to me fundamental: moving one wormhole mouth around requires that the metric change so as to stretch/shrink the space between the ends of the wormhole, i.e. the inference of a time-machine is an artefact of the original approach in which the spacetime manifold is treated in two incompatible ways simultaneously.
Corollary: as a way of re-doing it consistently, consider placing the "moving" wormhole mouth in an Alcubierre style warp bubble (practicality irrelevant - it just provides a neat handle on metric changes), although in this case v is less than c (call it an un-warp bubble for subluminal transport, noting in passing that it is in fact mildly more practicable than a super-luminal transport system). As per the usual Alcubierre drive, there is no time dilation within the bubble, and the standard wormhole-mouth-in-a-rocket thought-experiment (Kip Thorne and many others) produces a null result.
B -- Context. (s18.3 p239 onwards) Visser develops a calculation that begins with two separate universes in which time runs at different rates. These are bridged and joined at infinity to make a single wormhole universe with a temporal discontinuity. The assumption of such a discontinuity does indeed lead to the emergence of a time machine, but when a time-machine is to be manufactured within a single universe GR time dilation is invoked (one wormhole mouth is placed in a gravitational potential) to cause time to flow at different rates at the two ends of the wormhole (to recreate the effect described in the two universe case in which time just naturally flowed at different rates). However, in the case of a simple intra-universe wormhole there is no temporal discontinuity (nor can I see how one might be induced) and the application of the previously derived equations produces no net effect.
Thus, as I read the explanations, neither SR nor GR effects create a time-machine out of a (intra-universe) wormhole.
Where have I gone wrong?
Many thanks,
Julian Moore
Lubos' initial answer is informative, but - like several other comments (such as Lawrence B. Crowell's) - the answerer's focus is on the impossibility of an appropriate wormhole per se, and not the reasoning & maths used by Morris, Thorne, Yurtsever & Visser.
I agree (to the extent that I understand them) with the QM issues, however the answer sought should assume that a wormhole can exist and then eliminate the difficulties I have noted with the creation of time differences between the wormhole mouths. Peter Shor's answer assumes that SR effects will apply and merely offers a way to put a wormhole mouth in motion; the question is really, regardless of how motion might be created, does & how does SR (in this case) lead to the claimed effect?
I think the GR case is simplest because the maths in Visser's book is straightforward, and if there is no temporal discontinuity, the equations (which I have no problem with) say no time machine is created by putting a wormhole mouth into a strong gravitational potential. The appropriate answer to the GR part of the question is therefore to show how a time machine arises in the absence of a temporal discontinuity or to show how a temporal discontinuity could be created (break any energy condition you like, as far as I can see the absolute discontinuity required can't be obtained... invoking QM wouldn't help as the boundary would be smeared by QM effects.
Lubos said that "an asymmetry could gradually increase the time delay between the two spacetime points that are connected by the wormhole", I'm saying, "My application of the expert's (Visser's) math says it doesn't - how is my application in error?"
The SR case is much trickier conceptually. I am asserting that the wormhole motion of MTY is in principle impossible because, not to put too fine a point on it, the wormhole mouth doesn't "move". Consider a succession of snapshots showing the "moving" wormhole mouth at different times and then view them quickly; like a film one has an appearance of movement from still images, but in this case the problem is that the wormhole mouth in each frame is a different mouth. If the background is fixed Minkowski space (ie remains undistorted) at times t and t' different regions of the underlying manifold have to be excised to create the wormhole at those times... so the wormhole manifolds are different manifolds. If the background is not fixed Minkowski space, then it can be distorted and a mouth can "move" but this is a global rather than a local effect, and just like the spacetime in an Alcubierre warp bubble nothing is happening locally.
Consider two points A & B and first stretch and then shrink the space between them by suitable metric engineering: is there are time difference between them afterwards? A simple symmetry argument says there can't be, so if the wormhole mouths are treated as features of the manifold rather than objects in the manifold (as it seems MTY treat them) then the only way to change their separation is by metric changes between them and no time machine can arise.
Of course, if a time-machine could be created either way, it would indeed almost certainly destroy itself through feedback... but, to repeat, this is not the issue.
Thanks to Robert Smith for putting the bounty on this question on my behalf, and thanks to all contributors so far.
Edit 2: Re The Lubos Expansion
Lubos gives an example of a wormhole spacetime that appears to be a time-machine, and then offers four ways of getting rid of the prospective or resulting time machine for those who object in principle. Whilst I appreciate the difficulties with time-machines I am neither for nor against them per sunt, so I will concentrate on the creation issue. I have illustrated my interpretation of Lubos' description below.
As I understand it, there is nothing in GR that in principle prevents one from having a manifold in which two otherwise spacelike surfaces are connected in such a way as to permit some sort of time travel. This is the situation shown in the upper part of the illustration. The question is how can the situation in the upper part of the illustration be obtained from the situation in the lower part?
Now consider the illustration below of a spacelike surface with a simple wormhole (which I think is a valid foliation of e.g. a toroidal universe). As time passes, the two mouths move apart thanks to expansion of space between them, and then close up again by the inverse process (as indicated by the changing separation of the dotted lines, which remain stationary)
Edit 3: Visser's calculations reproduced for inspection
Consider the result in the case where there is no temporal discontuity using the equations derived for the case where there is a discontinuity, given below
Visser, section 18.3, p239 The general metric for a spherically symmetric static wormhole
$$ds^2~=~-e^{2\phi(l)}~dt^2~+~dl^2~+~r^2[d\theta^2~+~sin^2\theta~d\psi^2]~~~~~(18.35)$$
Note that "there is no particular reason to demand that time run at the same rate on either side of the wormhole. More precisely, it is perfectly acceptable to have $ф(l=+\infty)~\neq~ф(l=-\infty)$"
Reduce to (1+1) dimensions for simplicity and consider $$ds^2~=~-e^{2\phi(l)}~dt^2~+~dl^2~~~~~~~~~~(18.36)$$
The range of l is (-L/2,+L/2) and l=-L/2 is to be identified with l=+L/2. Define
$$\phi_\pm\equiv\phi(l=\pm~L/2); ~\Delta\phi\equiv\phi_+~-~\phi_-~~~~~~~(18.37)$$
at the junction $l=\pm~L/2~~$ the metric has to be smooth, ie. $ds = \sqrt{g_{\mu\nu}{dx^\mu}{dx^\nu}}$ is smooth, implying $$d\tau=e^{\phi_-}dt_-=e^{\phi_+}dt_+~~~~~~~~~(18.38)$$ Define the time coordinate origin by identifying the points $$(0,-L/2)\equiv(0,+L/2)~~~~~~(18.39)$$ then the temporal discontinuity is $$t_+=t_-e^{(\phi_-~-~\phi_+)}~=~t_-e^{(-\Delta\phi)}~~~~~~(18.40)$$ leading to the identification $$(t_-,-L/2)\equiv(t_-e^{-\Delta\phi},+L/2)~~~~~~~~~(18.41)$$ which makes the metric smooth across the junction. Now consider a null geodesic, i.e. ds=0, which is $${dl\over{dt}}=\pm{e^{+\phi(l)}}~~~~~(18.42)$$ where the different signs correspond to right/left moving rays. Integrate to evaluate for a right moving ray, with conventions $t_f$ is the final time and $t_i$ is the initial time $$[t_f]_+=[t_i]_-+\int_{-L/2}^{+L/2}e^{-\phi(l)}dl~~~~~(18.43)$$ then apply the coordinate discontinuity matching condition to determine that the ray returns to the starting point at coordinate time $$[t_f]_-=[t_f]_+e^{\Delta\phi} = [[t_i]_-+\oint{e^{-\phi(l)}}dl]e^{\Delta\phi}~~~~~(18.44)$$ A closed right moving null curve exists if $t[_f]_-=[t_i]_-$, i.e. $$[t_i]^R_-={{\oint{e^{-\phi(l)}dl}}\over{e^{\Delta\phi}-1}}~~~~~(18.45)$$
Edit 4: Peter Shor's thought experiments re-viewed
Peter Shor has acknowledged (at the level of "I think I see what you mean...") both the arguments against wormhole time machine creation (the absence of the required temporal discontinuity in spacetime if GR effects are to be used, and that wormhole mouth motion requires metric evolution inconsistent with the Minkowksi space argument of MTY) but still believes that such a wormhole time-machine can be created by either of the standard methods offered a thought experiment. This is a counter to those thought experiments and whilst it does not constitute proof of the contrary (I don't think such thought experiments are rich enough to provide proof either way), I believe it casts serious doubt on their interpretation, thereby undermining the objections.
The counter arguments rely on the key distinction between the values of the time coordinate and the separation of events ($ds^2$). Paragraphs are numbered for ease of reference.
(1) Consider the classic Twins Paradox situation and the associated Minkowksi diagram. When the travelling twin returns she has the same t coordinate ($T_{return}$) as her stay-at-home brother (who said they had to be homozygous? :) ) As we all know, despite appearances to the contrary on such a diagram, the sister's journey is in fact shorter (thanks to the mixed signs in the metric), thus it has taken her "less time" to reach $T_{return}$ than it took her brother. Less time has elapsed, but she is not "in the past".
(2) Now consider the gravity dunking equivalent Twins scenario. This time he sits in a potential well for a while and then returns to his sister who has stayed in flat space. Again their t coordinate is the same, but again there is a difference in separations; this time his is shorter.
(3) Now for the travelling/dunking Twin substitute a wormhole mouth; then the wormhole mouths are brought together they do so at the same value of t. The moving mouths may have "travelled" shorter spacetime distances but they are not "in the past"
(4) Suppose now we up the ante and give the travelling/dunking Twin a wormhole mouth to keep with them...
(5) According to the usual stories, Mr A can watch Ms A receding in her rocket - thereby observing her clock slow down - or he can communicate with her through the wormhole, through which he does not see her clock slow down because there is no relative motion between the wormhole mouth and Ms A. Since this seems a perfectly coherent picture we are inevitably led to the conclusion that a time machine comes into existence in due course.
(6) My objection to this is that there are reasons to doubt what is claimed to be seen through the wormhole, and if the absence of time dilation is not observed through the wormhole we will not be led to the creation of a time machine. So, what would one see through the wormhole, and why?
(7) I return to the question of the allowable transformations of the spacetime manifold. If a wormhole mouth is rushing "through" space, the space around it must be subject to distortion. Now, whilst there are reasons to doubt that one can arrange matter in such a way as to create the required distortion (the various energy condition objections to the original Alcurbierre proposal, for instance), we are less concerned about the how and more concerned with the what if (particularly since, if spacetime cannot "move" to permit the wormhole mouth to "move", the whole question becomes redundant). The very fact that spacetime around the "moving" wormhole mouth is going to be distorted suggests at least the possibility that what is observed through the wormhole is consistent with what is observed the other way, or that effects beyond the scope of the equivalence principle demonstrate that observation through the wormhole is not equivalent to observing from an inertial frame. Unfortunately I don't have the math to peform the required calculations, but insofar as there is a principled objection to the creation of a time machine as commonly described, I would hope that someone would check it out.
(8) What then for the dunking Twin? In this case there is no "motion" of the wormhole mouth, so no compensating effects can be sought from motion. However, I believe that one can apply to the metric for help. Suppose that the wormhole mouth in the gravitational well is actually embedded in a little bit of flat space, then (assuming the wormhole itself is essentially flat) the curvature transition happens outside the mouth and looking through the wormhole should be like looking around it: Mr A seems very slowed down. If, instead, the wormhole mouth is fully embedded in the strongly curved space that Mr A also occupies, then the wormhole cannot be uniformly flat and again looking through the wormhole we see exactly what we see around it (at least as far as the tick of Mr A's watch is concerned.) but the transition from flat to curved space (and hence the change in clock rates) occurs over the interior region of the wormhole.
(9) Taken with the "usual" such thought experiments, we now have contradictory but equally plausible views of the same situations, and they can't both be right. I feel however that no qualitative refinements will resolve the issue, thus I prefer the math, which seems to make it quite plain that the usually supposed effects do not in fact occur. Similarly, if you disagree that the alternative view is plausible, since the "usual" result does not seem plausible to me,maths again provides the only common ground where the disagreement can be resolved. I urge others to calculate the round-trip separations using the equations provided from Visser's work.
(10) I say the MTY paper is in error because it treats spacetime as flat, rigid Minkowskian and then treats the "motion" of a wormhole mouth in a way that is fundamentall incompatible with a flat rigid background.
(11) I say Visser is in error in applying his (correct) inter-universe wormhole result to an intra-universe wormhole where the absence of the temporal discontinuity in the latter nullifies the result.
(12) These objections have been acknowledged but but no equally substantive arguments to undermine them (i.e. to support the extant results) has been forthcoming; they have not been tackled head on.
(14) I am not comfortable with any of the qualitative arguments either for against wormhole-time machines; an unending series of thought experiments is conceivable, each more intricate and ultimately less convincing than the last. I don't want to go there; look at the maths and object rigorously if possible.
-
"The question is: How can a wormhole be converted into a time machine?" -- that must be also a title for your question. – Kostya Jan 14 '11 at 13:33
What? A question about me? Time Machines are awesome. :) – elyse Jan 15 '11 at 1:10
You say "Consider two points A & B and first stretch and then shrink the space between them by suitable metric engineering: is there are time difference between them afterwards? A simple symmetry argument says there can't be" But this is exactly the twin paradox in special relativity. How does the fact that the objects are wormhole mouths make any difference? I guess I don't understand your question. – Peter Shor Jan 20 '11 at 18:54
@Julian: If you move the wormhole mouths with the Alcubierre warp-bubble, then they presumably have no time dilation. But if you move them any other way, they should have the same time dilation you get from GR. Why should they behave any differently? – Peter Shor Jan 21 '11 at 10:58
@Julian: there is a difference between gravitational potential and curvature. You should learn general relativity better. At the event horizon of a large black hole, there is low curvature but very high gravitational potential. Look in Wikipedia (or at the equations) if you don't believe me. – Peter Shor Jan 28 '11 at 17:31
If you have a charged black hole, and a strong enough magnet, you should be able to move the charged black hole without any theoretical difficulty. A wormhole mouth shouldn't be any different.
-
Right, but this still doesn't directly answer the question whether the wormhole may become a time machine, does it? – Luboš Motl Jan 20 '11 at 8:42
@Lubos & Peter. I've added some extensive clarification to the original question to help focus on the points at issue. – Julian Moore Jan 20 '11 at 9:34
this answer has been expanded at the end.
I am convinced that macroscopic wormholes are impossible because they would violate the energy conditions etc. so it is not a top priority to improve the consistency of semi-consistent stories. At the same moment, I also think that any form of time travel is impossible as well, so it's not surprising that one may encounter some puzzles when two probably impossible concepts are combined.
However, it is a genuinely confusing topic. You may pick Leonard Susskind's 2005 papers about wormholes and time travel:
http://arxiv.org/abs/gr-qc/0503097
http://arxiv.org/abs/gr-qc/0504039
Amusingly enough, for a top living theoretical physicist, the first paper has 3 citations now and the second one has 0 citations. The abstract of the second paper by Susskind says the following about the first paper by Susskind:
"In a recent paper on wormholes (gr-qc/0503097), the author of that paper demonstrated that he didn't know what he was talking about. In this paper I correct the author's naive erroneous misconceptions."
Very funny. The first paper, later debunked, claims that the local energy conservation and uncertainty principle for time and energy are violated by time travel via wormholes. The second paper circumvents the contradictions from the first one by some initial states etc. The discussion about the violation of the local energy conservation law in Susskind's paper is relevant for your question.
I think that if you allowed any configurations of the stress-energy tensor - or Einstein's tensor, to express any curvature - it would also be possible for one throat of an initial wormhole to be time-dilated - a gravity field that is only on one side - and such an asymmetry could gradually increase the time delay between the two spacetime points that are connected by the wormhole. For example, you may also move one endpoint of the wormhole along a circle almost by the speed of light. The wormhole itself will probably measure proper time on both sides, but the proper time on the circulating endpoint side is shortened by time dilation, which will allow you to modify the time delay between the two endpoints.
Whatever you try to do, if you get a spacetime that can't be foliated, it de facto proves that the procedure is physically impossible, anyway. Sorry that I don't have a full answer - but that's because I fundamentally believe that the only correct answer is that one can't allow wormholes that would depend on negative energy density, and once one allows them, then he pretty much allows anything and there are many semi-consistent ways to escape from the contradictions.
Expansion
Dear Julian,
I am afraid that you are trying to answer more detailed questions by classical general relativity than what it can answer. It is clearly possible to construct smooth spacetime manifolds such that a wormhole is connecting places X, Y whose time delay is small at the beginning but very large - and possibly, larger than the separation over $c$ - at the end. Just think about it.
You may cut two time-like-oriented solid cylinders from the Minkowski spacetime. Their disk-shaped bases in the past both occur at $t=0$ but their disked-shaped bases in the future appear at $t_1$ and $t_2$, respectively. I can easily take $c|t_1-t_2| > R$ where $R$ is the separation between the cylinders. Now, join the cylinders by a wormholes - a tube that goes in between them. In fact, I can make the wormhole's proper length decreasing as we go into the future. It seems pretty manifest that one may join these cylinders bya tube in such a way that the geometry will be locally smooth and Minkowski.
These manifolds are locally smooth and Minkowski, when it comes to their signature. You can calculate their Einstein's tensor - it will be a function of the manifold. If you allow any negative energy density etc. - and the very existence of wormholes more or less forces you to allow negative energy density - then you may simply postulate that there was an energy density and a stress-energy tensor that, when inserted to Einstein's equations, produced the particular geometry. So you can't possibly avoid the existence of spacetime geometries in which a wormhole produces a time machine sometime in the future just in classical general relativity without any constraints.
The only ways to avoid these - almost certainly pathological - configurations is to
1. postulate that the spacetime may be sliced in such a way that all separations on the slice are spacelike (or light-like at most) - this clearly rules "time traveling" configurations pretty much from the start
2. appreciate some kind of energy conditions that prohibits or the negative energy densities
3. impose other restrictions on the stress-energy tensor, e.g. that it comes from some matter that satisfies some equations of motion with extra properties
4. take some quantum mechanics - like Susskind - into account
If you don't do either, then wormholes will clearly be able to reconnect spacetime in any way they want. This statement boils down to the fact that the geometry where time-like links don't exist at the beginning but they do exist at the end may be constructed.
All the best Lubos
-
Hi Lubos (quality input already!) Thanks! I read the Susskind papers a long time ago (though with limited comprehension given the QM perspective). Loved the self-deprecation. Not many papers make one laugh aloud. I agree in general with the difficulties re WEC etc. but the problem is not whether wormholes are feasible, it's about my lack of understanding of the MTY/Visser arguments & math. Your penultimate para seems to be a restatement of MTY, treating a mouth as an object in spacetime rather than a feature of it. Grateful for the input but still feel stupid. – Julian Moore Jan 15 '11 at 10:46
Dear Julian, this is a good point whether wormholes are "objects with many features such as the mouths" or "paired objects in spacetime". In linearized GR, it's possible to look at it from the latter perspective, especially when the radius of the mouth is very small - relatively to other length scales in the problem, such as the distance between the mouths. Of course, for this "much greater" inequality to hold, one needs to fine-tune positive and negative energies across the solution in a rather unnatural way. – Luboš Motl Jan 20 '11 at 8:40
Lubos, @Expansion... I'm taking that on board (wish there was a whiteboard to hand) and thinking hard... will respond more fully later. Thanks – Julian Moore Jan 20 '11 at 16:53
There is a more complete answer appended here
The problem with the conversion is at the point the two worm hole openings are separated by a lightlike interval. This is a Cauchy horizon and there is this winding of paths, traversed by particles or photons, which pile up at the horizon. This would include virtual particles or the vacuum as well as I see it. The worm hole has a Lansczoc junction with a shell of negative mass-energy that permits the wormhole to exist. The winding up of vacuum modes on the Cauchy horizon would introduce enormous fluctuation of positive energy which I think would destroy the wormhole. I think this would destroy the solution, so that even if a wormhole can exist any attempt to convert it to a time machine would destroy it.
The other problem is that in QM we have that momentum is the generator of a position change. Yet with multiply connected spacetimes there is a funny problem of an amgibuity. A particle can travel from $x$ to $y$ by two types of momentum generators or transformation $e^{ipx}$. So it is not entirely clear whether the wormhole is consistent with quantum mechanics.
This is just a brief answer at this point. I will try to work out a more complete version in a few days.
addendum: This is not airtight, but it is worth a couple of evening’s work. I think this gives a pretty good argument for why you can’t transform a wormhole into a time machine. I will have to confess that I am a big enemy of these quirky spacetimes which give faster than light or time reversal results. They are mathematical result, but quantum mechanics kills them in physics. Further, the averaged weak energy condition is violated $T^{00}~<~0$, which means the quantum field which acts as this source has no lower bounded eigenvalue. That is a complete disaster.
A worm hole has a membrane or surface with a field that has an energy density. This starts with a static wormhole as a necessary starting point. Start with the Reissner-Nordstrom metric $$ds^2~=~-F(r)dt^2~+~{1\over {F(r)}}dr^2~+~d\Omega^2,$$ for $F(r)~=~1~-~r_0/r$. The event horizon with radius $r_0$ is replaced by a junction or thin shell at $r_0(t)~\rightarrow~r_0$ $+~\delta r(t)$. This defines the openings of the wormhole in two regions. The normal vectors to this sphere are $$n^\mu~=~\pm\Big({{\dot r_0}\over{F(r_0)}},~\sqrt{F(r_0)~-~{\dot r_0}},~0,~0\Big), n_\mu~=~\pm\Big({\dot r_0},~{{\sqrt{F(r_0)~-~{\dot r_0}^2}\over {F(r_0)}}}, ~0, ~0\Big).$$ The sign is an indication of the direction of the normal on the sphere. The extrinsic curvature is computed as $K_{\mu\nu}~=~{1\over 2}n^\sigma\partial_\sigma g_{\mu\nu}$ and the components are $$K_{\theta\theta}~=~\pm {1\over r_0}\sqrt{F(r_0)~-~{\dot r_0}^2}$$ $$K_{tt}~=~\mp{1\over 2}{1\over{\sqrt{F(r_0)~-~{\dot r_0}^2}}}\Big({{\partial F(r_0)}\over{\partial r_0}}~-~{\ddot r_0}\Big).$$
The energy density $\rho~=~G^{00}$ $=~1/4\pi K_{\theta\theta}$ is integrated on a pillbox configuration to find the jump in the surface energy on the membrane, which is the jump in extrinsic curvature at $r~=~r_0(t)$. For a static membrane the tension equals the energy density so there is an overall conservation of the stress-energy on the membrane. The energy density is found from the ADM tensors $$G^{00}~=~\rho~=~{1\over{2\pi r_0}}\sqrt{F(r_0)~-~{\dot r_0}^2}$$ and the tension is $$\tau~=~-{1\over{4\pi}}(K_{\theta\theta}~-~K_{tt}).$$ The static membrane then infers the following equation $${{F(r_0)}\over{r_0^2}}~-~\Big( {{\dot r_0}\over {r_0}}\Big)^2~=~4\pi^2\tau^2.$$ The conservation of energy $\partial\rho /\partial t$ applied to this constraint equation results in the evolution equation $${\ddot r_0}~+~{1\over {r_0}}\Big(F(r_0)~-~{\dot r_0}^2\Big)~-~{{\partial F(r_0)}\over {\partial r_0}}~=~0.$$ This describes the dynamical evolution of the wormhole in a purely classical setting.
The evolution of the vacuum is governed by the discontinuity in the Einstein field at $r~=~r_0$. This discontinuity is then given by $$\lim_{\epsilon~\rightarrow~0}\int_{-\epsilon}^{+\epsilon} G^{00}dn~=~\delta\rho~=~-{1\over{2\pi r_0}}\sqrt{F(r_0)~-~(\dot r_0~+~U)^2},$$ where $U~=~\sqrt{U^\mu U_\mu}$ is the speed of the $r_1$ opening. The value of this change in energy density is positive. To induce the wormhole a field of some sort of field $\phi$ with a negative energy density $T^{00}(\phi)~<~0$. This discontinuity is determined by this field, ie $\delta\rho~=~T^{00}(\phi)$.
With quantum mechanics we have a vacuum which winds through the wormhole. The creation and annihilation operators $a^\dagger$ $a$ for this field are boosted relative to the moving opening. This results in a Bogoliubov transformed operator for these operators with $$b~=~a~\cosh(g(s))~+~a^\dagger \sinh(g(s)),~b^\dagger~=~ a^\dagger~\cosh(g(s))~+~a \sinh(g(s)).$$ The term $g(s)~=~gs$ for a constant acceleration. We interpret this as a rapidity angle which varies with respect to the proper time $s$ and which diverges when the two openings have a lightlike separation. The Hamiltonian is found to be $$b^\dagger b~=~a^\dagger a~+~\big((a^\dagger)^2~+~a^2\big)\cosh(g(s))\sinh(g(s))$$ The expectation of the field remains the same for $E_n~=~\langle n|b^\dagger b|n\rangle$. The off diagonal term is a form of the squeezed state operator, which removes the vacuum state of photons off quadrature. The uncertainty in the momentum and positions are evaluated so that $\langle\Delta x\Delta p\rangle~=~(1/2)\sinh(2gs)$ which is the evaluation of off diagonal term with completeness. The entropy can be evaluated from $${k\over 2}\ln\big(\langle\Delta x\rangle\langle\Delta p\rangle~-~\langle\Delta x\Delta p\rangle^2 \big)~=~k~\ln(\cosh(2g(s)).$$ This entropy is associated with the generation of photons from the vacuum, or as a form of Hawking radiation. The energy of this radiation is positive. As the opening of the wormhole approach a lightlike separation this radiation will demolish the wormhole by overwhelming the negative energy at the junction.
-
Dear Lawrence, you've put so much effort into this I'm embarrassed to say I don't understand it beyond the generalities - but I'm sure many others will benefit from the detail. However, insofar as I think I understand the general thrust and one or two details: you put a wormhole mouth in motion and say "The creation and annihilation operators... are boosted relative to the moving opening", which is at the heart of my general difficulty. How is the movement of a feature of spacetime defined? I think - see also the answer to Peter Shor re the Twin's paradox - that everything is locally at rest – Julian Moore Jan 21 '11 at 10:19
(cont) in which case (in this particular case) would the operators in fact be boosted? More generally. the fundamental issue remains with the mechanism for the creation of the time difference between the ends of the wormhole and not what would happen as a result. [I'm sorry my physics is rusty, and this is way beyond a lowly BSc... ignorance is not bliss!] – Julian Moore Jan 21 '11 at 10:20
The transformation of the operators is somewhat qualitative at this point. A part of the problem is that the twin paradox argument involves a variable acceleration. A derivation of Unruh radiation invokes an “eternal acceleration,” and general physics for a variable acceleration is not known. There is a bit of an physical assumptions here. From the perspective of an exterior observer the acceleration varies in a way so that spacelike separated fields on the wormhole openings are transformed into lightlike separated fields on the Cauchy horizon. – Lawrence B. Crowell Jan 21 '11 at 12:24
Continued: However, from the perspective of traversing the wormhole the fields retain a spacelike separation across the junction or wormhole membrane demarking the openings. So depending upon which path you take fields are lightlike separated or spacelike separated. Further, there must be some spacetime transformation between these two. The rapidity $\gamma$ for such a transformation is “$\infty$” and the boost function is $\sim~cosh(\gamma)$ diverges. – Lawrence B. Crowell Jan 21 '11 at 12:25
continued: As I said this is a few hours effort and the argument is not airtight. The transformation argument needs to be put on more solid ground. It is an imposition of a local transformation rule on fields that puts sharp separation between timelike, spacelike and lightlike intervals onto a multiply connected spacetime. The multiply connected spacetime intertwines spacelike and timelike intervals in a way which is not possible by Lorentzian transformation. – Lawrence B. Crowell Jan 21 '11 at 12:26
Can someone explain how the wormhole motion is valid as a diffeomorphism, which my limited understanding suggests is the only permitted type of manifold transformation in relativity.
Diffeomorphisms are about coordinate transformations, and have nothing to do with anything physical. If, in a coordinate patch, you switch to a different coordinate system, then if the transformation (of coordinates) is a diffeomorphism no physics will change. If moving from one coordinate patch to another, you need to convert coordinates in the overlap region, and in the overlap region the transformation will be a diffeomorphism. But a diffeomorphism is just about two coordinate systems describing the same physics. If has nothing to do with motion (except in as far as for a given motion there might be a naturally associated coordinate systems, but we can use any coordinate system we want, so who cares if one has a simpler form, they all work fine).
Can a wormhole be converted into a time machine?
Yes. I'll describe in detail how to make a time machine out of a wormhole, without "moving" either wormhole, and hopefully you won't be bothered by it. First note that if you arrange (positive energy density) matter in a spherical shell and send one twin to the inside of the shell and another sits on the outside of the shell (both at rest in one and the same coordinate system where the metric is static), then the twin on the outside of the shell dies first (ages more quickly). The twin inside is in a perfectly flat spacetime. The twin outside is a Schwarzschild spacetime, and we can make the shell rather thin and undense so there need not be any strong curvature anywhere (you mentioned strong curvature in your gravity dunking example, and it is unnecessary to the effect in question).
Now, imagine the sphere large enough that you could place a wormhole at the center without it changing the metric out by/near the shell very much. So make your wormhole with a pair of mouths. Make them far enough apart that there is room for you to build a shell around one with the other one being outside the shell. The key here is that I'm not asking you to move the wormholes (or to move them moving fast relative to each other), just that they have their mouths far apart because a wormhole with both mouths close can't do much at all. Given that they are far apart, then build a spherical shell (of positive energy density) around one of them. So we don't move either wormhole (so we don't need to worry about your Omega, to which I can't understand your objection in the slightest since diffeomorphisms have nothing to do with physics, the particles don't know about coordinate systems and don't know when or if we switch from one coordinate system to another).
Now for the standard argument, you can have a region of wormhole, you can use Visser's metric $$ds^2=-e^{2\phi(l)}dt^2+dl^2+dr^2(l)\left[d \theta^2+\sin^2\theta d\varphi^2\right]$$ and have $\phi(l)$ approach a constant as $l\rightarrow +\infty$, a possibly different constant as $l\rightarrow -\infty$. It sounds like you are fine with this if the $\pm l$ correspond to "different universes." Let's explicitly ask for more though, let's also insist (like page 101 of Visser) that $\lim_{l\rightarrow +\infty}r(l)/|l|=1=\lim_{l\rightarrow -\infty}r(l)/|l|$. So we have a single coordinate system $(t,l,\theta, \phi)$ for both sides of the wormhole (as a technicality the coordinate system is bad at the north and south poles and can't go all the way around, this happens even for spherical coordinates in a totally flat spacetime and I bring this up not to be pedantic but because the exact same issue will come up when Visser makes this an intrauniverse wormhole).
Define $\phi_+=lim_{l\rightarrow +\infty}e^{\phi(l)}$ and $\phi_-=lim_{l\rightarrow -\infty}e^{\phi(l)}$. Even though we have one coordinate system for everything, imagine that far from the throat, people prefer to use $T_+=e^{\phi_+}t$ or $T_-=e^{\phi_-}t$, then we could always use three coordinate systems, because coordinate systems are up to us. The point is that when $|l|$ is large, the new coordinate systems have a metric that numerically looks very very close to that of the flat spacetime coordinate system. So you can imagine taking two flat empty universes, cutting out a giant ball from each and putting our wormhole in between and not much geometry needs to change.
Simply do the same with one flat spacetime, except cut two giant balls out that are far from each other. And while Visser didn't say this, I'm saying that you can do this even when $\phi_+=\phi_-$. Hopefully you agree here, because we haven't made a time machine yet. OK. Now we cut those two giant balls out of a flat empty spacetime, but those balls themselves are inside an even larger ball, and outside that ball there could have been a giant shell of positive energy density matter (that would leave a flat spacetime metric inside, and that flat sapcetime metric inside is all that we needed to patch in our wormhole), so why not assume the entire wormhole and both mouths were not inside a flat empty spacetime but where inside a spherical shell of matter. We can do that, since the math inside isn't any different. Still no time machine.
Now, let's make a time machine. We have the wormholes, with equal time rate around each mouth and far from each other. Then we go all the way out to the shell of mater and remove a thin layer and take it over to the region outside one mouth and build a thin shell there. Now there is a difference in time flow from right inside the shell and right outside the shell (this already happens without wormholes, so again there shouldn't be any controversy, and all we moved was ordinary positive energy density matter, so hopeful that is unproblematic as well).
The secret to the time machine now is to wait. How long we wait depends on how far (through external space and through the wormhole) the mouths are from each other, and how much matter we put in the shell. And when we place the two mouths far apart, we need them to also be far apart relative to the curvature that will be induced by the spherical shell we later place about one of the mouths. But before we do make the time machine, I want to talk about what Visser did since I think your question might actually be more about that than how to make a time machine.
Where have I gone wrong?
Since you seem extremely worried about a coordinate discontinuity without citing any reason to be worried, that's probably at least one conceptual error. Imagine the spherical coordinate system, when you move in the $\varphi$ direction the coordinate changes smoothly and, say, increases but eventually you end up back where you started. How can it always increase but yet you end up at an earlier/lower value. An elementary differential geometry text will tell you that you need two coordinate systems, for instance one that doesn't include the international date line and one that excludes a line that is twelve time zones away, and your elementary textbook will tell you to switch coordinate systems from one to the other before reaching the place your coordinate system fails. This works. However it is a bit overkill for experienced practitioners. Instead you could use the one coordinate system and simply identify $\varphi=2\pi$ and $\varphi=0$ and accept that there is a coordinate discontinuity but that it means nothing except that your skill always you to be frugal with coordinate patches at the expense of a minor hassle with coordinate discontinuities.
So now lets redo our intra-universe wormhole before we turn it into a time machine. Unlike Visser we will avoid discontinuities by using many coordinate systems. So first we take a giant ball of flat spacetime (later this will be the inside of the original big shell). Inside it we remove two huge balls, but first we imagine a spherical coordinate system centered about the center of each ball. So we have two spherical coordinate systems in a regular flat spacetime. Nothing weird or problematic. And you can clearly switch from one to the other.
So now we put in our wormhole. It has it's own coordinate system as Visser gave (and as you and I both wrote up). We could cutout a large positive value of $l$ and a very negative value of $l$ and sew it to the spherical boundaries of where those balls used to be. But the elementary textbook way would be to include values in the wormhole coordinate patch that are slightly more positive and more negative, so that when you arrive outside the former holes you are still in the wormhole coordinate system for a bit then switch to the spherical coordinate system of that mouth of the wormhole. Then later later when you get closer to the other mouth than the first mouth you switch to that other spherical coordinate system, then when you get super close to the other mouth you switch to the coordinate system of the wormhole again this time for a very negative value of $l$ then you move in more and eventually are inside the wormhole again. This could have been done all entirely inside the one wormhole coordinate system if you just identified some points with very positive $l$ with some points of very negative $l$. But if that advanced technique confuses you don't do it.
But it has nothing to do with one universe or two, it's just about making one coordinate system work when an elementary textbook would say you need to use more than one.
However, in the case of a simple intra-universe wormhole there is no temporal discontinuity (nor can I see how one might be induced) and the application of the previously derived equations produces no net effect.
So far we could start with a super giant shell of matter, inside have a flat spacetime. Inside that cut out two balls very large and very far from each other. Then we can take our wormhole and sew it up together with time on both ends ticking at the same rate, have three coordinate systems (one for the throat, one for the region right outside one mouth, one for the region outside the other mouth, note that the wormhole coordinates could have worked for everything with an identification, and that having both coordinates outside the mouths is redundant to each other, but that then at least the transitions between each coordinate system is standard and not confusing). What Visser calls the $l=\pm L/2$ identification is what I call switching from the spherical coordinate system about one mouth to the spherical coordinate system about the other mouth in that region far from the throat (far from either mouth). And in fact it is easier to just stay with the one wormhole metric.
Hopefully you and I and everyone agree up to this point, and that I've addressed any confusions you have. Which means any problems with the next part are actual problems with the physics.
Next we steal matter from the big shell and place it around one of the mouths. If the amount of matter we moved is small compared to how far apart the mouths are, then after a time we might expect that there is a Schwarzschild type metric around the outside of the shell around mouth one, and that the flow of time near the other mouth is hardly affected at all because it is so very far away. So the time flow right near both mouths is unaffected by the shell. But now mouth one opens up in a spacetime region that is inside a shell of matter. If you placed that shell far enough from the throat then out there the metric of the wormhole looked almost flat spherical with time ticking at a normal constant rate.
So imagine you start right inside the shell and move out. You move in an approximately Schwarzschild metric so you use Schwarzschild coordinates, originally your coordinate time ticked much faster than your proper time. But eventually get so far away that things are pretty flat and proper time here now ticks much like coordinate time. Then you switch to the spherical coordinate system for the other mouth, and the rate of time ticking is also in line with your coordinate time. Then before you enter the mouth of the wormhole you switch to the wormhole coordinate system and now you clock ticks at a rate of $dT=e^{\phi_+}dt=e^{\phi_-}dt$ where $dt$ is the rate of coordinate time. You then traverse the throat using the metric:
$$ds^2=-e^{2\phi(l)}dt^2+dl^2+dr^2(l)\left[d \theta^2+\sin^2\theta d\varphi^2\right],$$
all the way through the throat of the wormhole, eventually getting to a region of large $l$ where again your proper time ticks at $dT=e^{\phi_+}dt=e^{\phi_-}dt$ where $dt$ is the rate of coordinate time. Then you switch to the original spherical coordinates around that mouth where proper time and coordinate time tick together. You go out some more, get to the shell and then switch back to the original Schwarzschild coordinate system, where proper time now ticks much slower than coordinate time.
You can place an observer at each mouth at the same wormhole coordinate time, and their clocks can tick the same number of times between two wormhole coordinate times (if $\phi_+=\phi_-$ and if $|l|$ is large enough that $\phi(l)\approx \phi_+$). But the clock by the empty wormhole ticks at the same rate as the Schwarzschild coordinate time, and the clock by the other wormhole ticks at a rate slower than the Schwarzschild coordinate time. When you wait long enough for this difference to build up you get your time machine. This example is more complicated than Visser's example since he just asserted an identification for a time origin, whereas we started with a wormhole that had synchronized ends and then we over time built up the shell, so it starts to turn into a time machine as we move the matter and we wait, but it doesn't happen all once at one easily labelled instant.
So now I'll show that a time machine is formed.
For concreteness and simplicity, first fix a length $R$, then find the parameter $l=W$ in the wormhole coordinates where $R=|r(\pm W)|$. We wanted the asymptotic $\phi$ to be equal, we'll choose that $\phi_+=\phi_-=0$ and that $\phi(\pm W)\approx 0$, $|r(\pm W)|\approx W$ and that these would have only gotten closer as $|l|$ got even bigger than $W$ if they had connected two different universe. This is what we will call the throat, and these surfaces have a surface area of $\approx 4\pi R^2$. Next we collect enough matter that it would form a Schwarzschild radius of $R$, but instead we position at a spherical shell of surface area $4\pi(10R)^2$, so there is a $9R$ proper distance between it and the $l=+W$ end of the throat. We set up the space time to have a proper distance of $10^9R$ between the shell and the $l=-W$ other end of the wormhole throat.
Now, we hang out in the flat space between the shell and the $l=+W$ throat, right near the shell. And just like you can yell at a canyon and hear the echo, you decide to send a message to yourself every second, by sending an absolutely perfect light pulse straight at the throat so that it comes out the other end of the wormhole, travels the long way through space all the way to the shell and through a small hole in the shell you placed. By a perfect pulse we just mean following a lightlike geodesic. The same effect will happen if you send the messages at sub lightspeed, but this just makes the math easier.
So you send these messages once a second, and you might have to wait a while to get them at first. You can use the wormhole coordinate system since you are inside the shell, and so your proper time is the coordinate time ($\phi_+=0$), say you send it at $t=0$. So it is wormhole coordinate time $t=9R/c+\Delta t_1= 9R/c+\int_{-W}^{+W}e^{\phi(l)}dl$ when it arrives at the other mouth (the $l=-W$ mouth), just like in 18.42 of page 241 of Visser. Over there, the wormhole coordinate time ticks at the same rate as the Schwarzschild coordinate time. This means if you are sending at one a second at your end, they are arriving a proper distance of $10^9R$ from the shell at a rate of one every Schwarzschild coordinate second. And they have $10^9R$ distance to travel, so we can compute the Schwarzschild coordinate time it take to get to the shell, by computing $\Delta t_2= \int_{10R}^{10^9R+10R}(1-R/r)^{-1})dr/c=(10^9R+Rln(\frac{10^9+9}{9}))/c$. Then it goes through the shell and gets to you.
There is something going on here that didn't go on before we made the shell. Before we made the shell we could do the same thing, send them once a second, then after $\Delta t_1$ wormhole coordinate, but since $\phi_+=\phi_-$ they would start coming out the other end at a rate of once per Wormhole coordinate second once they started coming out, and then once they travelled to you, they would continue to arrive once a second.
But after the shell we have a different story. They go into the wormhole at a rate of once per wormhole coordinate second. Since $\phi_+=\phi_-$ they come out at a rate of once per wormhole coordinate second (once they start to come out). But this is almost exactly equal to the Schwarzschild coordinate time a distance $10^9R$ from the mass (which is on a surface of surface area $4\pi (10R)^2$). And so they appear outside of the wormhole at a rate of almost once per Schwarzschild coordinate time $\sqrt{1-1/(10^9+10)}=\approx 1$. But that means once they reach the shell they start to arrive to someone just inside the shell at a rate of $\sqrt{1-1R/10R}$ seconds apart. So they arrive every $\sqrt{1-1/10}$ seconds apart but are sent every second. They are arriving faster than they are sent. The first messages could be boring, like just sending the number 1, then the number 2, then the number 3, and send a bunch before you start to get them, but once you start to get them too, and you notice that they are coming faster than you send them, you might choose to start sending yesterday's lotto numbers instead. There is a finite backlog of the old boring messages you already sent to work through and a finite number of messages in the (mental construct) queue of things you've sent but haven't gotten, but you reduce that queue relentlessly since they arrive faster than you send them. Eventually you start to get messages you haven't sent yet. That's your proof that you have a time machine.
For the numbers, if you measure the time $$9R/c+\Delta t_1= 9R/c+\int_{-W}^{+W}e^{\phi(l)}dl(10^9R+Rln(\frac{10^9+9}{9}))/c$$ in seconds, it tells you numerically how many messages you sent before you got the first one back. Then you can switch to sending lottery numbers (or flipping a coin and giving the coin toss a unique id number and recording the outcome). And your queue gets exhausted in 18.48683298 ($1/(1/\sqrt{1-(R/10R)}-1)$) times as long as it took you to start getting your messages back.
You don't have to have discontinuities to make a time machine out of a wormhole. And you don't have to move a wormhole to make a time machine out of it. And you can even start with a wormhole where both the ends age at the same rate, and still make a time machine if you have some ordinary positive energy density matter to hand, and a whole lot of time to wait for it to get sufficiently out of sync.
And I tried to use all the required math, and explain the math I thought you might misunderstand.
I think the GR case is simplest because the maths in Visser's book is straightforward, and if there is no temporal discontinuity, the equations (which I have no problem with) say no time machine is created by putting a wormhole mouth into a strong gravitational potential.
I could never figure out why you said that.
The appropriate answer to the GR part of the question is therefore to show how a time machine arises in the absence of a temporal discontinuity.
Done, build a shell around one of the ends, wait a long time, you have a time machine.
I'm saying, "My application of the expert's (Visser's) math says it doesn't - how is my application in error?"
I can't tell why you think Visser's math doesn't allow you to make a time machine, but hopefully my example is more clear since it restricts itself to describing actual actions, and using multiple coordinate systems and only using each coordinate systems locally.
-
Quick acknowledgement: I'm sorry I missed this earlier; that is very extensive and much appreciated. More later. NB since the main discussion was 4 years ago things have moved on, following further research etc.: I have written it up in Phys Rev D style. I will recheck what I have written in the light of your comments - if there is still a gap to bridge, would you be willing to take a look at said "paper"? – Julian Moore May 13 at 9:30
|
2015-08-29 14:47:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7401043772697449, "perplexity": 482.3392981909235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064503.53/warc/CC-MAIN-20150827025424-00294-ip-10-171-96-226.ec2.internal.warc.gz"}
|
http://back2cloud.com/percentage-error/percent-error-calculation-wiki.php
|
Home > Percentage Error > Percent Error Calculation Wiki
# Percent Error Calculation Wiki
## Contents
Last but not least, for intermittent demand patterns none of the above are really useful. Contents 1 Importance of forecasts 2 Calculating the accuracy of supply chain forecasts 3 Calculating forecast error 4 See also 5 References Importance of forecasts Understanding and predicting customer demand is Contents 1 Definitions 2 Formulae 3 Percent error 4 Percentage change 4.1 Example of percentages of percentages 5 Other change units 6 Examples 6.1 Comparisons 7 See also 8 Notes 9 See also Percentage error Mean absolute percentage error Mean squared error Mean squared prediction error Minimum mean-square error Squared deviations Peak signal-to-noise ratio Root mean square deviation Errors and residuals in have a peek at these guys
Please help improve this article by adding citations to reliable sources. Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. Another interesting option is the weighted M A P E = ∑ ( w ⋅ | A − F | ) ∑ ( w ⋅ A ) {\displaystyle MAPE={\frac {\sum (w\cdot ISBN1-86152-803-5. https://en.wikipedia.org/wiki/Mean_absolute_percentage_error
## Relative Error
There are two features of relative error that should be kept in mind. This is the same as dividing the sum of the absolute deviations by the total sales of all products. When it halves again, it is a -69cNp change (a decrease.) Examples Comparisons Car M costs $50,000 and car L costs$40,000. Retrieved 2010-05-05. "Percent Difference – Percent Error" (PDF).
D.; Torrie, James H. (1960). This calculation ∑ ( | A − F | ) ∑ A {\displaystyle \sum {(|A-F|)} \over \sum {A}} , where A {\displaystyle A} is the actual value and F {\displaystyle F} However a percentage error between 0% and 100% is much easier to interpret. Mape Calculation Example Percentage Error Definition The percentage error, also known as percent error, is a measure of how innaccurate a measurement is, standardized to how large the measurement is.
North Carolina State University. 2008-08-20. Mean Percentage Error It is the relative error expressed in terms of per 100. We wish to compare these costs.[3] With respect to car L, the absolute difference is $10,000 =$50,000 - $40,000. the number of variables in the regression equation). Statistically MAPE is defined as the average of percentage errors. Mean Absolute Percentage Error Excel The percentage error It is the difference between the true value and the estimate divided by the true value and the result is multiplied by 100 to make it a percentage. Cambridge: Cambridge University Press. a scale which has a true meaningful zero), otherwise it would be sensitive to the measurement units . ## Mean Percentage Error Issues While MAPE is one of the most popular measures for forecasting error, there are many studies on shortcomings and misleading results from MAPE.[3] First the measure is not defined when https://en.wikipedia.org/wiki/Mean_absolute_percentage_error The probability distributions of the numerator and the denominator separately depend on the value of the unobservable population standard deviation σ, but σ appears in both the numerator and the denominator Relative Error Concretely, in a linear regression where the errors are identically distributed, the variability of residuals of inputs in the middle of the domain will be higher than the variability of residuals Percent Difference Formula Then we have: The difference between the height of each man in the sample and the unobservable population mean is a statistical error, whereas The difference between the height of each The earliest reference to similar formula appears to be Armstrong (1985, p.348) where it is called "adjusted MAPE" and is defined without the absolute values in denominator. More about the author However, a terminological difference arises in the expression mean squared error (MSE). Residuals and Influence in Regression. (Repr. ISBN9780471879572. Absolute Error The limits of these deviations from the specified values are known as limiting errors or guarantee errors.[2] See also Accepted and experimental value Relative difference Uncertainty Experimental uncertainty analysis Propagation of The relative error is calculated as the absolute error divided by the magnitude of the exact value. For example, if the mean height in a population of 21-year-old men is 1.75 meters, and one randomly chosen man is 1.80 meters tall, then the "error" is 0.05 meters; if check my blog Then the F value can be calculated by divided MS(model) by MS(error), and we can then determine significance (which is why you want the mean squares to begin with.).[2] However, because This alternative is still being used for measuring the performance of models that forecast spot electricity prices.[2] Note that this is the same as dividing the sum of absolute differences by Weighted Mape Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. When this occurs, the term relative change (with respect to the reference value) is used and otherwise the term relative difference is preferred. ## Operations Management: A Supply Chain Approach. As an alternative, each actual value (At) of the series in the original formula can be replaced by the average of all actual values (Āt) of that series. The approximation error in some data is the discrepancy between an exact value and some approximation to it. Please help to improve this article by introducing more precise citations. (September 2016) (Learn how and when to remove this template message) Part of a series on Statistics Regression analysis Models Wmape Uses of relative error The relative error is often used to compare approximations of numbers of widely differing size; for example, approximating the number 1,000 with an absolute error of 3 Van Loan (1996). Instruments In most indicating instruments, the accuracy is guaranteed to a certain percentage of full-scale reading. Example of percentages of percentages If a bank were to raise the interest rate on a savings account from 3% to 4%, the statement that "the interest rate was increased by http://back2cloud.com/percentage-error/percentage-error-calculation-of-equipment.php The formula for the mean percentage error is MPE = 100 % n ∑ t = 1 n a t − f t a t {\displaystyle {\text{MPE}}={\frac {100\%}{n}}\sum _{t=1}^{n}{\frac {a_{t}-f_{t}}{a_{t}}}} where Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization. To fix this problem we alter the definition of relative change so that it works correctly for all nonzero values of xreference: Relative change ( x , x reference ) = The approximation error is the gap between the curves, and it increases for x values further from 0. In univariate distributions If we assume a normally distributed population with mean μ and standard deviation σ, and choose individuals independently, then we have X 1 , … , X n The difference between At and Ft is divided by the Actual value At again. By using this site, you agree to the Terms of Use and Privacy Policy. When used in constructing forecasting models the resulting prediction corresponds to the geometric mean (Tofallis, 2015). If that sum of squares is divided by n, the number of observations, the result is the mean of the squared residuals. For this same case, when the temperature is given in Kelvin, the same 1° absolute error with the same true value of 275.15 K gives a relative error of 3.63×10−3 and Although the concept of MAPE sounds very simple and convincing, it has major drawbacks in practical application [1] It cannot be used if there are zero values (which sometimes happens for By multiplying these ratios by 100 they can be expressed as percentages so the terms percentage change, percent(age) difference, or relative percentage difference are also commonly used. Thus, if an experimental value is less than the theoretical value, the percent error will be negative. Please help improve this article by adding citations to reliable sources. The relative difference, −$ 10 , 000 $50 , 000 = − 0.20 = − 20 % {\displaystyle {\frac {-\$10,000}{\\$50,000}}=-0.20=-20\%} is also negative since car L costs 20% Likewise, the sum of absolute errors (SAE) refers to the sum of the absolute values of the residuals, which is minimized in the least absolute deviations approach to regression. value; the value that x is being compared to) then Δ is called their actual change.
It usually expresses accuracy as a percentage, and is defined by the formula: M = 100 n ∑ t = 1 n | A t − F t A t | The error (or disturbance) of an observed value is the deviation of the observed value from the (unobservable) true value of a quantity of interest (for example, a population mean), and
|
2017-11-23 16:51:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9378142356872559, "perplexity": 1634.825102932771}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806844.51/warc/CC-MAIN-20171123161612-20171123181612-00420.warc.gz"}
|
http://mathoverflow.net/questions/125900/constructing-a-linear-ode-for-a-product-of-two-holonomic-functions-without-intro
|
# Constructing a linear ODE for a product of two holonomic functions without introducing additional singularities
A function $f$ is called holonomic if it satisfies some linear differential equation with polynomial coefficients $$p_n(x) f^{(n)}(x)+\dots+p_1(x)f'(x)+p_0(x)f(x)=0.$$ Now if $f,g$ are holonomic then so are their sum and product. To obtain a differential equation for $h=fg$, first observe that $$h^{(k)} = \sum_{i=0}^k {k\choose i} f^{(i)} g^{(k-i)}.$$ Since each $f^{(i)}$ is a linear combintation of $f,f',\dots,f^{(n-1)}$ with rational coefficients (where $n$ is the order of the ODE satisfied by $f$), and analogously each $g^{(i)}$ is a linear combination of $g,g',\dots,g^{(m-1)}$, then $h,h',h'',\dots$ span a finite-dimensional vector space of dimension at most $d=mn$, and hence there exists a nontrivial relation of the form $$r_d(x)h^{(d)}+\dots+r_1(x)h'+r_0(x)h=0$$with $r_i(x)$ rational functions.
Now suppose that the ODE which $f$ satisfies had singular points $u_1,\dots,u_n$, while the equation of $g$ had singular points $w_1,\dots,w_m$. The ODE for $h$ might have additional singular points besides $u_1,\dots,u_n,w_1,\dots,w_m$. For example, if $$(x-a)f''(x)+cf(x)=0\\\\ (x-b)g''(x)+dg(x)=0$$ then $h=fg$ satisfies an ODE of order 4 with leading coefficient $$(x-a)^3 (x-b)^3 \biggl((c-d)x-(bc-ad)\biggr)h^{(4)}+\dots$$ (I calculated this using C.Mallinger's GeneratingFunctions Mathematica package).
Is it possible to construct a holonomic ODE for $h$ (in the above example and in general) without introducing additional singular points?
-
Hi Dima,
The operator you obtain by the algorithm you describe has minimal possible order. You pay for the minimality of the order by having (in general) a nonminimal degree of the polynomial coefficients. You can turn the minimal-order operator into a minimal-degree operator if you are willing to pay the price of a higher order. There is in general no operator which has both order and degree as small as possible.
As long as you are only interested in the singularities, i.e., you are only concerned about avoiding extra factors in the leading coefficient, you can use a desingularization algorithm to turn the minimal order operator into one which has no unnecessary singular points. For differential operators, this is a classical technique, explained for example in the ODE book of Ince (Section 16.4). For recurrence operators, there is an algorithm by Abramov and van Hoeij (http://www.math.fsu.edu/~hoeij/papers/issac99/new.pdf).
If you are interested in minimizing the degree not only of the leading coefficient, but of all the polynomial coefficients in the operator simultaneously, see my joint paper with Chen, Jaroschek, and Singer, to appear on this year's ISSAC (http://www.risc.uni-linz.ac.at/people/mkauers/publications/jaroschek13.pdf).
Best regards, Manuel
-
Thanks Manuel! That kind of things is exactly what I needed. – dima Apr 4 '13 at 11:19
|
2014-04-19 12:46:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9530549645423889, "perplexity": 221.63762674188527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/calculus/85988-vector-field.html
|
1. ## Vector field
Calculate the circulation of the vector field $F(x,y) = (x,xy)$ around the closed curve $x^2+y^2 = 25$ situated above the axis $OX$
2. we have talked before if F is a force field the line integral is the Work done
If F is a velocity field then the line integral is the circulation
so just compute the line integral of F over C
Her C consists of 2 parts the circle and the line segment along the x axis from -5 to 5
But of course you have Green's thm in your tool box as well
3. Originally Posted by Calculus26
we have talked before if F is a force field the line integral is the Work done
If F is a velocity field then the line integral is the circulation
so just compute the line integral of F over C
Her C consists of 2 parts the circle and the line segment along the x axis from -5 to 5
But of course you have Green's thm in your tool box as well
Above the axis ox:
$\int_0^{ \pi} (-costsent + cos^2tsent)dt = - \frac{2}{3}$
Correct ?
4. Not quite
x= 5cos(t) y = 5sin(t)
dr/dt = -5*sin(t) +5*cos(t)
F = 5cos(t)i +25cos(t)sin(t)
go from here
5. Originally Posted by Calculus26
Not quite
x= 5cos(t) y = 5sin(t)
dr/dt = -5*sin(t) +5*cos(t)
F = 5cos(t)i +25cos(t)sin(t)
go from here
$\int_0^{ \pi} (-25costsint + 125 cos^2tsint)dt = \frac{250}{3}$
It is correct ?
6. Ok--now you are correct----Again I've mentioned this before--slow down
|
2016-08-31 16:20:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8844112753868103, "perplexity": 1311.5883774031786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982295966.49/warc/CC-MAIN-20160823195815-00068-ip-10-153-172-175.ec2.internal.warc.gz"}
|
https://docs.q-ctrl.com/wiki/filter-functions
|
# Filter functions
The filter function is a computational heuristic employed to simply calculate the sensitivity of a control solution to a time-dependent noise channel expressed in the Fourier domain.
## Definition
The filter function is derived based on the measure for robust control, and is therefore understood as a heuristic for robustness. Let $H_{c}(t)$ be a control Hamiltonian, implementing a control solution over duration $\tau$, on a system defined on a Hilbert space $\mathcal{H}$. Let $P$ denote the projection matrix defining a subspace of $\mathcal{H}$ on which robustness is to be evalulated.
The filter function associated with the $k$th noise channel with respect to $P$ is defined
\begin{align} F_{k}(\omega) \equiv \frac{1}{\text{Tr}\left(P\right)} \text{Tr}\left( P\left[ \mathscr{F} \left\{\tilde{N}'_{k}(t)\right\} \mathscr{F} \left\{\tilde{N}'_{k}(t)\right\}^\dagger \right]P \right). \end{align}
Here $\mathscr{F}$ denotes the Fourier transform
\begin{align} \mathscr{F}\left\{\tilde{N}'_{k}(t)\right\} \equiv \int_{-\infty}^{\infty}dt e^{-i\omega t} \tilde{N}'_{k}(t) \end{align}
implemented element-wise on the time-dependent matrix $\tilde{N}’_{k}(t)$ defined by
\begin{align} \tilde{N}'_{k}(t) = \tilde{N}_{k}(t)- \frac{\text{Tr}\left(P\tilde{N}_{k}(t)P\right)}{\text{Tr}\left(P\right)}\mathbb{I} \hspace{1cm} \text{for} \hspace{1cm} t\in[0,\tau]\ \end{align}
and where $$\tilde{N}'_{k}(t)=0$$ at all other times. The (unprimed) matrix $\tilde{N}_{k}(t)$ is the dynamic generator (in the toggling frame) for the $k$th noise channel, defined by the similarity transform
\begin{align} \tilde{N}_{k}(t) \equiv {U_{c}}^\dagger(t)N_{k} (t)U_{c}(t) \end{align}
where $N_{k} (t)$ is the noise-axis operators for the $k$th noise channel in the lab frame, and $U_{c}(t)$ is the unitary evolution operator for the control Hamiltonian $H_{c}(t)$.
Let $p_{l}$ be the $l$th diagonal element of $P$, then the filter function may be re-expressed in the more useful computational form
\begin{align} F_{k}(\omega) = \frac{1}{\text{Tr}\left(P\right)} \sum_{l=1}^D p_{l} \sum_{q=1}^{D} \left| \mathscr{F} \left\{ \tilde{N}'_{k}(t) \right\}_{lq} \right|^2 \end{align}.
That is, take the Fourier transform of each matrix element of the time-dependent operator $\tilde{N}’_{k}(t),$ sum the complex modulus square of every element, weighted by the diagonal elements $p_{l}$, and divide through by $\text{Tr}(P)$, the dimension of the quantum system subspace. This computational algorithm is protected by Provisional Patent Application #2018902650.
## First-order infidelity
The filter function may be calculated in this framework for arbitrary single and multi-qubit controls in order to capture the effect of time-varying noise on the control operation. This is an efficient and effective formalism for evaluating robustness of a given control solution. In particular the contribution to robustness infidelity due to the $k$th noise channel is approximated, to first order, as the overlap integral of a noise power spectrum and the associated filter function. Specifically
\begin{align} \mathcal{O}_{k} &= \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{k}(\omega) F_{k}(\omega) d\omega \\ \mathcal{I}_{robust} &= 1 - \mathcal{F}_{robust} \approx \frac{1}{2} \left(1 - \exp\left(-2 \sum_{k=1}^{p} \mathcal{O}_k\right)\right) \end{align}
The below image shows how small values of the filter function lead to a net reduction in noise-induced error from a given noise channel and for a given quantum operation.
## Derivation
This derivation of the filter function is summarized below. Let $\mathcal{H}$ be a $D$-dimensional Hilbert space for some quantum system. We write the total hamiltonian
\begin{align} H_{tot}(t)=H_{c}(t)+ H_{n}(t) \end{align}
as the sum of control $(c)$ and noise $(n)$ components
\begin{align} H_{c}(t) &= \sum_{j=1}^{n}\alpha_{j}(t)C_{j},\\ \\ H_{n}(t) &= \sum_{k=1}^{p}\beta_{k}(t)N_{k} (t). \end{align}
The control Hamiltonian, $H_{c}(t)$, captures target evolution associated with the control dynamics generated by $n$ participating control operators, $C_{j}\in\mathcal{H}$. The noise Hamiltonian, $H_{n}(t)$, captures interactions with $p$ independent noise channels. Distortions in the target evolution due to uncontrolled noisy dynamics are captured by the noise operators, $N_{k} \in\mathcal{H}$, where the noise fields $\beta_{k}(t)$ are assumed to be a classical zero-mean wide-sense stationary processes with associated noise power spectral densities, $S_{k}(\omega)$.
The fidelity of target operations generated by $H_{c}(t)$ is therefore reduced by interactions captured by $H_{n}(t)$. To compute the resulting average infidelity we move to a frame rotating with the control Hamiltonian, the so-called toggling frame. In this frame noise Hamiltonian responsible for the errors takes the form
\begin{align} %\text{\emph{toggling-frame noise Hamiltonian}:} %\hspace{1cm} \tilde{H}_{n}(t) \equiv {U_{c}}^\dagger(t)H_{n}(t)U_{c}(t) = \sum_{k=1}^{p}\beta_{k}(t)\tilde{N}_{k}(t) \end{align}
where the noise-axis operators in the toggling-frame are defined by
\begin{align} \tilde{N}_{k}(t) \equiv {U_{c}}^\dagger(t)N_{k} (t)U_{c}(t). \end{align}
$\tilde{H}_{n}(t)$ then satisfies the Schrödinger equation
\begin{align} i\frac{d}{dt}\tilde{U}_{n}(t) &= \tilde{H}_{n}(t)\tilde{U}_{n}(t) \hspace{1cm} \text{where} \hspace{1cm} \tilde{U}_{n} = {U_{c}}^\dagger U_{tot}, %i\frac{d}{dt}\Un(t,0) &= H_{n}(t)\Un(t,0), \end{align}
and the infidelity measured by the measure for robust control takes the form
\begin{align} \mathcal{I}_{robust} = 1 -\left\langle\left|\frac{\text{Tr}\left(P \tilde{U}_{n}(\tau) P\right)}{\text{Tr}\left(P\right)} \right|^2\right\rangle. \end{align}
This is generally challenging to compute, requiring approximation methods. To achieve this we generalize the framework developed by Green et al. and focus on computational simplicity and extensibility to higher dimensions. In this framework the error contributed by the noise channels over the duration of the control is approximated, to first order, via a truncated Magnus expansion. Each noise channel then contributes a term to the average infidelity in the spectral domain, expressed as an overlap integral between the noise power spectrum and an appropriate filter function, $F_{k}(\omega)$. Explicitly, the infidelity measure for robust control, averaged over noise realizations, is approximated to first order as
\begin{align} \mathcal{I}_{robust} = 1 - \mathcal{F}_{robust} & \approx \sum_{k=1}^{p} \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{k}(\omega) F_{k}(\omega) d\omega \end{align}
## Exceptions
Filter functions as defined above are used throughout the Q-CTRL package to evaluate the performance of controlled quantum systems. Mølmer-Sørensen control drives are an exception. In this case filter functions are computed by performing Fourier analysis on the displacement operator
\begin{aligned} U_\text{spin-mode}(t) \equiv \exp \Bigg( \sum_{\mu=1}^{N} \hat{S}^{(\mu)}_{x} \otimes \sum_{k=1}^{M} \left( \alpha^{(\mu)}_{k}(t) \hat{a}^\dagger_{k} - \left( \alpha^{(\mu)}_{k}(t)\right)^{*} \hat{a}_{k} \right) \Bigg) \end{aligned}.
describing spin-mode coupling. In this context, the filter function captures the robustness of the given control in achieving the ideal decoupling condition
\begin{aligned} &\text{spin-mode decoupling:} && && U_\text{spin-mode}(\tau) &&=&& \mathbb{I} \end{aligned}
in the presence of amplitude or detuning noise.
|
2021-05-07 19:06:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 13, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9931849837303162, "perplexity": 1841.5717544977015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988802.93/warc/CC-MAIN-20210507181103-20210507211103-00531.warc.gz"}
|
https://bioinformatics.stackexchange.com/questions/14635/how-to-find-adapter-sequence
|
# How to find adapter sequence
I have this GSE dataset ( GSE104279 ) (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE104279)
I want to run cutadapt, but how would I find the adapter sequence so I can run cutadapt?
The definitive answer is the sequencing institute/lab. They know what protocol/chemistry etc. was used.
If you don't have access to that a number of tools check for known adapter sequences. Run e.g. FASTQC, which will tell you the proportions of adapter sequence found. Tools like TrimGalore can also autodetect the most common adapters.
• I would say this is really a separate question, but essentially the easiest pipeline is: Download the fastq.gz, run salmon with a human reference (there are tons of tutorials for this). In R, use the tximport package to create a count matrix. Oct 26 '20 at 18:19
|
2021-09-23 03:08:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20969358086585999, "perplexity": 2497.2492063702653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057416.67/warc/CC-MAIN-20210923013955-20210923043955-00292.warc.gz"}
|
http://ieeexplore.ieee.org/xpl/tocresult.jsp?reload=true&isnumber=6728750&punumber=6509490
|
By Topic
# IEEE Transactions on Control of Network Systems
## Filter Results
Displaying Results 1 - 15 of 15
Publication Year: 2014, Page(s): C1
| PDF (444 KB)
• ### IEEE Transactions on Control of Network Systems publication information
Publication Year: 2014, Page(s): C2
| PDF (144 KB)
• ### The Inaugural Issue of the IEEE Transactions on Control of Network Systems
Publication Year: 2014, Page(s):1 - 3
| PDF (180 KB) | HTML
• ### Compositional Transient Stability Analysis of Multimachine Power Networks
Publication Year: 2014, Page(s):4 - 14
Cited by: Papers (16)
| | PDF (1833 KB) | HTML
During the normal operation of a power system, all the voltages and currents are sinusoids with a frequency of 60 Hz in America and parts of Asia or of 50 Hz in the rest of the world. Forcing all the currents and voltages to be sinusoids with the right frequency is one of the most important problems in power systems. This problem is known as the transient stability problem in the power systems lit... View full abstract»
• ### Convex Relaxation of Optimal Power Flow—Part I: Formulations and Equivalence
Publication Year: 2014, Page(s):15 - 27
Cited by: Papers (98) | Patents (1)
| | PDF (2490 KB) | HTML
This tutorial summarizes recent advances in the convex relaxation of the optimal power flow (OPF) problem, focusing on structural properties rather than algorithms. Part I presents two power flow models, formulates OPF and their relaxations in each model, and proves equivalence relationships among them. Part II presents sufficient conditions under which the convex relaxations are exact. View full abstract»
• ### Optimal Control of Scalar Conservation Laws Using Linear/Quadratic Programming: Application to Transportation Networks
Publication Year: 2014, Page(s):28 - 39
Cited by: Papers (3)
| | PDF (1775 KB) | HTML
This article presents a new optimal control framework for transportation networks in which the state is modeled by a first order scalar conservation law. Using an equivalent formulation based on a Hamilton-Jacobi (H-J) equation and the commonly used triangular fundamental diagram, we pose the problem of controlling the state of the system on a network link, in a finite horizon, as a Linear Program... View full abstract»
• ### Controllability Metrics, Limitations and Algorithms for Complex Networks
Publication Year: 2014, Page(s):40 - 52
Cited by: Papers (66)
| | PDF (1912 KB) | HTML
This paper studies the problem of controlling complex networks, i.e., the joint problem of selecting a set of control nodes and of designing a control input to steer a network to a target state. For this problem, 1) we propose a metric to quantify the difficulty of the control problem as a function of the required control energy, 2) we derive bounds based on the system dynamics (network topology a... View full abstract»
• ### Distributed Control with Low-Rank Coordination
Publication Year: 2014, Page(s):53 - 63
Cited by: Papers (9)
| | PDF (1653 KB) | HTML Media
A common approach to distributed control design is to impose sparsity constraints on the controller structure. Such constraints, however, may greatly complicate the control design procedure. This paper puts forward an alternative structure, which is not sparse yet might nevertheless be well suited for distributed control purposes. The structure appears as the optimal solution to a class of coordin... View full abstract»
• ### An $O(1/k)$ Gradient Method for Network Resource Allocation Problems
Publication Year: 2014, Page(s):64 - 73
Cited by: Papers (23)
| | PDF (1601 KB) | HTML
We present a fast distributed gradient method for a convex optimization problem with linear inequalities, with a particular focus on the network utility maximization (NUM) problem. Most existing works in the literature use (sub)gradient methods for solving the dual of this problem which can be implemented in a distributed manner. However, these (sub)gradient methods suffer from an O(1/"... View full abstract»
• ### Optimal Turn Prohibition for Deadlock Prevention in Networks With Regular Topologies
Publication Year: 2014, Page(s):74 - 85
| | PDF (2635 KB) | HTML
In this paper, we consider the problem of constructing minimal cycle-breaking connectivity preserving sets of turns for graphs that model communication networks, as a method to prevent deadlocks. Cycle-breaking provides for deadlock-free wormhole routing constrained by turns prohibited at some nodes. We present lower and upper bounds for minimal cardinalities of cycle-breaking connectivity preserv... View full abstract»
• ### Optimal Routing and Energy Allocation for Lifetime Maximization of Wireless Sensor Networks With Nonideal Batteries
Publication Year: 2014, Page(s):86 - 98
Cited by: Papers (5)
| | PDF (3106 KB) | HTML
An optimal control approach is used to solve the problem of routing in sensor networks where the goal is to maximize the network's lifetime. In our analysis, the energy sources (batteries) at nodes are not assumed to be “ideal” but rather behaving according to a dynamic energy consumption model, which captures the nonlinear behavior of actual batteries. We show that in a fixed topolo... View full abstract»
• ### Optimal Resource Allocation for Network Protection Against Spreading Processes
Publication Year: 2014, Page(s):99 - 108
Cited by: Papers (33)
| | PDF (1930 KB) | HTML
We study the problem of containing spreading processes in arbitrary directed networks by distributing protection resources throughout the nodes of the network. We consider that two types of protection resources are available: 1) preventive resources able to defend nodes against the spreading (such as vaccines in a viral infection process) and 2) corrective resources able to neutralize the spreadin... View full abstract»
• ### Message Passing Optimization of Harmonic Influence Centrality
Publication Year: 2014, Page(s):109 - 120
Cited by: Papers (6)
| | PDF (2323 KB) | HTML
This paper proposes a new measure of node centrality in social networks, the Harmonic Influence Centrality (HIC), which emerges naturally in the study of social influence over networks. Using an intuitive analogy between social and electrical networks, we introduce a distributed message passing algorithm to compute the HIC of each node. Although its design is based on theoretical results which ass... View full abstract»
• ### Collective Decision-Making in Ideal Networks: The Speed-Accuracy Tradeoff
Publication Year: 2014, Page(s):121 - 132
Cited by: Papers (14)
| | PDF (2089 KB) | HTML
We study collective decision-making in a model of human groups, with network interactions, performing two alternative choice tasks. We focus on the speed-accuracy tradeoff, i.e., the tradeoff between a quick decision and a reliable decision, for individuals in the network. We model the evidence aggregation process across the network using a coupled drift-diffusion model (DDM) and consider the free... View full abstract»
• ### IEEE Transactions on Control of Network Systems information for authors
Publication Year: 2014, Page(s):133 - 134
| PDF (139 KB)
## Aims & Scope
The IEEE Transactions on Control of Network Systems is committed to the timely publication of high-impact papers at the intersection of control systems and network science.
Full Aims & Scope
## Meet Our Editors
Editor-in-Chief
Ioannis Ch. Paschalidis
Boston University
|
2017-05-28 12:43:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28568023443222046, "perplexity": 2550.0489577087237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609817.29/warc/CC-MAIN-20170528120617-20170528140617-00319.warc.gz"}
|
https://socratic.org/questions/what-is-the-period-of-f-t-sin-t-32-cos-t-64
|
# What is the period of f(t)=sin( t / 32 )+ cos( (t)/64 ) ?
May 29, 2016
Both $\sin$ and $\cos$ are periodic with period $2 \pi$.
Then, for example, $\sin \left(t\right) + \cos \left(t\right)$ is automatically periodic of $2 \pi$ because if we substitute $t = 2 \pi$ both functions return on the initial value and so does their sum.
Now the period of the function $\sin \left(\frac{t}{32}\right)$ is $64 \pi$ because when $t = 64 \pi$ we have $\sin \left(2 \pi\right)$ that is equal to $\sin \left(0\right)$ and then the function restarts.
Applying the same concept $\cos \left(\frac{t}{64}\right)$ has the period $128 \pi$.
This means that if we take the sum, when we arrive to $64 \pi$ the $\sin$ did a full turn but the $\cos$ is still not repeating. When we are at $128 \pi$ the $\sin$ did two full turns ($4 \pi$) and the $\cos$ did its full period. So both functions are again to zero and the sum will restart the next cycle.
We are lucky that 128 is exactly the double of 64 so one period of the $\cos$ correspond to exactly two periods of $\sin$. If this is not true we have to search the least common multiple of both periods to have a period that is valid for both functions. In fact $128$ is the LCM of $128$ and $64$.
|
2021-04-15 00:13:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 25, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8819799423217773, "perplexity": 201.27534901018782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038078900.34/warc/CC-MAIN-20210414215842-20210415005842-00042.warc.gz"}
|
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Equilibria/Le_Chateliers_Principle/The_Effect_of_Changing_Conditions
|
# The Effect of Changing Conditions
This page looks at the relationship between equilibrium constants and Le Chatelier's Principle. Students often get confused about how it is possible for the position of equilibrium to change as you change the conditions of a reaction, although the equilibrium constant may remain the same.
## Changing concentrations
Equilibrium constants are not changed if you change the concentrations of things present in the equilibrium. The only thing that changes an equilibrium constant is a change of temperature. The position of equilibrium is changed if you change the concentration of something present in the mixture. According to Le Chatelier's Principle, the position of equilibrium moves in such a way as to tend to undo the change that you have made.
Suppose you have an equilibrium established between four substances A, B, C and D.
$A + 2B \rightleftharpoons C + D$
According to Le Chatelier's Principle, if you decrease the concentration of C, for example, the position of equilibrium will move to the right to increase the concentration again.
## Explanation in terms of the constancy of the equilibrium constant
The equilibrium constant, $$K_c$$ for this reaction looks like this:
$K_c = \dfrac{[C][D]}{[A][B]^2}$
If you have moved the position of the equilibrium to the right (and so increased the amount of $$C$$ and $$D$$), why hasn't the equilibrium constant increased? This is actually the wrong question to ask! We need to look at it the other way round.
Let's assume that the equilibrium constant must not change if you decrease the concentration of $$C$$ - because equilibrium constants are constant at constant temperature. Why does the position of equilibrium move as it does?
If you decrease the concentration of $$C$$, the top of the $$K_c$$ expression gets smaller. That would change the value of $$K_c$$. In order for that not to happen, the concentrations of $$C$$ and $$D$$ will have to increase again, and those of $$A$$ and $$B$$ must decrease. That happens until a new balance is reached when the value of the equilibrium constant expression reverts to what it was before. The position of equilibrium moves - not because Le Chatelier says it must - but because of the need to keep a constant value for the equilibrium constant.
If you decrease the concentration of $$C$$:
## Changing pressure
This only applies to systems involving at least one gas. Equilibrium constants are not changed if you change the pressure of the system. The only thing that changes an equilibrium constant is a change of temperature. The position of equilibrium may be changed if you change the pressure. According to Le Chatelier's Principle, the position of equilibrium moves in such a way as to tend to undo the change that you have made.
That means that if you increase the pressure, the position of equilibrium will move in such a way as to decrease the pressure again - if that is possible. It can do this by favoring the reaction which produces the fewer molecules. If there are the same number of molecules on each side of the equation, then a change of pressure makes no difference to the position of equilibrium.
### Case 1: Differing Numbers of Gaseous Species on each side of the Equation
Let's look at the same equilibrium we've used before. This one would be affected by pressure because there are three molecules on the left, but only two on the right. An increase in pressure would move the position of equilibrium to the right.
$A_{(g)} + 2B_{(g)} \rightleftharpoons C_{(g)} + D_{(g)}$
Because this is an all-gas equilibrium, it is much easier to use $$K_p$$:
$K_p = \dfrac{P_c\;P_D}{P_A\;P_B^2} \label{EqC1}$
Once again, it is easy to suppose that, because the position of equilibrium will move to the right if you increase the pressure, $$K_p$$ will increase as well. Not so! To understand why, you need to modify the $$K_p$$ expr ession. Remember the relationship between partial pressure, mole fraction and total pressure?
$P_A = (\text{mole fraction of A} )( \text{total pressure})$
$P_A = \chi_A P_{tot}$
Replacing all the partial pressure terms in $$\ref{EqC1}$$ by mole fractions$$\chi_A$$ and total pressure ($$P_{tot}$$) g ives you this:
$K_p \dfrac{(\chi_C P_{tot} )( \chi_D P_{tot} )}{(\chi_A P_{tot} )(\chi_B P_{tot})^2}$
Most of the "P"s cancel out, with one left at the bottom of the expression.
$K_p \dfrac{\chi_C \chi_D}{\chi_A\chi_B^2 P_{tot}}$
Now, remember that $$K_p$$ has got to stay constant because the temperature is unchanged. How can that happen if you increase P? To compensate, you would have to increase the terms on the top, $$\chi_C$$ and $$\chi_D$$, and decrease the terms on the bottom, $$\chi_A$$ and $$\chi_B$$.
Increasing the terms on the top means that you have increased the mole fractions of the molecules on the right-hand side. Decreasing the terms on the bottom means that you have decreased the mole fractions of the molecules on the left. That is another way of saying that the position of equilibrium has moved to the right - exactly what Le Chatelier's Principle predicts. The position of equilibrium moves so that the value of $$K_p$$ is kept constant.
### Case 2: Same Numbers of Gaseous Species on each side of the Equation
There are the same numbers of molecules on each side of the equation. In this case, the position of equilibrium is not affected by a change of pressure. Why not?
$A_{(g)} + B_{(g)} \rightleftharpoons C_{(g)} + D_{(g)}$
Let's go through the same process as in Case 1:
$K_p = \dfrac{ P_C P_D}{P_A P_B}$
Substituting mole fractions and total pressure:
$K_p = \dfrac{ (\chi_C P_{tot}) (\chi_D P_{tot}) }{ (\chi_A P_{tot}) (\chi_B P_{tot})}$
Cancelling out as far as possible:
$K_p = \dfrac{ \chi_C \chi_D }{ \chi_A \chi_B }$
There is not a single $$P_{tot}$$ left in the expression so changing the pressure makes no difference to the $$K_p$$ expression. The position of equilibrium doesn't need to move to keep $$K_p$$ constant.
## Changing temperature
Equilibrium constants are changed if you change the temperature of the system. $$K_c$$ or $$K_p$$ are constant at constant temperature, but they vary as the temperature changes. Look at the equilibrium involving hydrogen, iodine and hydrogen iodide:
$H_{2(g)} + I_{2(g)} \rightleftharpoons 2HI_{(g)} \label{EqHI}$
with $$\Delta H = -10.4\; kJ/mol$$. The $$K_p$$ expression is:
$P_p =\dfrac{P_{HI}^2}{P_{H_2}P_{(I_2)}}$
Two values for $$K_p$$ a re:
temperature Kp
500 K 160
700 K 54
You can see that as the temperature increases, the value of $$K_p$$ falls. This is typical of what happens with any equilibrium where the forward reaction is exothermic. Increasing the temperature decreases the value of the equilibrium constant. Where the forward reaction is endothermic, increasing the temperature increases the value of the equilibrium constant.
The position of equilibrium also changes if you change the temperature. According to Le Chatelier's Principle, the position of equilibrium moves in such a way as to tend to undo the change that you have made. If you increase the temperature, the position of equilibrium will move in such a way as to reduce the temperature again. It will do that by favoring the reaction which absorbs heat. In the equilibrium we've just looked at ($$\ref{EqHI}$$, that will be the back reaction because the forward reaction is exothermic.
So, according to Le Chatelier's Principle the position of equilibrium will move to the left with increasing temperature. Less hydrogen iodide will be formed, and the equilibrium mixture will contain more unreacted hydrogen and iodine. That is entirely consistent with a fall in the value of the equilibrium constant.
|
2020-10-22 06:25:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6860789656639099, "perplexity": 274.4139864888002}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878921.41/warc/CC-MAIN-20201022053410-20201022083410-00167.warc.gz"}
|
https://fat-forensics.org/generated/fatf.utils.array.validation.are_similar_dtypes.html
|
# fatf.utils.array.validation.are_similar_dtypes¶
fatf.utils.array.validation.are_similar_dtypes(dtype_a: numpy.dtype, dtype_b: numpy.dtype, strict_comparison: bool = False) → bool[source]
Checks whether two numpy dtypes are similar.
If strict_comparison is set to True the both dtypes have to be exactly the same. Otherwise, if both are either numerical or textual dtypes, they are considered similar.
Parameters
dtype_anumpy.dtype
The first dtype to be compared.
dtype_bnumpy.dtype
The second dtype to be compared.
strict_comparisonboolean, optional (default=False)
When set to True the dtypes have to match exactly. Otherwise, if both are either numerical or textual dtypes, they are considered similar.
Returns
are_similarboolean
True if both dtypes are similar, False otherwise.
Raises
TypeError
Either of the inputs is not a numpy’s dtype object.
ValueError
Either of the input dtypes is structured – this function only accepts plane dtypes.
|
2020-02-19 17:13:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1719782054424286, "perplexity": 11796.931906871854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144165.4/warc/CC-MAIN-20200219153707-20200219183707-00382.warc.gz"}
|
https://electronics.stackexchange.com/questions/346918/non-inverting-op-amp-noise-on-the-output-when-touched
|
# Non-Inverting Op-Amp noise on the output when touched
I am using a non-inverting op-amp circuit (shown below) to linearize the output of multiple analogue force sensors. This is the circuit recommended by the manufacturer in their electrical integration document.The circuit is working and I am able Toggle between sensors by driving GPIO low for active sensors and high impedance for disabled sensors. However I have observed a larger amount of noise and oscillation in the output of the opamp when I apply force to the sensor with my bare hands. This noise is almost negligible when I apply force to the sensor whist covering the sensor with an insulative material.
I have found by increasing the capacitance of C1 to from 47pF to 1uF I have been able to eliminate the noise and oscillation. (Why is this?) however this is at the cost of having to increase the time between sensor readings to the point where it is not feasible for my application. The reason I must wait between sensor readings is the large capacitor will hold the voltage of the previous sensor for some time when I toggle between sensors.I have the following questions:
1. What is the likey cause of the noise/oscillation (image show below)
2. How can I eliminate this noise?
3. What is the purpose of C1 and why does increasing C1 remove the noise/oscillation.
As with all noise problems your BEST solution is to prevent the noise from getting picked up by the circuit in the first place. As such you already have a solution, which is to electrically isolate the sensor from prying fingers.
As the doctor said when I told him.. "It hurts when I do this!"
"Well then, don't do that!"
C1 in this circuit turns it into a low pass filter. The feedback impedance will be significantly reduced for higher frequencies on the signal. As such, the larger the capacitance the less "noise" will make it through. At very high frequencies $R_{FEEDBACK}$ --> 0.
However, for a circuit like this, finger interference means you will be injecting a significant amount of low frequency, mains hum, noise. As such, the capacitor would need to be quite large to remove it.
Since you are planning on multiplexing the signal from various sensors, the effect of a large capacitance here is problematic for settling time delay as you have intimated.
As such, and if it is not possible to properly insulate and isolate the sensor from fingers and local RF noise, this is really not a good circuit to use for this application.
Moreover, using the GPIO pins directly to switch between sensors also adds an issue of inter-channel interference and a general inability to REALLY get the bottom end of the sensor hard to ground. Further, switching like that creates what can be problematic transient response issues in the op-amp.
It would be better to receive each sensor value individually, with appropriate noise filtering, and then select the appropriate output to feed to wherever the signal is intended to go.
I made a quick simulation of your system using a different opamp in LTSpice, and adding some capacitance in parallel with the FlexiForce definitely makes the system unstable. This makes sense, because the capacitance shorts the inverting input out for certain frequencies, and makes it tough for the feedback to find the right level such that the negative and positive inputs match.
The way I'd solve this is to simulate the circuit, make sure that the simulation and actual device match relatively well, and then look for a solution. The good news is that your system has a roll off around 1 kHz, which is a pretty low frequency, and the noise tends to be high frequencies (I'm seeing 100 kHz with a 1u touch cap), so I would think you could add extra filtering to get rid of the high frequency noise.
You may can raising the feedback cap and lower the resistor, to increase the stability and keep the bandwidth the same. This will reduce the gain, but you can boost it back with a second stage. You may just want to start with a very stable unity gain buffer, then add gain. This is slightly noisier, but very robust.
|
2019-10-14 20:39:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43631279468536377, "perplexity": 741.6892686579444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986655310.17/warc/CC-MAIN-20191014200522-20191014224022-00106.warc.gz"}
|
https://itectec.com/superuser/linux-where-should-i-export-an-environment-variable-so-that-all-combinations-of-bash-dash-interactive-non-interactive-login-non-login-will-pick-it-up/
|
# Linux – Where should I export an environment variable so that all combinations of bash/dash, interactive/non-interactive, login/non-login, will pick it up
bashlinuxubuntu 12.04
Here's the motivation for the question:
I'm using Ubuntu 12.04 LTS 2 with the Unity desktop. In my .bashrc file, I append several directories to my PATH variable and define a few environment variables, such as JAVA_HOME. When I launch applications from a terminal (running bash, my default shell), this works great, but for several of the shortcuts that use the Unity launcher, they run apps that seem to be defined to use #!/bin/sh, which is aliased to /bin/dash, and they don't pick up the contents of either ~/.bashrc or ~/.profile.
I suppose I could change all of these shortcuts to use /bin/bash instead of /bin/sh to force it to pick up the .bashrc changes, but that seems really hacky.
Given that Ubuntu 12.04 (by default) aliases /bin/sh to /bin/dash and that my default shell is /bin/bash, is there a single place where I can choose to modify the PATH and define environment variables if I want them to be present under all of these circumstances:
1. Whenever I create a non-login bash shell (using the terminal in unity)
2. Whenever I create a login bash shell (for example, logging in remotely over ssh)
3. Whenever I use a Unity application launcher (given that the launcher uses /bin/sh).
4. Whenever a cron job executes (given that SHELL=/bin/sh in /etc/crontab).
If I understand correctly, I'm guessing that:
• (1)/(2) and (3)/(4) are different because (1)/(2) are bash and (3)/(4) are dash.
• (1) and (2) are different because the files that bash chooses to load differs depending on whether or not it is a login shell.
• (3) and (4) are different because (3) will come at some point after I've logged in (and hence ~/.profile will have been sourced by one of its parent processes, while (4) will come at some point when I'm not logged in, and hence ~/.profile will not have been read.
(I wouldn't be surprised if other factors matter, too, such as whether or not the shell is interactive, so there are probably more combinations that I haven't even anticipated…I'm happy to have my question "improved" in that case.)
I would expect that at some point, someone must have made some sort of guide that tells you how/where to modify environment variables in a shell-independent way (or at least a dash/bash compatible way)…I just can't seem to find the right search terms to locate such a guide.
Solutions or pointers to solutions greatly appreciated!
Updated:
• Clarification: This is the default Ubuntu user created by the 12.04 installation process, so nothing fancy. It does have a ~/.profile (that explicitly sources ~/.bashrc), and the only ~/.bash* files present are .bashrc, .bash_history, and .bash_logout…so no there's no .bash_profile.
• Emphasis on scope: I don't really care about any shells other than the default interactive shell (bash) and any script that happens to use /bin/sh (aliased to dash), so there's no need to complicate this with anything extra for tcsh/ksh/zsh/etc. support.
Shell invocation is a bit of a complicated thing. The bash and dash man pages has INVOCATION sections about this.
In summary they says (there is more detail in the man page, you should read it):
When bash is | it reads
-------------------------------|----------
login shell | /etc/profile and then the first of ~/.bash_profile, ~/.bash_login or ~/.profile that exists.
|
interactive non-login shell | /etc/bash.bashrc then ~/.bashrc
|
non-interactive shell | The contents of $BASH_ENV (if it exists) | interactive (as "sh") | The contents of$ENV (if it exists)
-
When dash is | it reads
-------------------------------|---------
login shell | /etc/profile then .profile
|
interactive shell | The contents of ENV (it it exists, can be set in .profile as well as in initial environment)
I don't know about other shells offhand as I never use any of them. Your best bet might be to set a couple environment variables to point at the common location script and manually source that (when appropriate) in the couple of cases that doesn't cover.
|
2021-10-28 11:05:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.524741530418396, "perplexity": 2697.067401696331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588284.71/warc/CC-MAIN-20211028100619-20211028130619-00366.warc.gz"}
|
http://mickopedia.org/mickify.py?topic=Visibility
|
# Visibility
In meteorology, visibility is a holy measure of the distance at which an object or light can be clearly discerned. It is reported within surface weather observations and METAR code either in meters or statute miles, dependin' upon the feckin' country. Jesus Mother of Chrisht almighty. Visibility affects all forms of traffic: roads, sailin' and aviation. Meteorological visibility refers to transparency of air: in dark, meteorological visibility is still the oul' same as in daylight for the feckin' same air, would ye swally that?
## Definition
A commercial aircraft flyin' into the clouds over Los Angeles
ICAO Annex 3 Meteorological Service for International Air Navigation contains the bleedin' followin' definitions and note:
a) the greatest distance at which a bleedin' black object of suitable dimensions, situated near the bleedin' ground, can be seen and recognized when observed against a bleedin' bright background;
b) the oul' greatest distance at which lights of 1,000 candelas can be seen and identified against an unlit background. Here's another quare one.
Note.— The two distances have different values in air of an oul' given extinction coefficient, and the oul' latter b) varies with the background illumination. C'mere til I tell yiz. The former a) is represented by the meteorological optical range (MOR).
Annex 3 also defines Runway Visual Range (RVR) as:
The range over which the pilot of an aircraft on the centre line of a bleedin' runway can see the feckin' runway surface markings or the feckin' lights delineatin' the oul' runway or identifyin' its centre line. Jaysis.
On clear days, Tel Aviv's skyline is visible from the bleedin' Carmel mountains, 80km north
In extremely clean air in Arctic or mountainous areas, the visibility can be up to 70 kilometres (43 mi) to 100 kilometres (62 mi). Would ye believe this shite? However, visibility is often reduced somewhat by air pollution and high humidity, bejaysus. Various weather stations report this as haze (dry) or mist (moist). Fog and smoke can reduce visibility to near zero, makin' drivin' extremely dangerous. Whisht now and listen to this wan. The same can happen in an oul' sandstorm in and near desert areas, or with forest fires, the hoor. Heavy rain (such as from a feckin' thunderstorm) not only causes low visibility, but the inability to brake quickly due to hydroplanin', grand so. Blizzards and ground blizzards (blowin' snow) are also defined in part by low visibility. Story?
## Derivation
To define visibility, we examine the feckin' case of a perfectly black object bein' viewed against a perfectly white background, the shitehawk. The visual contrast, CV(x), at a bleedin' distance x from the oul' black object is defined as the oul' relative difference between the feckin' light intensity of the background and the object
$C_\text{V}(x) = \frac{F_\text{B}(x) - F(x)}{F_\text{B}(x)}$
where FB(x) and F(x) are the feckin' intensities of the background and the feckin' object, respectively. Because the oul' object is assumed to be perfectly black, it must absorb all of the feckin' light incident on it. Whisht now and eist liom. Thus when x=0 (at the oul' object), F(0) = 0 and CV(0) = 1. G'wan now. Between the bleedin' object and the feckin' observer, F(x) is affected by additional light that is scattered into the oul' observer's line of sight and the bleedin' absorption of light by gases and particles. Bejaysus this is a quare tale altogether. , to be sure. Light scattered by particles outside of a particular beam may ultimately contribute to the oul' irradiance at the target, a holy phenomenon known as multiple scatterin'. C'mere til I tell ya. Unlike absorbed light, scattered light is not lost from a bleedin' system, bedad. Rather, it can change directions and contribute to other directions, bedad. It is only lost from the oul' original beam travelin' in one particular direction. Here's a quare one for ye. The multiple scatterin''s contribution to the oul' irradiance at x is modified by the feckin' individual particle scatterin' coefficient, the feckin' number concentration of particles, and the feckin' depth of the feckin' beam. The intensity change dF is the bleedin' result of these effects over a holy distance dx. Because dx is an oul' measure of the amount of suspended gases and particles, the bleedin' fraction of F that is diminished is assumed to be proportional to the oul' distance, dx. Jesus, Mary and holy Saint Joseph. The fractional reduction in F is
$dF = -b_\text{ext} F dx$
where bext is the attenuation coefficient. Here's another quare one for ye. The scatterin' of background light into the observer's line of sight can increase F over the bleedin' distance dx. This increase is defined as b' FB(x) dx, where b' is a holy constant. Jesus, Mary and Joseph. The overall change in intensity is expressed as
$dF(x) = \left[b' F_\text{b}(x) - b_\text{ext} F(x)\right] dx$
Since FB represents the background intensity, it is independent of x by definition. Jaykers! Therefore,
$dF_\text{B}(x) = 0 = \left[b' F_\text{B}(x) - b_\text{ext} F_\text{b}(x)\right] dx$
It is clear from this expression that b' must be equal to bext. Jesus, Mary and holy Saint Joseph. Thus, the bleedin' visual contrast, CV(x), obeys the oul' Beer–Lambert law
$\frac{dC_\text{V}(x)}{dx} = - b_\text{ext} C_\text{V}(x)$
which means that the oul' visual contrast decreases exponentially with the feckin' distance from the feckin' object:
$C_\text{V}(x) = \exp(- b_\text{ext} x)$
Lab experiments have determined that contrast ratios between 0.018 and 0, bejaysus. 03 are perceptible under typical daylight viewin' conditions. A contrast ratio of 2% (CV = 0.02) is usually used to calculate visual range, be the hokey! Pluggin' this value into the feckin' above equation and solvin' for x produces the bleedin' followin' visual range expression (the Koschmeider equation):
$x_\text{V} = \frac{3.912}{b_\text{ext}}$
with xV in units of length. Jaysis. At sea level, the bleedin' Rayleigh atmosphere has an extinction coefficient of approximately 13, you know yourself like. 2 × 10−6 m−1 at an oul' wavelength of 520 nm. This means that in the cleanest possible atmosphere, visibility is limited to about 296 km. Jesus, Mary and Joseph.
## Fog, mist, and haze
The international definition of fog is a holy visibility of less than 1 kilometre (3,300 ft); mist is a holy visibility of between 1 kilometre (0. Here's a quare one. 62 mi) and 2 kilometres (1. Jasus. 2 mi) and haze from 2 kilometres (1.2 mi) to 5 kilometres (3, grand so. 1 mi). Fog and mist are generally assumed to be composed principally of water droplets, haze and smoke can be of smaller particle size; this has implications for sensors such as Thermal Imagers (TI/FLIR) operatin' in the bleedin' far-IR at wavelengths of about 10 μm which are better able to penetrate haze and some smokes because their particle size is smaller than the bleedin' wavelength; the bleedin' IR radiation is therefore not significantly deflected or absorbed by the oul' particles. Soft oul' day. [citation needed]
## Very low visibility
Visibility of less than 100 metres (330 ft) are usually reported as zero. Sure this is it. In these conditions, roads may be closed, or automatic warnin' lights and signs may be activated to warn drivers. These have been put in place in certain areas that are subject to repeatedly low visibility, particularly after traffic collisions or pile-ups involvin' multiple vehicles. Jaysis.
## Low visibility warnings
In addition, an advisory is often issued by a government weather agency for low visibility, such as a dense fog advisory from the U. Jesus Mother of Chrisht almighty. S, the cute hoor. National Weather Service, so it is. These generally advise motorists to avoid travel until the bleedin' fog burns off or other conditions improve. Airport travel is also often delayed by low visibility, sometimes causin' long waits due to instrument flight rules and wider spacin' of aircraft.[citation needed]
## Visibility and air pollution
A visibility reduction is probably the bleedin' most apparent symptom of air pollution. Jaysis. Visibility degradation is caused by the feckin' absorption and scatterin' of light by particles and gases in the oul' atmosphere. Jaysis. Absorption of electromagnetic radiation by gases and particles is sometimes the cause of discolorations in the feckin' atmosphere but usually does not contribute very significantly to visibility degradation. I hope yiz are all ears now. Scatterin' by particulate, on the bleedin' other hand, impairs visibility much more readily. Whisht now and eist liom. Visibility is reduced by significant scatterin' from particles between an observer and a bleedin' distant object. The particles scatter light from the oul' sun and the rest of the bleedin' sky through the bleedin' line of sight of the observer, thereby decreasin' the bleedin' contrast between the oul' object and the feckin' background sky. Particles that are the oul' most effective at reducin' visibility (per unit aerosol mass) have diameters in the bleedin' range of 0, what? 1-1. Here's another quare one. 0 µm. The effect of air molecules on visibility is minor for short visual ranges but must be taken into account for ranges above 30 km, be the hokey!
|
2013-05-25 15:48:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5184583067893982, "perplexity": 7824.0557300335695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705957380/warc/CC-MAIN-20130516120557-00080-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://physics.stackexchange.com/tags/particle-physics/hot
|
# Tag Info
2
Beam instabilities driven by collective effects The from the beam dynamics point of view, we pretty soon encounter instabilities given by collective effects. The most relevant ones, in case of a single bunch, are the beam-beam interaction, which takes place when two bunches cross each others probing their self-fields, and the wakefields/impedance/image ...
1
The motion of a charged particle in a magnetic field is the manifestation of the fundamental relationship between magnetism and electrostatic effects. Exactly why the fundamental forces are the way they are is still beyond modern day physics which is why it is hard to give a satisfactory answer to the first part of your question. It just has to be accepted ...
1
Well, "low pressure" could be considered almost any CRT built by man, since it is very, very hard to achieve a true, absolute vacuum. Cathode rays were systematically observed in Crookes tubes at 10^-6 atm, which is definitely not a vacuum, but can also be considered "low pressure".
1
Elementary particles are classified into two groups: Bosons & Fermions. Fermions comes with two families: quarks and leptons. Leptons come with three generations (till date no fourth generation leptons observed). Same is true for quarks as well.The first generation consists of electron $e^{-}$ and electron-neutrino $\tau_{e}$. Standard way of ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
2016-05-27 04:50:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.484952449798584, "perplexity": 993.3316893631618}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276537.37/warc/CC-MAIN-20160524002116-00194-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://mattermodeling.stackexchange.com/questions/263/can-ab-initio-crystal-structure-methods-predict-the-structure-of-cuprates-from-t
|
Can ab initio crystal structure methods predict the structure of cuprates from their stoichiometry and quantify the brittleness of those materials?
Current ab initio methods may not be able to predict the electronic transport properties of cuprate superconductors but can they be used to predict their crystal structure? Furthermore those materials are typically polycrystalline and tend to brittle, can ab initio methods predict the details of those properties?
This dissertation (Esfahani, M.M.D., 2017. Novel Superconducting Phases of Materials under Pressure by Evolutionary Algorithm USPEX (Doctoral dissertation, State University of New York at Stony Brook)) seems useful.
Yes, indeed it's done by using plane-wave ultrasoft pseudo-potential technique and GGA exchange-correlation. See here: https://www.tandfonline.com/doi/pdf/10.1080/23311940.2016.1231361?needAccess=true . Particularly in this paper authors were able to calculate mechanical properties of Barium cuprate ($$\text{BaCuO}_{2}$$) superconductor and they find indeed it's brittle based on the definition of $$\frac{G}{B} > 0.5$$ where $$G$$ is shear modulus and $$B$$ is the bulk modulus both of them calculated from DFT.
|
2021-10-19 08:24:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8484557867050171, "perplexity": 1931.025737672165}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585246.50/warc/CC-MAIN-20211019074128-20211019104128-00008.warc.gz"}
|
http://wikis.controltheorypro.com/Hysteretic_Damping
|
Hysteretic Damping
Hysteretic Damping Viscous Damping Single DOF In order to prevent spam, users must register before they can edit or create articles.
## 1 Introduction to Hysteretic Damping[1]
Experiments on the damping that occurs in solid materials and structures which have been subjected to cyclic stressing have shown the damping force to be independent of frequency. This internal, or material, damping is referred to as hysteretic damping.
The viscous damping force $LaTeX: c\dot{x}$ is dependent on the frequency of oscillation. Hysteretic damping is not dependent on frequency so $LaTeX: c\dot{x}$ is not an adequate model. Hysteretic damping requires the damping force $LaTeX: c\dot{x}$ to be divided by the frequency of oscillation $LaTeX: \omega_{n}$.
## 2 Hysteretic Damping: Equation of Motion
The equation of motion is therefore
$LaTeX: m\ddot{x}+\left(\frac{c}{\omega_n}\right)\dot{x}+kx=0$
Structures under harmonic forcing experience stress that leads the strain by a constant angle, $LaTeX: \alpha$. For a harmonic strain, $LaTeX: \epsilon=\epsilon_{0}\mbox{ sin }\nu t$, where ν is the forcing frequency. The induced stress is
$LaTeX: \sigma=\sigma_{0}\mbox{ sin}\left(\nu t+\alpha \right)$
Hence
LaTeX: \begin{alignat}{2}\sigma & =\sigma_{0} \mbox{ cos} \left(\alpha\right) \mbox{ sin}\left(\nu t\right)+\sigma_{0} \mbox{ sin} \left(\alpha\right) \mbox{ cos}\left(\nu t\right) \\ & =\sigma_{0} \mbox{ cos} \left(\alpha \right) \mbox{ sin}\left(\nu t\right)+\sigma_{0} \mbox{ sin} \left(\alpha\right) \mbox{ sin}\left(\nu t+\frac{\pi}{2}\right) \end{alignat}
The first component of stress is in phase with the strain $LaTeX: \epsilon$; the second component is in quadrature with $LaTeX: \epsilon$ and $LaTeX: \frac{\pi}{2}$ ahead. Replacing $LaTeX: \frac{\pi}{2}$ with $LaTeX: j=\sqrt{-1}$ leads to,
$LaTeX: \sigma=\sigma_{0} \mbox{ cos}\left(\alpha\right) \mbox{ sin}\left(\nu t \right)+j\sigma_{0}\mbox{ sin}\left( \alpha \right) \mbox{ sin}\left(\nu t \right)$
### 2.1 Hysteretic Damping: Loss Factor
A complex modulus E* is formulated, where
LaTeX: \begin{alignat}{2}E^{*} & =\frac{\sigma}{\epsilon}=\frac{\sigma_{0}}{e_{0}} \mbox{ cos} \left(\alpha \right)+j\frac{\sigma_{0}}{\epsilon_{0}} \mbox{ sin}\left(\alpha \right) \\ & = E^{'}+jE^{''} \end{alignat}
where
$LaTeX: E^{'}$ is the in-phase or storage modulus, and
$LaTeX: E^{''}$ is the quadrature or loss modulus.
The loss factor $LaTeX: \eta$, which is a measure of the hysteretic damping in a structure, is equal to $LaTeX: \frac{E^{''}}{E^{'}}$, that is, $LaTeX: \mbox{tan}\alpha$. Typically the stiffness of a structure cannot be separated from its hysteretic damping, so these quantities are considered as a single coefficient. The complex stiffness $LaTeX: k^{*}$ is given by
$LaTeX: k^{*}=k\left(1+j\eta\right)$,
where
$LaTeX: k$ is the static stiffness and
$LaTeX: \eta$ the hysteretic damping loss factor.
### 2.2 Hysteretic Damping: Equation of Free Motion for a single DOF system
Figure 1: Hysteretic Damping for single DOF system
The equation of free motion for a single DOF system with hysteretic damping is therefore
$LaTeX: m\ddot{x}+k^{*}x=0$
Figure 1 shows a single DOF model with hysteretic damping of coefficient $LaTeX: c_{H}$.
The equation of motion is
$LaTeX: m\ddot{x}+\left(\frac{c_{H}}{\omega}\right)\dot{x}+kx=0$
Now if $LaTeX: x=Xe^{j\omega t}$,
$LaTeX: \dot{x}=j\omega x$ and $LaTeX: \left(\frac{c_{H}}{\omega}\right)\dot{x}=jc_{H}x$
Reformulating the equation of motion it becomes
$LaTeX: m\ddot{x}+\left(k+jc_{H}\right)x=0$
Since
$LaTeX: k+jc_{H}=k\left(1+\frac{jc_{H}}{k}\right)=k\left(1+j\eta\right)=k^{*}$
we can write
$LaTeX: m\ddot{x}+k^{*}x=0$
That is, the combined effect of the elastic and hysteretic resistance to motion can be represented as a complex stiffness, $LaTeX: k^{*}$.
## 3 Hysteretic Damping Loss Factors[2]
A range of values of η for some common engineering materials is given below. For more detailed information on meterial damping mechanisms and loss factors.
Table 1: Loss Factor for select materials
Material Loss Factor (η)
Aluminum-pure
0.00002-0.002
Aluminum alloy-dural
0.0004-0.001
Steel
0.001-0.008
0.008-0.014
Cast iron
0.003-0.03
Manganese copper alloy
0.05-0.1
Rubber-natural
0.1-0.3
Rubber-hard
1.0
Glass
0.0006-0.002
Concrete
0.01-0.06
## 4 Energy dissipated by Hysteretic Damping[3]
The energy dissipated per cycle by a force F acting on a system with hysteretic damping is $LaTeX: \int F dx$, where
$LaTeX: F=k^{*}x=k\left(1+j\eta\right)x$,
and x is the displacement.
For harmonic motion $LaTeX: x=X\mbox{ sin } \omega t$, so
LaTeX: \begin{alignat}F & = kX\mbox{ sin}\left(\omega t\right)+j\eta kX\mbox{ sin}\left(\omega t\right) \\ & = kX\mbox{ sin}\left(\omega t\right)+\eta kX \mbox{ cos}\left(\omega t\right)\end{alignat}. A
Now
$LaTeX: \mbox{sin}\left(\omega t\right)=\frac{x}{X}$ B.1
$LaTeX: \mbox{cos}\left(\omega t\right)=\frac{\sqrt{\left(X^2-x^2\right)}}{X}$ B.2
Substituting the Eqn. (B.1) and Eqn. (B.2) into Eqn. (A)
$LaTeX: F=kx \pm \eta k \sqrt{\left(X^2-x^2\right)}$. C
Eqn. (C) is an ellipse as shown in Figure 2. The energy dissipated is area of this ellipse.
TODO Find image
Performing the integration of Eqn. (C) we get the total energy dissipated
$LaTeX: \int F dx & =\int_{0}^{x} \left(kx \pm \eta k \sqrt{\left(X^2-x^2\right)}\right) dx = \pi X^2 \eta k$.
## 5 Notes
• Beards, C. F. 1995 Engineering Vibration Analysis with Applications to Control Systems. ISBN 034063183X
• Lazan, B. J. 1968 Damping of Materials and Members in Structural Mechanics.
### 5.1 References
1. Beards, pp. 41-43
2. Lazan
3. Beards, pp. 43-45
|
2019-07-18 00:23:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 43, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9752169251441956, "perplexity": 3469.8627574986494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525483.62/warc/CC-MAIN-20190718001934-20190718023934-00176.warc.gz"}
|