In marine geology, a guyot, also called a tablemount, is an isolated underwater volcanic mountain (seamount) with a flat top more than 200 m (660 ft) below the surface of the sea. The diameters of these flat summits can exceed 10 km (6.2 mi). Guyots are most commonly found in the Pacific Ocean, but they have been identified in all the oceans except the Arctic Ocean. They are analogous to tables (such as mesas) on land.
In marine navigation, a pelorus is a reference tool for maintaining the bearing of a vessel at sea. It is a "simplified compass" without a directive element, suitably mounted and provided with vanes to permit observation of relative bearings. The instrument was named for one Pelorus, said to have been the pilot for Hannibal, circa 203 BC.
In marine propulsion, a variable-pitch propeller is a type of propeller with blades that can be rotated around their long axis to change the blade pitch. Reversible propellers—those where the pitch can be set to negative values—can also create reverse thrust for braking or going backwards without the need to change the direction of shaft revolution. A controllable pitch propeller (CPP) can be efficient for the full range of rotational speeds and load conditions, since its pitch will be varied to absorb the maximum power that the engine is capable of producing. When fully loaded, a vessel will need more propulsion power than when empty. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
By varying the propeller blades to the optimal pitch, higher efficiency can be obtained, thus saving fuel. A vessel with a VPP can accelerate faster from a standstill and can decelerate much more effectively, making stopping quicker and safer. A CPP can also improve vessel maneuverability by directing a stronger flow of water onto the rudder. However, a fixed-pitch propeller (FPP) is both cheaper and more robust than a CPP.
Also, an FPP is typically more efficient than a CPP for a single specific rotational speed and load condition. Accordingly, vessels that normally operate at a standard speed (such as large bulk carriers, tankers and container ships) will have an FPP optimized for that speed. At the other extreme, a canal narrowboat will have an FPP for two reasons: speed is limited to 4 mph (to protect the canal bank), and the propeller needs to be robust (when encountering underwater obstacles).
Vessels with medium- or high-speed diesel or gasoline engines use a reduction gear to reduce the engine output speed to an optimal propeller speed—although large low-speed diesels, whose cruising RPM is in the 80 to 120 range, are usually direct drive with direct-reversing engines. While an FPP-equipped vessel needs either a reversing gear or a reversible engine to reverse, a CPP vessel may not. On a large ship the CPP requires a hydraulic system to control the position of the blades.
Compared to an FPP, a CPP is more efficient in reverse because the blades' leading edges remain the leading edges in reverse as well, so that the hydrodynamic cross-sectional shape is optimal for forward propulsion and satisfactory for reverse operations. In the mid-1970s, Uljanik Shipyard in Yugoslavia produced four VLCCs with CPPs – a tanker and three ore/oil carriers – each powered by two 20,000 bhp B & W diesel engines directly driving Kamewa variable-pitch propellers. Due to the high construction cost, none of these vessels ever returned a profit over their lifetimes.
For these vessels, fixed-pitch propellers would have been more appropriate. Controllable-pitch propellers are usually found on harbour or ocean-going tugs, dredgers, cruise ships, ferries, cargo vessels and larger fishing vessels. Prior to the development of CPPs, some vessels would alternate between "speed wheel" and "power wheel" propellers depending on the task. Current VPP designs can tolerate a maximum output of 44,000 kW (60,000 hp).
In maritime law, flotsam, jetsam, lagan, and derelict are specific kinds of shipwreck. The words have specific nautical meanings, with legal consequences in the law of admiralty and marine salvage. A shipwreck is defined as the remains of a ship that has been wrecked—a destroyed ship at sea, whether it has sunk or is floating on the surface of the water.
In maritime transport terms, and most commonly in sailing, jury-rigged is an adjective, a noun, and a verb. It can describe temporary, makeshift running repairs made with only the tools and materials on board, and the results of such repairs. The origin of jury-rigged and jury-rigging lies in such efforts done on boats and ships, characteristically sail-powered to begin with. Jury-rigging can be applied to any part of a ship, be it its superstructure (hull, decks), propulsion systems (mast, sails, rigging, engine, transmission, propeller), or controls (helm, rudder, centreboard, daggerboards, rigging). Similarly, after a dismasting, a replacement mast, often referred to as a jury mast (and, if necessary, yard), would be fashioned and stayed to allow a watercraft to resume making way.
In marketing and microeconomics, customer switching or consumer switching describes "customers/consumers abandoning a product or service in favor of a competitor". Assuming constant price, product or service quality, counteracting this behaviour in order to achieve maximal customer retention is the business of marketing, public relations and advertising. Brand switching—as opposed to brand loyalty—is the outcome of customer switching behaviour.
In marketing and sales, marketing collateral is a collection of media used to support the sales of a product or service. Historically, the term "collateral" specifically referred to brochures or sell sheets developed as sales support tools. These sales aids are intended to make the sales effort easier and more effective. The brand of the company usually presents itself by way of its collateral to enhance its brand through a consistent message and other media, and must use a balance of information, promotional content, and entertainment.
In marketing and the social sciences, observational research (or field research) is a social research technique that involves the direct observation of phenomena in their natural setting. This differentiates it from experimental research, in which a quasi-artificial environment is created to control for spurious factors and at least one of the variables is manipulated as part of the experiment.
In marketing jargon, product lining refers to the offering of several related products for individual sale. Unlike product bundling, where several products are combined into one group, which is then offered for sale as a unit, product lining involves offering the products for sale separately. A line can comprise related products of various sizes, types, colors, qualities, or prices.
Line depth refers to the number of subcategories under a category. Line consistency refers to how closely related the products that make up the line are. Line vulnerability refers to the percentage of sales or profits that are derived from only a few products in the line.
In comparison to product bundling, which is a strategy of offering more than one product for promotion as one combined item to create differentiation and greater value, product lining consists of selling different related products individually. The products in the product line can come in various sizes, colours, qualities or prices. For instance, the variety of coffees offered at a café is one of its product lines, and it could consist of flat whites, cappuccinos, short blacks, lattes, mochas, etc. Alternatively, a product line of juices and pastries can also be found at a café. The benefit of a successful product line is brand identification among customers, which results in customer loyalty and multiple purchases. It also increases the likelihood that customers will purchase new products the company has just added to the product line, on the strength of previous satisfying purchases.
In marketing strategy, first-mover advantage (FMA) is the competitive advantage gained by the initial ("first-moving") significant occupant of a market segment. First-mover advantage enables a company or firm to establish strong brand recognition, customer loyalty, and early purchase of resources before other competitors enter the market segment. First movers in a specific industry are almost always followed by competitors that attempt to capitalise on the first movers' success. These followers are also aiming to gain market share; however, most of the time the first-movers will already have an established market share, with a loyal customer base that allows them to maintain their market share.
In marketing, Bayesian inference allows for decision making and market research evaluation under uncertainty and with limited data.
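As a minimal sketch of this idea, a beta-binomial model can update a conversion-rate estimate as sparse campaign data arrive. The uniform Beta(1, 1) prior and the observed counts below are illustrative assumptions, not figures from the text.

```python
# Bayesian updating of a conversion rate with limited data: the posterior
# mean of a Beta(a, b) distribution after observing binomial outcomes.

def posterior_conversion(conversions, trials, prior_a=1.0, prior_b=1.0):
    """Beta-binomial update: return the posterior mean conversion rate."""
    a = prior_a + conversions             # successes update alpha
    b = prior_b + (trials - conversions)  # failures update beta
    return a / (a + b)                    # posterior mean of Beta(a, b)

# With only 20 trials, the estimate is pulled toward the uniform prior:
print(posterior_conversion(3, 20))   # 4/22, about 0.18 rather than the raw 0.15
```

With more data the posterior mean converges to the raw observed rate, which is exactly the "decision making under uncertainty and with limited data" trade-off described above.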
In marketing, a blind taste test is often used as a tool for companies to compare their brand to another brand. For example, the Pepsi Challenge is a famous taste test that has been run by Pepsi since 1975. Additionally, taste tests are sometimes used as a tool by companies to develop their brand or new products.
Blind taste tests are ideal for goods such as food or wine that are consumed directly. Researchers use blind taste tests to obtain information about customers' perceptions and preferences on the goods. Blind taste tests can be used to: track views on a product over time; assess changes or improvements made to a product; and gauge reactions to a new product.
In marketing, a company’s value proposition is the full mix of benefits or economic value which it promises to deliver to the current and future customers (i.e., a market segment) who will buy their products and/or services. It is part of a company's overall marketing strategy which differentiates its brand and fully positions it in the market. A value proposition can apply to an entire organization, or parts thereof, or customer accounts, or products or services. Creating a value proposition is a part of the overall business strategy of a company.
Kaplan and Norton note that "Strategy is based on a differentiated customer value proposition. Satisfying customers is the source of sustainable value creation." Developing a value proposition is based on a review and analysis of the benefits, costs, and value that an organization can deliver to its customers, prospective customers, and other constituent groups within and outside the organization.
It is also a positioning of value, where Value = Benefits − Cost (cost includes economic risk). A value proposition can be set out as a business or marketing statement (called a "positioning statement") which summarizes why a consumer should buy a product or use a service. A compellingly worded positioning statement has the potential to convince a prospective consumer that a particular product or service which the company offers will add more value or better solve a problem (i.e. the "pain-point") for them than other similar offerings will, thus turning them into a paying client. The positioning statement usually contains references to which sector the company is operating in, what products or services it is selling, who its target clients are, and which points differentiate it from other brands and make its product or service a superior choice for those clients.
It is usually communicated to the customers via the company's website and other advertising and marketing materials. Conversely, a customer's value proposition is the perceived subjective value, satisfaction or usefulness of a product or service (based on its differentiating features and its personal and social values for the customer) delivered to and experienced by the customer when they acquire it. It is the net positive subjective difference between the total benefits they obtain from it and the sum of monetary cost and non-monetary sacrifices (relative benefits offered by other alternative competitive products) which they have to give up in return.
However, often there is a discrepancy between what the company thinks about its value proposition and what the clients think it is. A company's value propositions can evolve, whereby values can add up over time. For example, Apple's value proposition contains a mix of three values.
Originally, in the 1980s, it communicated that its products are creative, elegant and "cool" and thus different from the status quo ("Think different"). Then in the first two decades of the 21st century, it communicated its second value of providing the customers with a reliable, smooth, hassle-free user experience within its ecosystem ("Tech that works"). In the 2020s, Apple's latest differentiating value has been the protection of its client's privacy ("Your data is safe with us").
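The value identity stated earlier, Value = Benefits − Cost (with cost including economic risk), can be sketched in a few lines. All figures here are hypothetical, chosen only to show how two offerings compare.

```python
# Net perceived value: total benefits minus total cost, where cost
# includes both the monetary price and the economic risk taken on.

def customer_value(benefits, monetary_cost, risk_cost=0.0):
    """Value = Benefits - (monetary cost + risk cost)."""
    return benefits - (monetary_cost + risk_cost)

# An offering is competitive when its net value beats the alternative's:
offer_a = customer_value(benefits=120.0, monetary_cost=80.0, risk_cost=10.0)
offer_b = customer_value(benefits=100.0, monetary_cost=85.0)
print(offer_a, offer_b, offer_a > offer_b)   # 30.0 15.0 True
```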
In marketing, a coupon is a ticket or document that can be redeemed for a financial discount or rebate when purchasing a product. Customarily, coupons are issued by manufacturers of consumer packaged goods or by retailers, to be used in retail stores as a part of sales promotions. They are often widely distributed through mail, coupon envelopes, magazines, newspapers, the Internet (social media, email newsletters), directly from the retailer, and mobile devices such as cell phones. The New York Times reported "more than 900 manufacturers' coupons were distributed" per household, and that "the United States Department of Agriculture estimates that four families in five use coupons".
"Only about 4 percent" of coupons received were redeemed. Coupons can be targeted selectively to regional markets in which price competition is great. Most coupons have an expiration date, although American military commissaries overseas honor manufacturers' coupons for up to six months past the expiration date.
In marketing, a customer value proposition (CVP) consists of the sum total of benefits which a vendor promises a customer will receive in return for the customer's associated payment (or other value-transfer). Customer Value Management was started by Ray Kordupleski in the 1980s and discussed in his book, Mastering Customer Value Management. A customer value proposition is a business or marketing statement that describes why a customer should buy a product or use a service. It is specifically targeted towards potential customers rather than other constituent groups such as employees, partners or suppliers. Similar to the unique selling proposition, it is a clearly defined statement that is designed to convince customers that one particular product or service will add more value or better solve a problem than others in its competitive set.
In marketing, a product is an object, system, or service made available for consumer use in response to consumer demand; it is anything that can be offered to a market to satisfy the desire or need of a customer. In retailing, products are often referred to as merchandise, and in manufacturing, products are bought as raw materials and then sold as finished goods. A service is also regarded as a type of product.
In project management, products are the formal definition of the project deliverables that make up or contribute to delivering the objectives of the project. A related concept is that of a sub-product, a secondary but useful result of a production process. Dangerous products, particularly physical ones, that cause injuries to consumers or bystanders may be subject to product liability.
In marketing, a publicity stunt is a planned event designed to attract the public's attention to the event's organizers or their cause. Publicity stunts can be professionally organized, or set up by amateurs. Such events are frequently utilized by both advertisers and celebrities, the majority of whom are notable athletes and politicians. Organizations sometimes seek publicity by staging newsworthy events that attract media coverage.
They can be in the form of groundbreakings, world record attempts, dedications, press conferences, or organized protests. By staging and managing these types of events, the organizations attempt to gain some form of control over what is reported in the media. Successful publicity stunts have news value, offer photo, video, and sound bite opportunities, and are arranged primarily for media coverage. It can be difficult for organizations to design successful publicity stunts that highlight the message instead of burying it. For example, it makes sense for a pizza company to bake the world's largest pizza, but it would not make sense for the YMCA to sponsor that same event. The purpose of publicity stunts is to generate news interest and awareness of the concept, product, or service being marketed.
In marketing, abandonment rate is a term associated with the use of virtual shopping carts, also known as "shopping cart abandonment". Although shoppers in brick and mortar stores rarely abandon their carts, abandonment of virtual shopping carts is quite common.
Marketers can count how many of the shopping carts used in a specified period result in completed sales versus how many are abandoned. The abandonment rate is the ratio of the number of abandoned shopping carts to the number of initiated transactions or to the number of completed transactions. Around 10 sources of information are used before making a decision when buying online (e.g. webshops, review websites, social networks, and the like). In this process the shopper compares at least 5 different websites for the product, and spends up to 20 hours researching. This means that shopping online is not as easy as some predicted 20 years ago. From both business and scientific perspectives, researchers and practitioners have investigated the problem of online shopping abandonment, trying to understand and address the causes of such low conversion rates. They mostly agree that the biggest problems for online cart abandonment were: lack of transparency, unclear transaction and delivery costs, lack of trust in the online seller, and poor website functioning or complicated processes.
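The ratio described above can be computed directly. The counts below are hypothetical, and since the text allows either initiated or completed transactions as the denominator, both variants are shown.

```python
# Abandonment rate: abandoned carts relative to initiated (or completed)
# transactions over a specified period.

def abandonment_rate(abandoned, initiated):
    """Abandoned carts as a share of initiated transactions."""
    return abandoned / initiated

initiated = 1000               # carts in which a transaction was started
completed = 300                # carts that resulted in a completed sale
abandoned = initiated - completed

print(abandonment_rate(abandoned, initiated))   # 0.7 per initiated transaction
print(abandoned / completed)                    # ratio per completed transaction
```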
In marketing, brand implementation refers to the physical representation and consistent application of brand identity across visual and verbal media. In visual terms, this can include signage, uniforms, liveries, interior design and branded merchandise. Brand implementation encompasses facets of architecture, product design, industrial design, quantity surveying, engineering, procurement, project management and retail design.
Brand implementation is an integrated part of a branding cycle and needs to be initiated during the brand design and development phase. Brand implementation is the continuous and consistent application of the brand's image in all business units, communication channels and media. This refers to marketing and branding as a unified whole. In that respect, brand implementation is a continuous process, which requires controlling the brand's image and presence despite changes in markets and company structure.
In marketing, brand loyalty describes a consumer's positive feelings towards a brand and their dedication to purchasing the brand's products and/or services repeatedly regardless of deficiencies, a competitor's actions, or changes in the environment. It can also be demonstrated with other behaviors such as positive word-of-mouth advocacy. Corporate brand loyalty is where an individual buys products from the same manufacturer repeatedly and without wavering, rather than from other suppliers. Loyalty implies dedication and should not be confused with habit, with its less-than-emotional engagement and commitment. Businesses whose financial and ethical values (for example, ESG responsibilities) rest in large part on their brand loyalty are said to use the loyalty business model.
In marketing, brand management begins with an analysis on how a brand is currently perceived in the market, proceeds to planning how the brand should be perceived if it is to achieve its objectives and continues with ensuring that the brand is perceived as planned and secures its objectives. Developing a good relationship with target markets is essential for brand management. Tangible elements of brand management include the product itself; its look, price, and packaging, etc. The intangible elements are the experiences that the target markets share with the brand, and also the relationships they have with the brand. A brand manager would oversee all aspects of the consumer's brand association as well as relationships with members of the supply chain.
In marketing, branded content (also known as branded entertainment) is content produced by an advertiser or content whose creation was funded by an advertiser. In contrast to content marketing (in which content is presented first and foremost as a marketing ploy for a brand) and product placement (where advertisers pay to have references to their brands incorporated into outside creative works, such as films and television series), branded content is designed to build awareness for a brand by associating it with content that shares its values. The content does not necessarily need to be a promotion for the brand, although it may still include product placement.
Unlike conventional forms of editorial content, branded content is generally funded entirely by a brand or corporation rather than a studio or a group of solely artistic producers. Examples of branded content have appeared in television, film, online content, video games, events, and other installations. Modern branded marketing strategies are intended primarily to counter market trends, such as the decreasing acceptance of traditional commercials or low-quality advertorials.
In marketing, carrying cost, carrying cost of inventory or holding cost refers to the total cost of holding inventory. This includes warehousing costs such as rent, utilities and salaries, financial costs such as opportunity cost, and inventory costs related to perishability, shrinkage (leakage) and insurance. Carrying cost also includes the opportunity cost of reduced responsiveness to customers' changing requirements, slowed introduction of improved items, and the inventory's value and direct expenses, since that money could be used for other purposes. When there are no transaction costs for shipment, carrying costs are minimized when no excess inventory is held at all, as in a just-in-time production system. Excess inventory can be held for one of three reasons.
Cycle stock is held based on the re-order point, and defines the inventory that must be held for production, sale or consumption during the time between re-order and delivery. Safety stock is held to account for variability, either upstream in supplier lead time, or downstream in customer demand. Physical stock is held by consumer retailers to provide consumers with a perception of plenty. Carrying costs typically range between 20 and 30% of a company's inventory value.
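Using the 20 to 30% range cited above, annual carrying cost can be sketched as a simple fraction of inventory value. The inventory value and the rates swept here are illustrative assumptions.

```python
# Annual carrying cost as a fraction of average inventory value,
# swept across the 20-30% range typically cited.

def annual_carrying_cost(inventory_value, carrying_rate):
    """Total yearly cost of holding inventory of the given value."""
    return inventory_value * carrying_rate

inventory_value = 2_000_000.0    # average value of stock on hand (hypothetical)
for rate in (0.20, 0.25, 0.30):  # low, mid, and high end of the cited range
    print(f"{rate:.0%} -> {annual_carrying_cost(inventory_value, rate):,.0f}")
```

The same function makes the just-in-time observation concrete: with no excess inventory held, the carried value and hence the carrying cost drop toward zero.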
In marketing, contact center telephony is the communication and collaboration system used by businesses to manage high volumes of inbound queries or outbound telephone calls, keeping their workforce of agents productive and in control in order to serve or acquire customers. This business communication system is an extension of computer telephony integration (CTI).
In marketing, customer lifetime value (CLV or often CLTV), lifetime customer value (LCV), or life-time value (LTV) is a prognostication of the net profit contributed to the whole future relationship with a customer. The prediction model can have varying levels of sophistication and accuracy, ranging from a crude heuristic to the use of complex predictive analytics techniques. Customer lifetime value can also be defined as the monetary value of a customer relationship, based on the present value of the projected future cash flows from the customer relationship.
Customer lifetime value is an important concept in that it encourages firms to shift their focus from quarterly profits to the long-term health of their customer relationships. Customer lifetime value is an important metric because it represents an upper limit on spending to acquire new customers. For this reason it is an important element in calculating payback of advertising spent in marketing mix modeling. One of the first accounts of the term customer lifetime value is in the 1988 book Database Marketing, which includes detailed worked examples. Early adopters of customer lifetime value models in the 1990s include Edge Consulting and BrandScience.
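A common simple form of the present-value definition above discounts an expected annual margin that decays with customer retention. This retention-based model and all figures below are illustrative assumptions, not a prescribed formula from the text.

```python
# CLV as the present value of projected future cash flows: each year's
# margin is weighted by the probability the customer is still retained,
# then discounted back to today.

def customer_lifetime_value(margin, retention, discount, years):
    """Sum of expected annual margins, decayed by retention and discounted."""
    return sum(
        margin * retention**t / (1 + discount) ** t
        for t in range(1, years + 1)
    )

# $100 annual margin, 80% retention, 10% discount rate, 5-year horizon:
print(round(customer_lifetime_value(100.0, 0.80, 0.10, 5), 2))   # 212.41
```

Read as an acquisition ceiling, this says spending more than about $212 to acquire such a customer would not pay back, which is the "upper limit on spending" point above.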
In marketing, geodemographic segmentation is a multivariate statistical classification technique for discovering whether the individuals of a population fall into different groups by making quantitative comparisons of multiple characteristics with the assumption that the differences within any group should be less than the differences between groups.
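The core assumption, that differences within any group should be less than the differences between groups, can be checked quantitatively. The two toy groups of a single measured characteristic below are illustrative assumptions.

```python
# Within-group vs. between-group variation for a proposed segmentation,
# using sums of squared deviations on a single characteristic.

def mean(xs):
    return sum(xs) / len(xs)

def within_group_ss(groups):
    """Sum of squared deviations of members from their own group mean."""
    return sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)

def between_group_ss(groups):
    """Size-weighted squared deviations of group means from the grand mean."""
    grand = mean([x for g in groups for x in g])
    return sum(len(g) * (mean(g) - grand) ** 2 for g in groups)

groups = [[1.0, 1.2, 0.9], [5.1, 4.8, 5.3]]   # two well-separated segments
print(within_group_ss(groups) < between_group_ss(groups))   # True
```

A real geodemographic classification would compare many characteristics at once (hence "multivariate"), but the acceptance criterion is the same: small within-group spread relative to between-group spread.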
In marketing, geomarketing (also called marketing geography) is a discipline that uses geolocation (geographic information) in the process of planning and implementation of marketing activities. It can be used in any aspect of the marketing mix — the product, price, promotion, or place (geo targeting). Market segments can also correlate with location, and this can be useful in targeted marketing.
Geomarketing is applied in the financial sector by identifying ATM traffic generators and creating hotspot maps based on geographical parameters integrated with customer behavior. Geomarketing has a direct impact on the development of modern trade and the reorganization of retail types. Site selection becomes automated and based on scientific procedures that save both time and money. Geomarketing uses key facts, a good base map, Whois data layers, consumer profiling, and success/fail criteria. GPS tracking and GSM localization can be used to obtain the actual position of the travelling customer.
In marketing, ingredient branding or ingredient marketing refers to a process in which a company markets an established ingredient or component used in its own products. The overall marketing strategy seeks to signal a high-quality product based on the perception of the ingredient. From the ingredient company's perspective, they are not required "to convince consumers that their product is valuable, their customers do it for them".
In marketing, lead generation is the initiation of consumer interest or enquiry into the products or services of a business. A lead is the contact information and, in some cases, demographic information of a customer who is interested in a specific product or service. Leads may come from various sources or activities, for example, digitally via the Internet, through personal referrals, through telephone calls either by the company or telemarketers, through advertisements, and through events. In 2014, a study found that direct traffic, search engines, and web referrals were the three most popular online channels for lead generation, accounting for 93% of leads.
In 2018, Chief Marketer found that B2B marketers favored email, live events, and content marketing as their top three. After the COVID-19 pandemic in 2020, Gartner identified increases in social and search engine optimization for B2B marketers, while B2C marketers favored digital advertising. Lead generation is often paired with lead management to move leads through the purchase funnel. This combination of activities is referred to as pipeline marketing, which is often broken into a marketing and a sales pipeline.
In marketing, manufacturing, call centre operations, and management, mass customization makes use of flexible computer-aided systems to produce custom output. Such systems combine the low unit costs of mass production processes with the flexibility of individual customization. Mass customization is the new frontier in business for both manufacturing and service industries. At its core is a tremendous increase in variety and customization without a corresponding increase in costs. At its limit, it is the mass production of individually customized goods and services.
At its best, it provides strategic advantage and economic value. It is one of the product design strategies and is currently used with both techniques (delay differentiation and modular design) together with an effective innovative climate to enhance the value delivered to customers. Mass customization is the method of "effectively postponing the task of differentiating a product for a specific customer until the latest possible point in the supply network". Kamis, Koufaris and Stern (2008) conducted experiments to test the impacts of mass customization when postponed to the retail stage in online shopping.
They found that users perceive greater usefulness and enjoyment with a mass customization interface vs. a more typical shopping interface, particularly in a task of moderate complexity. From a collaborative engineering perspective, mass customization can be viewed as collaborative efforts between customers and manufacturers, who have different sets of priorities and need to jointly search for solutions that best match customers' individual specific needs with manufacturers' customization capabilities. The concept of mass customization is attributed to Stan Davis in Future Perfect, and was defined by Tseng & Jiao (2001, p. 685) as "producing goods and services to meet individual customers' needs with near mass production efficiency". Kaplan & Haenlein (2006) concurred, calling it "a strategy that creates value by some form of company-customer interaction at the fabrication and assembly stage of the operations level to create customized products with production cost and monetary price similar to those of mass-produced products". Similarly, McCarthy (2004, p. 348) highlights that mass customization involves balancing operational drivers by defining it as "the capability to manufacture a relatively high volume of product options for a relatively large market (or collection of niche markets) that demands customization, without tradeoffs in cost, delivery and quality".
In marketing, market segmentation is the process of dividing a broad consumer or business market, normally consisting of existing and potential customers, into sub-groups of consumers (known as segments) based on shared characteristics. In dividing or segmenting markets, researchers typically look for common characteristics such as shared needs, common interests, similar lifestyles, or even similar demographic profiles. The overall aim of segmentation is to identify high yield segments – that is, those segments that are likely to be the most profitable or that have growth potential – so that these can be selected for special attention (i.e. become target markets). Many different ways to segment a market have been identified.
Business-to-business (B2B) sellers might segment the market into different types of businesses or countries, while business-to-consumer (B2C) sellers might segment the market into demographic segments, such as lifestyle, behavior, or socioeconomic status. Market segmentation assumes that different market segments require different marketing programs – that is, different offers, prices, promotions, distribution, or some combination of marketing variables.
Market segmentation is not only designed to identify the most profitable segments, but also to develop profiles of key segments in order to better understand their needs and purchase motivations. Insights from segmentation analysis are subsequently used to support marketing strategy development and planning. Many marketers use the S-T-P approach; Segmentation → Targeting → Positioning to provide the framework for marketing planning objectives. That is, a market is segmented, one or more segments are selected for targeting, and products or services are positioned in a way that resonates with the selected target market or markets.
In marketing, multivariate testing or multi-variable testing techniques apply statistical hypothesis testing on multi-variable systems, typically consumers on websites. Techniques of multivariate statistics are used. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
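As a hedged sketch of what such a test can look like in practice, the following pure-Python example runs a chi-squared test of homogeneity over a hypothetical 2 × 2 full-factorial test of two page elements (headline variant and button colour). All traffic figures, variant names, and the choice of test statistic are invented for illustration.

```python
# A hypothetical full-factorial multivariate test on a website:
# two headline variants x two button colours, with (visitors, conversions)
# observed per combination. All numbers are invented.

observed = {
    ("A", "red"):   (1000, 100),
    ("A", "green"): (1000, 120),
    ("B", "red"):   (1000, 90),
    ("B", "green"): (1000, 150),
}

def chi_square(cells):
    """Chi-squared statistic for 'conversion rate is equal in every cell'."""
    total_n = sum(n for n, _ in cells.values())
    total_c = sum(c for _, c in cells.values())
    p = total_c / total_n                       # pooled conversion rate
    stat = 0.0
    for n, c in cells.values():
        exp_conv = n * p                        # expected conversions
        exp_fail = n * (1 - p)                  # expected non-conversions
        stat += (c - exp_conv) ** 2 / exp_conv
        stat += ((n - c) - exp_fail) ** 2 / exp_fail
    return stat

stat = chi_square(observed)                     # compare against chi2, 3 d.o.f.
best = max(observed, key=lambda k: observed[k][1] / observed[k][0])
print(f"chi-squared = {stat:.1f}, best combination = {best}")
```

With these invented counts the statistic comfortably exceeds the 5% critical value for 3 degrees of freedom (about 7.8), so the cells would not be treated as having a common conversion rate.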
In marketing, product bundling is offering several products or services for sale as one combined product or service package. It is a common feature in many imperfectly competitive product and service markets. Industries engaged in the practice include telecommunications services, financial services, health care, information, and consumer electronics.
A software bundle might include a word processor, spreadsheet, and presentation program into a single office suite. The cable television industry often bundles many TV and movie channels into a single tier or package. The fast food industry combines separate food items into a "meal deal" or "value meal".
A bundle of products may be called a package deal; in recorded music or video games, a compilation or box set; or in publishing, an anthology. Most firms are multi-product or multi-service companies faced with the decision whether to sell products or services separately at individual prices or whether combinations of products should be marketed in the form of "bundles" for which a "bundle price" is asked. Price bundling plays an increasingly important role in many industries (e.g. banking, insurance, software, automotive) and some companies even build their business strategies on bundling. In bundle pricing, companies sell a package or set of goods or services for a lower price than they would charge if the customer bought all of them separately. Pursuing a bundle pricing strategy allows a business to increase its profit by using a discount to induce customers to buy more than they otherwise would have. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
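The logic of bundle pricing can be sketched with a toy model in the style of the classic Adams-Yellen analysis of commodity bundling. The customer reservation prices below are invented for illustration; the point is only that a single bundle price can extract more revenue than any pair of separate prices when valuations are negatively correlated.

```python
# Toy illustration of bundle pricing vs. separate pricing.
# Customer reservation prices (willingness to pay) are invented.

customers = [
    {"word": 90, "sheet": 10},   # values the word processor highly
    {"word": 50, "sheet": 50},   # values both moderately
    {"word": 10, "sheet": 90},   # values the spreadsheet highly
]

def revenue_separate(p_word, p_sheet):
    # A customer buys an item iff its price is at most their reservation price.
    return sum(p_word * (c["word"] >= p_word) +
               p_sheet * (c["sheet"] >= p_sheet) for c in customers)

def revenue_bundle(p_bundle):
    # A customer buys the bundle iff total willingness to pay covers its price.
    return sum(p_bundle * (c["word"] + c["sheet"] >= p_bundle)
               for c in customers)

print(revenue_separate(50, 50))   # 200: the best separate pricing here
print(revenue_bundle(100))        # 300: every customer buys the bundle
```

Checking every candidate pair of separate prices drawn from the reservation values confirms that 200 is the separate-pricing maximum for this example, while the bundle priced at 100 sells to all three customers.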
In marketing, promotion refers to any type of marketing communication used to inform target audiences of the relative merits of a product, service, brand or issue, and is most of the time persuasive in nature. It helps marketers create a distinctive place in customers' minds via either a cognitive or an emotional route. The aim of promotion is to increase brand awareness, create interest, generate sales or create brand loyalty. It is one of the basic elements of the marketing mix, which includes the four Ps, i.e., product, price, place, and promotion. Promotion is also one of the elements in the promotional mix or promotional plan.
These are personal selling, advertising, sales promotion, direct marketing, publicity, word of mouth and may also include event marketing, exhibitions and trade shows. A promotional plan specifies how much attention to pay to each of the elements in the promotional mix, and what proportion of the budget should be allocated to each element. Promotion covers the methods of communication that a marketer uses to provide information about its product. Information can be both verbal and visual.
In marketing, the decoy effect (or attraction effect or asymmetric dominance effect) is the phenomenon whereby consumers will tend to have a specific change in preference between two options when also presented with a third option that is asymmetrically dominated. An option is asymmetrically dominated when it is inferior in all respects to one option; but, in comparison to the other option, it is inferior in some respects and superior in others. In other words, in terms of specific attributes determining preferences, it is completely dominated by (i.e., inferior to) one option and only partially dominated by the other. When the asymmetrically dominated option is present, a higher percentage of consumers will prefer the dominating option than when the asymmetrically dominated option is absent. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
The asymmetrically dominated option is therefore a decoy serving to increase preference for the dominating option. The decoy effect is also an example of the violation of the independence of irrelevant alternatives axiom of decision theory.
More simply, when deciding between two options, an unattractive third option can change the perceived preference between the other two. The decoy effect is considered particularly important in choice theory because it is a violation of the assumption of "regularity" present in all axiomatic choice models, for example in a Luce model of choice. Regularity means that it should not be possible for the market share of any alternative to increase when another alternative is added to the choice set. The new alternative should reduce, or at best leave unchanged, the choice share of existing alternatives. Regularity is violated when, for example, a new alternative C not only changes the relative shares of A and B but actually increases the share of A in absolute terms. Similarly, the introduction of a new alternative D can increase the share of B in absolute terms.
In marketing, the promotional mix describes a blend of promotional variables chosen by marketers to help a firm reach its goals. It has been identified as a subset of the marketing mix. It is believed that there is an optimal way of allocating budgets for the different elements within the promotional mix to achieve best marketing results, and the challenge for marketers is to find the right mix of them. Activities identified as elements of the promotional mix vary, but typically include the following: Advertising is the paid presentation and promotion of ideas, goods, or services by an identified sponsor in a mass medium. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Examples include print ads, radio, television, billboard, direct mail, brochures and catalogs, signs, in-store displays, posters, mobile apps, motion pictures, web pages, banner ads, emails. Personal selling is the process of helping and persuading one or more prospects to purchase a good or service or to act on any idea through the use of an oral presentation, often in a face-to-face manner or by telephone. Examples include sales presentations, sales meetings, sales training and incentive programs for intermediary salespeople, samples, and telemarketing.
Sales Promotion is media and non-media marketing communication used for a pre-determined limited time to increase consumer demand, stimulate market demand or improve product availability. Examples include coupons, sweepstakes, contests, product samples, rebates, tie-ins, self-liquidating premiums, trade shows, trade-ins, and exhibitions. Corporate giveaway items, sometimes called swag, can be included within product samples and distributed to participants at an event for promotional purposes.
Public relations or publicity is information about a firm's products and services carried by a third party in an indirect way. This includes free publicity as well as paid efforts to stimulate discussion and interest. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
It can be accomplished by planting a significant news story indirectly in the media, or presenting it favorably through press releases or corporate anniversary parties. Examples include newspaper and magazine articles, TV and radio presentations, charitable contributions, speeches, issue advertising, and seminars. Word of mouth is also a type of publicity, one which has shifted from person-to-person storytelling to promotion by social media influencers and bloggers today.
Direct Marketing is a channel-agnostic form of advertising that allows businesses and nonprofits to communicate directly to the customer, with methods such as mobile messaging, email, interactive consumer websites, online display ads, fliers, catalog distribution, promotional letters, and outdoor advertising. Corporate image campaigns have been considered as part of the promotional mix. Sponsorship of an event, contest or race is a way to generate publicity.
Guerrilla marketing tactics are unconventional ways to bring attention to an idea, product or service, such as by using graffiti, sticker bombing, posting flyers, using flash mobs, doing viral marketing campaigns, or other methods using the Internet in unexpected ways. Product placement is paying a movie studio or television show to include a product or service prominently in the movie or show. Digital marketing is the marketing of products or services using digital technologies, mainly on the Internet, but also including mobile phones, display advertising, and any other digital medium. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
In marketing, the unique selling proposition (USP), also called the unique selling point, or the unique value proposition (UVP) in the business model canvas, is the marketing strategy of informing customers about how one's own brand or product is superior to its competitors (in addition to its other values). It was used in successful advertising campaigns of the early 1940s. The term was coined by television advertising pioneer Rosser Reeves of Ted Bates & Company. Theodore Levitt, a professor at Harvard Business School, suggested that, "Differentiation is one of the most important strategic and tactical activities in which companies must constantly engage." The term has been extended to cover one's "personal brand".
In markup languages and the digital humanities, overlap occurs when a document has two or more structures that interact in a non-hierarchical manner. A document with overlapping markup cannot be represented as a tree. This is also known as concurrent markup. Overlap happens, for instance, in poetry, where there may be a metrical structure of feet and lines; a linguistic structure of sentences and quotations; and a physical structure of volumes and pages and editorial annotations.
In martial arts, a knifehand strike is a strike using the part of the hand opposite the thumb (from the little finger to the wrist), familiar to many people as a karate chop (in Japanese, shutō-uchi). This refers to strikes performed with the side of the knuckle of the small finger. Suitable targets for the knifehand strike include the carotid sinus at the base of the neck (which can cause unconsciousness), mastoid muscles of the neck, the jugular, the throat, the collar bones, ribs, sides of the head, temple, jaw, the third vertebra (key stone of the spinal column), the upper arm, the wrist (knifehand block), the elbow (outside knifehand block), and the knee cap (leg throw).In many Japanese, Korean, and Chinese styles, the knifehand is used to block as well as to strike. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
In martial arts, the terms hard and soft technique denote how forcefully a defender martial artist counters the force of an attack in armed and unarmed combat. In the East Asian martial arts, the corresponding hard technique and soft technique terms are 硬 (Japanese: gō, pinyin: yìng) and 柔 (Japanese: jū, pinyin: róu), hence Goju-ryu (hard-soft school), Shorinji Kempo principles of go-ho ("hard method") and ju-ho ("soft method"), Jujutsu ("art of softness") and Judo ("gentle way"). Regardless of origins and styles, "hard and soft" can be seen as simply firm/unyielding in opposition or complementary to pliant/yielding; each has its application and must be used in its own way, and each makes use of specific principles of timing and biomechanics. In addition to describing a physical technique applied with minimal force, "soft" also sometimes refers to elements of a discipline which are viewed as less purely physical; for example, martial arts that are said to be "internal styles" are sometimes also known as "soft styles", for their focus on mental techniques or spiritual pursuits.
In martingale theory, the Émery topology is a topology on the space of semimartingales. The topology is used in financial mathematics. The class of stochastic integrals with general predictable integrands coincides with the closure of the set of all simple integrals. The topology was introduced in 1979 by the French mathematician Michel Émery.
In masonry veneer building construction, a shelf angle or masonry support is a steel angle which supports the weight of brick or stone veneer and transfers that weight onto the main structure of the building so that a gap or space can be created beneath to allow building movements to occur. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
In masonry, mortar joints are the spaces between bricks, concrete blocks, or glass blocks, that are filled with mortar or grout. If the surface of the masonry remains unplastered, the joints contribute significantly to the appearance of the masonry. Mortar joints can be made in a series of different fashions, but the most common ones are raked, grapevine, extruded, concave, V, struck, flush, weathered and beaded. In order to produce a mortar joint, the mason must use one of several types of jointers (slickers), rakes, or beaders. These tools are run through the grout in between the building material before the grout is solid and create the desired outcome the mason seeks.
In mass communication, digital media is any communication media that operate in conjunction with various encoded machine-readable data formats. Digital content can be created, viewed, distributed, modified, listened to, and preserved on a digital electronics device, including digital data storage media (in contrast to analog electronic media) and digital broadcasting. Digital refers to any data represented by a series of digits, and media refers to methods of broadcasting or communicating this information. Together, digital media refers to mediums of digitized information broadcast through a screen and/or a speaker. This also includes text, audio, video, and graphics that are transmitted over the internet for viewing or listening. Digital media platforms, such as YouTube, Vimeo, and Twitch, accounted for viewership rates of 27.9 billion hours in 2020. A contributing factor to its part in what is commonly referred to as the digital revolution can be attributed to the use of interconnectivity.
In mass communication, media are the communication outlets or tools used to store and deliver information or data. The term refers to components of the mass media communications industry, such as print media, publishing, the news media, photography, cinema, broadcasting (radio and television), digital media, and advertising. The development of early writing and paper enabling longer-distance communication systems such as mail, including in the Persian Empire (Chapar Khaneh and Angarium) and Roman Empire, can be interpreted as early forms of media. Writers such as Howard Rheingold have framed early forms of human communication, such as the Lascaux cave paintings and early writing, as early forms of media. Another framing of the history of media starts with the Chauvet Cave paintings and continues with other ways to carry human communication beyond the short range of voice: smoke signals, trail markers, and sculpture. The term media in its modern application relating to communication channels was first used by Canadian communications theorist Marshall McLuhan, who stated in Counterblast (1954): "The media are not toys; they should not be in the hands of Mother Goose and Peter Pan executives.
They can be entrusted only to new artists because they are art forms." By the mid-1960s, the term had spread to general use in North America and the United Kingdom. The phrase mass media was, according to H.L. Mencken, used as early as 1923 in the United States. The term medium (the singular form of media) is defined as "one of the means or channels of general communication, information, or entertainment in society, as newspapers, radio, or television."
In mass spectrometry, Orbitrap is an ion trap mass analyzer consisting of an outer barrel-like electrode and a coaxial inner spindle-like electrode that traps ions in an orbital motion around the spindle. The image current from the trapped ions is detected and converted to a mass spectrum using the Fourier transform of the frequency signal. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
In mass spectrometry, a matrix is a compound that promotes the formation of ions. Matrix compounds are used in matrix-assisted laser desorption/ionization (MALDI), matrix-assisted ionization (MAI), and fast atom bombardment (FAB).
In mass spectrometry, data-independent acquisition (DIA) is a method of molecular structure determination in which all ions within a selected m/z range are fragmented and analyzed in a second stage of tandem mass spectrometry. Tandem mass spectra are acquired either by fragmenting all ions that enter the mass spectrometer at a given time (called broadband DIA) or by sequentially isolating and fragmenting ranges of m/z. DIA is an alternative to data-dependent acquisition (DDA) where a fixed number of precursor ions are selected and analyzed by tandem mass spectrometry.
In mass spectrometry, de novo peptide sequencing is the method in which a peptide's amino acid sequence is determined from tandem mass spectrometry. Knowing the amino acid sequence of peptides from a protein digest is essential for studying the biological function of the protein. Historically, this was accomplished by the Edman degradation procedure. Today, analysis by a tandem mass spectrometer is the more common method for sequencing peptides.
Generally, there are two approaches: database search and de novo sequencing. Database search is the simpler approach: the mass spectrum of the unknown peptide is searched against a database of known peptide sequences, and the peptide with the highest matching score is selected.
This approach fails to recognize novel peptides, since it can only match existing sequences in the database. De novo sequencing is the assignment of fragment ions from a mass spectrum. Different algorithms are used for interpretation, and most instruments come with de novo sequencing programs.
In mass spectrometry, direct analysis in real time (DART) is an ion source that produces electronically or vibronically excited-state species from gases such as helium, argon, or nitrogen that ionize atmospheric molecules or dopant molecules. The ions generated from atmospheric or dopant molecules undergo ion-molecule reactions with the sample molecules to produce analyte ions. Analytes with low ionization energy may be ionized directly. The DART ionization process can produce positive or negative ions depending on the potential applied to the exit electrode. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
This ionization can occur for species desorbed directly from surfaces such as bank notes, tablets, bodily fluids (blood, saliva and urine), polymers, glass, plant leaves, fruits & vegetables, clothing, and living organisms. DART is applied for rapid analysis of a wide variety of samples at atmospheric pressure and in the open laboratory environment. It does not require specific sample preparation, so it can be used for the analysis of solid, liquid and gaseous samples in their native state. With the aid of DART, exact mass measurements can be done rapidly with high-resolution mass spectrometers. DART mass spectrometry has been used in pharmaceutical applications, forensic studies, quality control, and environmental studies.
In mass spectrometry, fragmentation is the dissociation of energetically unstable molecular ions formed as molecules pass through the ion source of a mass spectrometer. These reactions have been well documented over the decades, and fragmentation patterns are useful for determining the molecular weight and structural information of unknown molecules. Fragmentation that occurs in tandem mass spectrometry experiments has been a recent focus of research, because this data helps facilitate the identification of molecules.
In mass spectrometry, a liquid junction interface is an ion source or set-up that couples peripheral devices, such as capillary electrophoresis, to mass spectrometry. The IUPAC recommendation defines it as a means of coupling capillary electrophoresis to mass spectrometry in which a liquid reservoir surrounds the separation capillary and the transfer capillary to the mass spectrometer. The reservoir provides electrical contact for the capillary electrophoresis. The term liquid junction interface has also been used by Henry M. Fales and coworkers for ion sources where the analyte is in direct contact with the high voltage supply.
This includes in particular nanospray ion sources where a wire made of stainless steel, gold or other conducting material makes contact with the sample solution inside uncoated spray capillaries. The principle is also applied when a stainless steel union connects a chromatography outlet to a spray capillary. Its use has a number of advantages with respect to simplification of interface or source design, easy handling and cost.
Electrolysis effects have to be controlled. Liquid junction interfaces have been used for on-line desalting in conjunction with mass spectrometry. Thereby, chromatographic material such as a C18 phase was placed directly in the flow path coming from a pump or an HPLC device. In a variation of the method, fine capillaries were densely packed with chromatographic phase to form separation columns and act as electrospray capillaries at the same time. This method is commonly employed in many proteomics laboratories. Notably, the direct application of high voltage to liquids to form aerosols and sprays was described as early as 1917, in the context not of ionization but of atomization of liquids.
In mass spectrometry, matrix-assisted ionization (also inlet ionization) is a low fragmentation (soft) ionization technique which involves the transfer of particles of the analyte and matrix sample from atmospheric pressure (AP) to the heated inlet tube connecting the AP region to the vacuum of the mass analyzer. Initial ionization occurs as the pressure drops within the inlet tube. Inlet ionization is similar to electrospray ionization in that a reverse phase solvent system is used and the ions produced are highly charged; however, a voltage or a laser is not always needed. It is a highly sensitive process for small and large molecules like peptides, proteins and lipids that can be coupled to a liquid chromatograph. Inlet ionization techniques can be used with an Orbitrap mass analyzer, Orbitrap Fourier transform mass spectrometer, linear trap quadrupole, and MALDI-TOF.
In mass spectrometry, matrix-assisted laser desorption/ionization (MALDI) is an ionization technique that uses a laser energy-absorbing matrix to create ions from large molecules with minimal fragmentation. It has been applied to the analysis of biomolecules (biopolymers such as DNA, proteins, peptides and carbohydrates) and various organic molecules (such as polymers, dendrimers and other macromolecules), which tend to be fragile and fragment when ionized by more conventional ionization methods. It is similar in character to electrospray ionization (ESI) in that both techniques are relatively soft (low fragmentation) ways of obtaining ions of large molecules in the gas phase, though MALDI typically produces far fewer multi-charged ions. MALDI methodology is a three-step process.
First, the sample is mixed with a suitable matrix material and applied to a metal plate. Second, a pulsed laser irradiates the sample, triggering ablation and desorption of the sample and matrix material. Finally, the analyte molecules are ionized by being protonated or deprotonated in the hot plume of ablated gases, and then they can be accelerated into whichever mass spectrometer is used to analyse them.
In mass spectrometry, resolution is a measure of the ability to distinguish two peaks whose mass-to-charge ratios differ by a small amount ΔM in a mass spectrum.
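A common figure of merit follows directly from this definition (conventions differ on how ΔM is measured, e.g. the peak full width at half maximum or a percent-valley criterion, so the convention should be stated alongside the number):

```latex
R = \frac{M}{\Delta M}
```

Here M is the mass (or m/z) at which the peak appears and ΔM is the smallest resolvable difference under the chosen convention; for example, a peak at M = 500 with a FWHM of 0.01 corresponds to R = 50,000.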
In mass spectrometry, the quadrupole mass analyzer (or quadrupole mass filter) is a type of mass analyzer originally conceived by Nobel laureate Wolfgang Paul and his student Helmut Steinwedel. As the name implies, it consists of four cylindrical rods, set parallel to each other. In a quadrupole mass spectrometer (QMS) the quadrupole is the mass analyzer - the component of the instrument responsible for selecting sample ions based on their mass-to-charge ratio (m/z). Ions are separated in a quadrupole based on the stability of their trajectories in the oscillating electric fields that are applied to the rods.
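In one common parameterization (a DC potential U plus an RF potential V cos(Ωt) applied across rods of inscribed radius r₀), the stability of an ion of mass m and charge e is governed by the dimensionless Mathieu parameters:

```latex
a = \frac{8\,e\,U}{m\,r_0^{2}\,\Omega^{2}}, \qquad q = \frac{4\,e\,V}{m\,r_0^{2}\,\Omega^{2}}
```

Only ions whose (a, q) values fall inside the first stability region (q below about 0.908 on the a = 0 axis) traverse the filter; sweeping U and V at a fixed ratio scans the transmitted m/z. Note that sign conventions and factor placement vary between textbooks.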
In mass transfer, the sieving coefficient is a measure of equilibration between the concentrations of two mass transfer streams. It is defined as the mean pre- and post-contact concentration of the mass receiving stream divided by the mean pre- and post-contact concentration of the mass donating stream:

S = C_r / C_d

where S is the sieving coefficient, C_r is the mean concentration of the mass receiving stream, and C_d is the mean concentration of the mass donating stream. A sieving coefficient of unity implies that the concentrations of the receiving and donating streams equilibrate, i.e. the out-flow concentrations (post-mass transfer) of the mass donating and receiving streams are equal to one another.
Systems with sieving coefficients greater than one require an external energy source, as they would otherwise violate the laws of thermodynamics. Sieving coefficients less than one represent a mass transfer process where the concentrations have not equilibrated. Contact time between the mass streams is an important consideration in mass transfer and affects the sieving coefficient.
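The definition above reduces to a one-line ratio of stream means. The sketch below uses invented concentration values in arbitrary units (think of a dialysis-like exchanger):

```python
# Minimal numeric sketch of the sieving coefficient S = C_r / C_d.
# Concentration values are invented, in arbitrary units.

def sieving_coefficient(c_recv_in, c_recv_out, c_don_in, c_don_out):
    c_r = (c_recv_in + c_recv_out) / 2   # mean concentration, receiving stream
    c_d = (c_don_in + c_don_out) / 2     # mean concentration, donating stream
    return c_r / c_d

# Receiving stream rises from 0 to 60; donating stream falls from 100 to 60.
s = sieving_coefficient(0, 60, 100, 60)
print(f"S = {s:.3f}")   # 30 / 80 = 0.375, i.e. far from full equilibration
```

A value of 1.0 would require the two stream means to coincide, matching the equilibration condition described above.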
In massively multiplayer online games, an instance is a special area, typically a dungeon, that generates a new copy of the location for each group, or for a certain number of players, that enters the area. Instancing, the general term for the use of this technique, addresses several problems encountered by players in the shared spaces of virtual worlds. It is not widely known when instances were first used in this genre. However, The Realm Online (1996) is sometimes credited as introducing the concept.
In master locksmithing, key relevance is the measurable difference between an original key and a copy made of that key, either from a wax impression or directly from the original, and how similar the two keys are in size and shape. It can also refer to the measurable difference between a key and the size required to fit and operate the keyway of its paired lock. No two copies of keys are exactly the same, even if they were both made from key blanks that are struck from the same mould or cut from the same duplicating/milling machine with no changes to the bitting settings in between. Even under these favorable circumstances, there will be minute differences between the two key shapes, though their key relevance is extremely high.
In all machining work, there are measurable amounts of difference between the design specification of an object, and its actual manufactured size. In locksmithing, the allowable tolerance is decided by the range of minute differences between a key's size and shape in comparison to the size and shape required to turn the tumblers within the lock. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Key relevance is the measure of similarity between the key and the optimal size needed to fit the lock, or it is the similarity between a duplicate key and the original it is seeking to replicate. Key relevance cannot be deduced from a key code, since the key code merely refers to a central authoritative source for designed shapes and sizes of keys. Typical modern keys require a key relevance of approximately 0.03 millimetres (0.0012 in) to 0.07 millimetres (0.0028 in) (accuracy within 0.75% to 1.75%) in order to operate.
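One way to picture key relevance numerically is as a worst-case dimensional deviation between corresponding cut depths, checked against an operating tolerance. The cut depths and the 0.07 mm threshold below are illustrative assumptions, not manufacturer specifications:

```python
# Hypothetical sketch: key relevance as the worst-case deviation between
# the cut depths of an original key and its duplicate. All values invented.

original_cuts = [4.50, 3.20, 5.10, 2.80, 4.00]   # mm, original key
duplicate_cuts = [4.52, 3.17, 5.11, 2.84, 3.98]  # mm, duplicate key

def worst_deviation(a, b):
    """Largest absolute difference between corresponding cut depths."""
    return max(abs(x - y) for x, y in zip(a, b))

dev = worst_deviation(original_cuts, duplicate_cuts)
verdict = "should operate" if dev <= 0.07 else "may not operate"
print(f"worst deviation {dev:.2f} mm: duplicate {verdict} a typical lock")
```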
In material science and solid mechanics, orthotropic materials have material properties at a particular point which differ along three orthogonal axes, where each axis has twofold rotational symmetry. These directional differences in strength can be quantified with Hankinson's equation. They are a subset of anisotropic materials, because their properties change when measured from different directions.
A familiar example of an orthotropic material is wood. In wood, one can define three mutually perpendicular directions at each point in which the properties are different. It is most stiff (and strong) along the grain (axial direction), because most cellulose fibrils are aligned that way. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
It is usually least stiff in the radial direction (between the growth rings), and is intermediate in the circumferential direction. This anisotropy was provided by evolution, as it best enables the tree to remain upright.
Because the preferred coordinate system is cylindrical-polar, this type of orthotropy is also called polar orthotropy. Another example of an orthotropic material is sheet metal formed by squeezing thick sections of metal between heavy rollers.
This flattens and stretches its grain structure. As a result, the material becomes anisotropic — its properties differ between the direction it was rolled in and each of the two transverse directions. This method is used to advantage in structural steel beams, and in aluminium aircraft skins. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
If orthotropic properties vary between points inside an object, it possesses both orthotropy and inhomogeneity. This suggests that orthotropy is the property of a point within an object rather than for the object as a whole (unless the object is homogeneous). The associated planes of symmetry are also defined for a small region around a point and do not necessarily have to be identical to the planes of symmetry of the whole object.
Orthotropic materials are a subset of anisotropic materials; their properties depend on the direction in which they are measured. Orthotropic materials have three planes/axes of symmetry. An isotropic material, in contrast, has the same properties in every direction.
It can be proved that a material having two planes of symmetry must have a third one. Isotropic materials have an infinite number of planes of symmetry. Transversely isotropic materials are special orthotropic materials that have one axis of symmetry (any other pair of axes that are perpendicular to the main one and orthogonal among themselves are also axes of symmetry). | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
One common example of transversely isotropic material with one axis of symmetry is a polymer reinforced by parallel glass or graphite fibers. The strength and stiffness of such a composite material will usually be greater in a direction parallel to the fibers than in the transverse direction, and the thickness direction usually has properties similar to the transverse direction. Another example would be a biological membrane, in which the properties in the plane of the membrane will be different from those in the perpendicular direction.
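The orthotropic description above is often used in practice through a 6×6 compliance matrix with nine independent elastic constants. The sketch below assembles one; the wood-like constants (in GPa) and the symmetry convention ν12/E1 = ν21/E2 are illustrative assumptions, not values from the text.

```python
# Build the 6x6 compliance matrix S (strain = S * stress, Voigt notation) for
# an orthotropic material from its 9 independent constants.
import numpy as np

def orthotropic_compliance(E1, E2, E3, nu12, nu13, nu23, G12, G13, G23):
    S = np.zeros((6, 6))
    S[0, 0], S[1, 1], S[2, 2] = 1 / E1, 1 / E2, 1 / E3
    S[0, 1] = S[1, 0] = -nu12 / E1   # assumes nu12/E1 == nu21/E2 (symmetry)
    S[0, 2] = S[2, 0] = -nu13 / E1
    S[1, 2] = S[2, 1] = -nu23 / E2
    S[3, 3], S[4, 4], S[5, 5] = 1 / G23, 1 / G13, 1 / G12
    return S

# Illustrative wood-like stiffnesses in GPa (axial E1 much larger than radial E3):
S = orthotropic_compliance(E1=11.0, E2=0.9, E3=0.5,
                           nu12=0.35, nu13=0.45, nu23=0.5,
                           G12=0.75, G13=0.7, G23=0.05)
assert np.allclose(S, S.T)  # compliance must be symmetric by construction
```

An isotropic material would need only two of these nine constants; the extra parameters are exactly what encodes the directional stiffness differences described above.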
Orthotropic material properties have been shown to provide a more accurate representation of bone's elastic symmetry and can also give information about the three-dimensional directionality of bone's tissue-level material properties. It is important to keep in mind that a material which is anisotropic on one length scale may be isotropic on another (usually larger) length scale. For instance, most metals are polycrystalline with very small grains. Each of the individual grains may be anisotropic, but if the material as a whole comprises many randomly oriented grains, then its measured mechanical properties will be an average of the properties over all possible orientations of the individual grains.
In material science, layered materials are solids with highly anisotropic bonding, in which two-dimensional sheets are internally strongly bonded, but only weakly bonded to adjacent layers. Owing to their distinctive structures, layered materials are often suitable for intercalation reactions. One large family of layered materials is the metal dichalcogenides. In such materials, the metal–chalcogen bonding is strong and covalent. These materials exhibit anisotropic transport properties such as thermal and electrical conductivity.
In material science, resilience is the ability of a material to absorb energy when it is deformed elastically, and release that energy upon unloading. Proof resilience is defined as the maximum energy that can be absorbed up to the elastic limit, without creating a permanent distortion. The modulus of resilience is defined as the maximum energy that can be absorbed per unit volume without creating a permanent distortion.
It can be calculated by integrating the stress–strain curve from zero to the elastic limit. In uniaxial tension, under the assumptions of linear elasticity, Ur = σy²/(2E) = σy·εy/2, where Ur is the modulus of resilience, σy is the yield strength, εy is the yield strain, and E is the Young's modulus. This analysis is not valid for non-linear elastic materials like rubber, for which the area under the curve up to the elastic limit must be used instead.
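For a linear-elastic material, the integral reduces to the closed form above. A small numerical sketch, using assumed steel-like values for σy and E (not values from the text):

```python
# Modulus of resilience for a linear-elastic material in uniaxial tension.

def modulus_of_resilience(yield_strength_pa: float, youngs_modulus_pa: float) -> float:
    """U_r = sigma_y^2 / (2 E), energy absorbed per unit volume (J/m^3)."""
    return yield_strength_pa**2 / (2.0 * youngs_modulus_pa)

sigma_y = 250e6   # assumed yield strength, Pa (roughly mild steel)
E = 200e9         # assumed Young's modulus, Pa

U_r = modulus_of_resilience(sigma_y, E)

# Equivalent form: sigma_y * epsilon_y / 2, with epsilon_y = sigma_y / E
epsilon_y = sigma_y / E
assert abs(U_r - sigma_y * epsilon_y / 2.0) < 1e-9
print(U_r)  # 156250.0 J/m^3
```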
In materials and electric battery research, cobalt oxide nanoparticles usually refers to particles of cobalt(II,III) oxide Co3O4 of nanometer size, with various shapes and crystal structures. Cobalt oxide nanoparticles have potential applications in lithium-ion batteries and electronic gas sensors.
In materials chemistry, a binary phase or binary compound is a chemical compound containing two different elements. Some binary phase compounds are molecular, e.g. carbon tetrachloride (CCl4). More typically, binary phase refers to extended solids. Famous examples include zinc sulfide, which contains zinc and sulfur, and tungsten carbide, which contains tungsten and carbon. Phases with higher degrees of complexity feature more elements, e.g. three elements in ternary phases, four elements in quaternary phases.
In materials chemistry, a quaternary phase is a chemical compound containing four elements. Some such compounds are molecular or ionic, examples being chlorodifluoromethane (CHClF2) and sodium bicarbonate (NaHCO3). More typically, quaternary phase refers to extended solids. A famous example is the family of yttrium barium copper oxide superconductors.
In materials engineering and metallurgy, hot hardness or red hardness (when a metal glows a dull red from the heat) is the hardness of a material at high temperatures. As the temperature of the material increases, hardness decreases, and at some point a drastic change in hardness occurs. The hardness at this point is termed the hot or red hardness of that material. Such changes can be seen in materials such as heat-treated alloys.
In materials engineering, suspension plasma spray (SPS) is a form of plasma spraying where the ceramic feedstock is dispersed in a liquid suspension before being injected into the plasma jet. By suspending the powder in a fluid, normal feeding problems are circumvented, allowing the deposition of finer microstructures through the use of finer powders.
In materials management, ABC analysis is an inventory categorisation technique. ABC analysis divides an inventory into three categories—"A items" with very tight control and accurate records, "B items" with less tightly controlled and good records, and "C items" with the simplest controls possible and minimal records. The ABC analysis provides a mechanism for identifying items that will have a significant impact on overall inventory cost, while also providing a mechanism for identifying different categories of stock that will require different management and controls. The ABC analysis suggests that inventories of an organization are not of equal value.
Thus, the inventory is grouped into three categories (A, B, and C) in order of their estimated importance. 'A' items are very important for an organization.
Because of the high value of these 'A' items, frequent value analysis is required. In addition to that, an organization needs to choose an appropriate order pattern (e.g. 'just-in-time') to avoid excess capacity. 'B' items are important, but of course less important than 'A' items and more important than 'C' items. Therefore, 'B' items are intergroup items. 'C' items are marginally important.
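The categorisation described above can be sketched as a few lines of code: rank items by annual consumption value, then split them by cumulative share of total value. The 70%/90% cut-offs and the item data below are illustrative conventions, not part of the method's definition.

```python
# A minimal ABC classification sketch based on cumulative value share.

def abc_classify(items, a_cut=0.70, b_cut=0.90):
    """items: dict of name -> annual consumption value.
    Returns dict of name -> 'A' | 'B' | 'C'."""
    total = sum(items.values())
    ranked = sorted(items.items(), key=lambda kv: kv[1], reverse=True)
    classes, cumulative = {}, 0.0
    for name, value in ranked:
        cumulative += value / total
        if cumulative <= a_cut:
            classes[name] = "A"   # tight control, accurate records
        elif cumulative <= b_cut:
            classes[name] = "B"   # intermediate control
        else:
            classes[name] = "C"   # simplest controls, minimal records
    return classes

# Hypothetical annual consumption values:
inventory = {"bearings": 70000, "belts": 15000, "gaskets": 8000,
             "bolts": 4000, "washers": 3000}
print(abc_classify(inventory))
```

Here a single item carries 70% of the value and lands in class A, illustrating the point that the inventory's items are far from equal in value.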
In materials modeled by linear elastic fracture mechanics (LEFM), crack extension occurs when the applied energy release rate G exceeds G_R, where G_R is the material's resistance to crack extension. Conceptually, G can be thought of as the energetic gain associated with an additional infinitesimal increment of crack extension, while G_R can be thought of as the energetic penalty of an additional infinitesimal increment of crack extension. At any moment in time, if G ≥ G_R, then crack extension is energetically favorable.
A complication to this process is that in some materials, G_R is not a constant value during the crack extension process. A plot of crack growth resistance G_R versus crack extension Δa is called a crack growth resistance curve, or R-curve. A plot of energy release rate G versus crack extension Δa for a particular loading configuration is called the driving force curve.
The nature of the applied driving force curve relative to the material's R-curve determines the stability of a given crack. The usage of R-curves in fracture analysis is a more complex, but more comprehensive, failure criterion compared to the common criterion that fracture occurs when G ≥ G_c, where G_c is simply a constant value called the critical energy release rate. An R-curve based failure analysis takes into account the notion that a material's resistance to fracture is not necessarily constant during crack growth. R-curves can alternatively be discussed in terms of stress intensity factors (K) rather than energy release rates (G), where the R-curves can be expressed as the fracture toughness (K_Ic, sometimes referred to as K_R) as a function of crack length a.
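The stability comparison described above can be condensed into a small predicate: growth that has begun (G ≥ G_R) stays stable only while the driving force rises more slowly with crack length than the resistance. This is a hedged sketch of the standard tangency condition, not a complete fracture analysis; all numbers used below are hypothetical.

```python
# Classify a crack's state from the R-curve criterion at one crack length.

def crack_state(G, dG_da, G_R, dGR_da):
    """G, G_R: driving force and resistance at the current crack length.
    dG_da, dGR_da: their slopes with respect to crack length."""
    if G < G_R:
        return "no growth"        # driving force below resistance
    if dG_da < dGR_da:
        return "stable growth"    # G rises more slowly than G_R
    return "unstable growth"      # instability (tangency) condition met

print(crack_state(G=90.0, dG_da=2.0, G_R=100.0, dGR_da=5.0))   # no growth
print(crack_state(G=100.0, dG_da=2.0, G_R=100.0, dGR_da=5.0))  # stable growth
print(crack_state(G=100.0, dG_da=6.0, G_R=100.0, dGR_da=5.0))  # unstable growth
```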
In materials science (specifically crystallography), cocrystals are "solids that are crystalline, single-phase materials composed of two or more different molecular or ionic compounds generally in a stoichiometric ratio which are neither solvates nor simple salts." A broader definition is that cocrystals "consist of two or more components that form a unique crystalline structure having unique properties." Several subclassifications of cocrystals exist. Cocrystals can encompass many types of compounds, including hydrates, solvates and clathrates, which represent the basic principle of host–guest chemistry. Hundreds of examples of cocrystallization are reported annually.
In materials science, functionally graded materials (FGMs) are characterized by gradual variation in composition and structure over their volume, resulting in corresponding changes in the properties of the material. The materials can be designed for specific functions and applications. Various approaches based on bulk (particulate) processing, preform processing, layer processing and melt processing are used to fabricate functionally graded materials.
In materials science, giant magnetoimpedance (GMI) is the effect that occurs in some materials where an external magnetic field causes a large variation in the electrical impedance of the material. It should not be confused with the separate physical phenomenon of giant magnetoresistance.
In materials science and continuum mechanics, viscoelasticity is the property of materials that exhibit both viscous and elastic characteristics when undergoing deformation. Viscous materials, like water, resist shear flow and strain linearly with time when a stress is applied. Elastic materials strain when stretched and immediately return to their original state once the stress is removed. Viscoelastic materials have elements of both of these properties and, as such, exhibit time-dependent strain. Whereas elasticity is usually the result of bond stretching along crystallographic planes in an ordered solid, viscosity is the result of the diffusion of atoms or molecules inside an amorphous material.
In materials science and engineering, the yield point is the point on a stress-strain curve that indicates the limit of elastic behavior and the beginning of plastic behavior. Below the yield point, a material will deform elastically and will return to its original shape when the applied stress is removed. Once the yield point is passed, some fraction of the deformation will be permanent and non-reversible and is known as plastic deformation. The yield strength or yield stress is a material property and is the stress corresponding to the yield point at which the material begins to deform plastically.
The yield strength is often used to determine the maximum allowable load in a mechanical component, since it represents the upper limit to forces that can be applied without producing permanent deformation. In some materials, such as aluminium, there is a gradual onset of non-linear behavior, and no precise yield point. In such a case, the offset yield point (or proof stress) is taken as the stress at which 0.2% plastic deformation occurs.
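The 0.2% offset construction can be sketched numerically: shift the elastic line by 0.002 strain and find where it crosses the stress-strain curve. The curve below is a toy elastic-perfectly-plastic model (E and the plateau stress are assumed values, not data from the text), so the proof stress it returns is simply the plateau.

```python
# Locate the 0.2% offset (proof stress) on a discretised stress-strain curve.
import numpy as np

E = 200e9        # assumed Young's modulus, Pa
sigma_y = 300e6  # plateau stress of the toy material, Pa

strain = np.linspace(0.0, 0.01, 10001)
stress = np.minimum(E * strain, sigma_y)   # toy elastic-perfectly-plastic curve

offset_line = E * (strain - 0.002)         # elastic slope shifted by 0.2% strain
# The first point where the curve falls to the offset line marks the crossing.
idx = np.argmax(stress <= offset_line)
proof_stress = stress[idx]
print(proof_stress / 1e6, "MPa")  # 300 MPa for this toy curve
```

For real test data the same crossing search applies, with the measured curve replacing the toy model and interpolation between samples for better precision.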
Yielding is a gradual failure mode which is normally not catastrophic, unlike ultimate failure. In solid mechanics, the yield point can be specified in terms of the three-dimensional principal stresses (σ1, σ2, σ3) with a yield surface or a yield criterion. A variety of yield criteria have been developed for different materials.
In materials science and materials engineering, uranium metallurgy is the study of the physical and chemical behavior of uranium and its alloys. Commercial-grade uranium can be produced through the reduction of uranium halides with alkali or alkaline earth metals. Uranium metal can also be made through electrolysis of KUF5 or UF4 dissolved in molten CaCl2 and NaCl. Very pure uranium can be produced through the thermal decomposition of uranium halides on a hot filament.
The uranium isotope 235U is used as the fuel for nuclear reactors and nuclear weapons. It is the only isotope existing in nature to any appreciable extent that is fissile, that is, fissionable by thermal neutrons. The isotope 238U is also important because it absorbs neutrons to produce a radioactive isotope that subsequently decays to the isotope 239Pu (plutonium), which also is fissile. Uranium in its natural state comprises just 0.71% 235U and 99.3% 238U, and the main focus of uranium metallurgy is the enrichment of uranium through isotope separation.
In materials science and mathematics, functionally graded elements are elements used in finite element analysis. They can be used to describe a functionally graded material.
In materials science and metallurgy, toughness is the ability of a material to absorb energy and plastically deform without fracturing. Toughness is the strength with which the material opposes rupture. One definition of material toughness is the amount of energy per unit volume that a material can absorb before rupturing. This measure of toughness is different from that used for fracture toughness, which describes the capacity of materials to resist fracture. Toughness requires a balance of strength and ductility.
In materials science and molecular biology, thermostability is the ability of a substance to resist irreversible change in its chemical or physical structure, often by resisting decomposition or polymerization, at a high relative temperature. Thermostable materials may be used industrially as fire retardants. A thermostable plastic, an uncommon and unconventional term, is more likely to refer to a thermosetting plastic that cannot be reshaped when heated than to a thermoplastic that can be remelted and recast. Thermostability is also a property of some proteins; a thermostable protein is one that resists changes in its structure due to applied heat.
In materials science and soil mechanics, a slip line field or slip line field theory is a technique often used to analyze the stresses and forces involved in the major deformation of metals or soils. In essence, in some problems, such as plane-strain and plane-stress elastic-plastic problems, the elastic part of the material prevents unrestrained plastic flow, but in many metal-forming processes, such as rolling, drawing, forging, etc., large unrestricted plastic flow occurs except in many small elastic zones. In effect, we are concerned with a rigid-plastic material under conditions of plane strain. It turns out that the simplest way of solving the stress equations is to express them in terms of a coordinate system that lies along potential slip (or failure) surfaces. It is for this reason that this type of analysis is termed slip line analysis or the theory of slip line fields in the literature.
In materials science and solid mechanics, Poisson's ratio ν (nu) is a measure of the Poisson effect, the deformation (expansion or contraction) of a material in directions perpendicular to the specific direction of loading. The value of Poisson's ratio is the negative of the ratio of transverse strain to axial strain. For small values of these changes, ν is the amount of transverse elongation divided by the amount of axial compression. Most materials have Poisson's ratio values ranging between 0.0 and 0.5.
For soft materials, such as rubber, where the bulk modulus is much higher than the shear modulus, Poisson's ratio is near 0.5. For open-cell polymer foams, Poisson's ratio is near zero, since the cells tend to collapse in compression. Many typical solids have Poisson's ratios in the range of 0.2–0.3. The ratio is named after the French mathematician and physicist Siméon Poisson.
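As a minimal numerical sketch of the definition above (the specimen strains are made up for illustration, not taken from the text):

```python
# Poisson's ratio from measured strains: nu = -(transverse strain)/(axial strain).

def poissons_ratio(transverse_strain: float, axial_strain: float) -> float:
    """Negative ratio of transverse to axial strain (dimensionless)."""
    return -transverse_strain / axial_strain

# A bar stretched axially by 0.10% that contracts laterally by 0.03%:
nu = poissons_ratio(transverse_strain=-0.0003, axial_strain=0.0010)
print(nu)  # ~0.3, within the typical 0.2-0.3 range for many solids
```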
In materials science and solid mechanics, biaxial tensile testing is a versatile technique to address the mechanical characterization of planar materials. It is a generalized form of tensile testing in which the material sample is simultaneously stressed along two perpendicular axes. Typical materials tested in biaxial configuration include metal sheets, silicone elastomers, composites, thin films, textiles and biological soft tissues.
In materials science and solid mechanics, residual stresses are stresses that remain in a solid material after the original cause of the stresses has been removed. Residual stress may be desirable or undesirable. For example, laser peening imparts deep beneficial compressive residual stresses into metal components such as turbine engine fan blades, and it is used in toughened glass to allow for large, thin, crack- and scratch-resistant glass displays on smartphones. However, unintended residual stress in a designed structure may cause it to fail prematurely.
Residual stresses can result from a variety of mechanisms including inelastic (plastic) deformations, temperature gradients (during thermal cycle) or structural changes (phase transformation). Heat from welding may cause localized expansion, which is taken up during welding by either the molten metal or the placement of parts being welded. When the finished weldment cools, some areas cool and contract more than others, leaving residual stresses. Another example occurs during semiconductor fabrication and microsystem fabrication when thin film materials with different thermal and crystalline properties are deposited sequentially under different process conditions. The stress variation through a stack of thin film materials can be very complex and can vary between compressive and tensile stresses from layer to layer.
In materials science, the flow stress, typically denoted Yf (or σf), is defined as the instantaneous value of stress required to continue plastically deforming a material - to keep it flowing. It is most commonly, though not exclusively, used in reference to metals. On a stress-strain curve, the flow stress can be found anywhere within the plastic regime; more explicitly, a flow stress can be found for any value of strain between and including the yield point (σy) and excluding fracture (σF): σy ≤ Yf < σF. The flow stress changes as deformation proceeds and usually increases as strain accumulates due to work hardening, although the flow stress could decrease due to any recovery process.
In continuum mechanics, the flow stress for a given material will vary with changes in temperature T, strain ε, and strain-rate ε̇; therefore it can be written as some function of those properties: Yf = f(ε, ε̇, T). The exact equation to represent flow stress depends on the particular material and plasticity model being used. Hollomon's equation is commonly used to represent the behavior seen in a stress-strain plot during work hardening: Yf = K·εp^n, where Yf is the flow stress, K is a strength coefficient, εp is the plastic strain, and n is the strain hardening exponent. Note that this is an empirical relation and does not model the relation at other temperatures or strain-rates (though the behavior may be similar).
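Hollomon's relation is simple enough to evaluate directly. The K and n values below are assumed, in the range often quoted for annealed low-carbon steel, not taken from the text:

```python
# Flow stress via Hollomon's work-hardening relation: Yf = K * eps_p**n.

def flow_stress(plastic_strain: float, K: float, n: float) -> float:
    """Flow stress (Pa) at a given plastic strain, per Hollomon's equation."""
    return K * plastic_strain**n

K = 530e6   # assumed strength coefficient, Pa
n = 0.26    # assumed strain-hardening exponent

for eps_p in (0.01, 0.05, 0.10, 0.20):
    print(f"eps_p={eps_p:.2f}  Yf={flow_stress(eps_p, K, n)/1e6:.0f} MPa")
```

The monotonically rising output illustrates work hardening: the stress needed to keep the material flowing grows as plastic strain accumulates.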
Generally, raising the temperature of an alloy above 0.5 Tm results in the plastic deformation mechanisms being controlled by strain-rate sensitivity, whereas at room temperature metals are generally strain-dependent. Other models may also include the effects of strain gradients. Independent of test conditions, the flow stress is also affected by: chemical composition, purity, crystal structure, phase constitution, microstructure, grain size, and prior strain. The flow stress is an important parameter in the fatigue failure of ductile materials.
Fatigue failure is caused by crack propagation in materials under a varying load, typically a cyclically varying load. The rate of crack propagation is inversely proportional to the flow stress of the material.
In materials science, MXenes are a class of two-dimensional inorganic compounds that consist of atomically thin layers of transition metal carbides, nitrides, or carbonitrides. MXenes accept a variety of hydrophilic terminations. MXenes were first reported in 2012.
In materials science, Ostwald's rule or Ostwald's step rule, conceived by Wilhelm Ostwald, describes the formation of polymorphs. The rule states that usually the less stable polymorph crystallizes first. Ostwald's rule is not a universal law but a common tendency observed in nature. This can be explained on the basis of irreversible thermodynamics, structural relationships, or a combined consideration of statistical thermodynamics and structural variation with temperature.
Unstable polymorphs more closely resemble the state in solution, and thus are kinetically advantaged. For example, out of hot water, metastable, fibrous crystals of benzamide appear first, only later to spontaneously convert to the more stable rhombic polymorph. Another example is magnesium carbonate, which more readily forms dolomite.
A dramatic example is phosphorus, which upon sublimation first forms the less stable white phosphorus, which only slowly polymerizes to the red allotrope. This is notably the case for the anatase polymorph of titanium dioxide, which, having a lower surface energy, is commonly the first phase to form by crystallisation from amorphous precursors or solutions despite being metastable, with rutile being the equilibrium phase at all temperatures and pressures.
In materials science, Schmid's law (also Schmid factor) describes the slip plane and slip direction of a stressed material that resolve the greatest shear stress. Schmid's law states that the critical resolved shear stress (τ) is equal to the stress applied to the material (σ) multiplied by the cosine of the angle between the loading axis and the normal to the glide plane (φ) and the cosine of the angle between the loading axis and the glide direction (λ). This can be expressed as τ = mσ, where m is known as the Schmid factor: m = cos(φ)·cos(λ). Both τ and σ are measured in units of stress, calculated the same way as pressure (force divided by area); φ and λ are angles. The factor is named after Erich Schmid, who coauthored a book with Walter Boas introducing the concept in 1935.
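The Schmid factor can be evaluated for any loading axis and slip system from the two direction cosines. The FCC slip system below (load along [001], slip plane (111), slip direction [-101]) is chosen purely for illustration:

```python
# Schmid factor m = cos(phi) * cos(lambda) from crystallographic directions.
import numpy as np

def schmid_factor(load_axis, plane_normal, slip_direction):
    l = np.asarray(load_axis, float)
    n = np.asarray(plane_normal, float)
    d = np.asarray(slip_direction, float)
    cos_phi = np.dot(l, n) / (np.linalg.norm(l) * np.linalg.norm(n))
    cos_lam = np.dot(l, d) / (np.linalg.norm(l) * np.linalg.norm(d))
    return cos_phi * cos_lam

# FCC example: load [001], plane (111), direction [-101] (which lies in the plane).
m = schmid_factor([0, 0, 1], [1, 1, 1], [-1, 0, 1])
print(m)  # (1/sqrt(3)) * (1/sqrt(2)) ~= 0.408
```

The resolved shear stress on that system is then τ = m·σ; slip begins on the system whose m is largest once τ reaches the critical resolved shear stress.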
In materials science, a Bingham plastic is a viscoplastic material that behaves as a rigid body at low stresses but flows as a viscous fluid at high stress. It is named after Eugene C. Bingham who proposed its mathematical form. It is used as a common mathematical model of mud flow in drilling engineering, and in the handling of slurries. A common example is toothpaste, which will not be extruded until a certain pressure is applied to the tube. It is then pushed out as a relatively coherent plug.
In materials science, a Frank–Read source is a mechanism explaining the generation of multiple dislocations in specific well-spaced slip planes in crystals when they are deformed. When a crystal is deformed, in order for slip to occur, dislocations must be generated in the material. This implies that, during deformation, dislocations must be primarily generated in these planes.
Cold working of metal increases the number of dislocations by the Frank–Read mechanism. Higher dislocation density increases yield strength and causes work hardening of metals. The mechanism of dislocation generation was proposed by, and named after, physicists Charles Frank and Thornton Read.
In materials science, a Lomer–Cottrell junction is a particular configuration of dislocations. When two perfect dislocations encounter each other along a slip plane, each perfect dislocation can split into two Shockley partial dislocations: a leading dislocation and a trailing dislocation. When the two leading Shockley partials combine, they form a separate dislocation with a Burgers vector that is not in the slip plane. This is the Lomer–Cottrell dislocation.
It is sessile and immobile in the slip plane, acting as a barrier against other dislocations in the plane. The trailing dislocations pile up behind the Lomer–Cottrell dislocation, and an ever greater force is required to push additional dislocations into the pile-up. For example, in an FCC lattice along {111} slip planes, each perfect dislocation dissociates into a leading and a trailing partial with Burgers vector magnitudes a/2 → a/6 + a/6. Combination of the leading dislocations gives a/6 + a/6 → a/3. The resulting dislocation lies along the crystal face, which is not a slip plane in FCC at room temperature.
In materials science, a composite laminate is an assembly of layers of fibrous composite materials which can be joined to provide required engineering properties, including in-plane stiffness, bending stiffness, strength, and coefficient of thermal expansion. The individual layers consist of high-modulus, high-strength fibers in a polymeric, metallic, or ceramic matrix material. Typical fibers used include cellulose, graphite, glass, boron, and silicon carbide, and some matrix materials are epoxies, polyimides, aluminium, titanium, and alumina.
Layers of different materials may be used, resulting in a hybrid laminate. The individual layers generally are orthotropic (that is, with principal properties in orthogonal directions) or transversely isotropic (with isotropic properties in the transverse plane) with the laminate then exhibiting anisotropic (with variable direction of principal properties), orthotropic, or quasi-isotropic properties. Quasi-isotropic laminates exhibit isotropic (that is, independent of direction) in-plane response but are not restricted to isotropic out-of-plane (bending) response. Depending upon the stacking sequence of the individual layers, the laminate may exhibit coupling between in-plane and out-of-plane response. An example of bending-stretching coupling is the presence of curvature developing as a result of in-plane loading.
In materials science, a dislocation or Taylor's dislocation is a linear crystallographic defect or irregularity within a crystal structure that contains an abrupt change in the arrangement of atoms. The movement of dislocations allow atoms to slide over each other at low stress levels and is known as glide or slip. The crystalline order is restored on either side of a glide dislocation but the atoms on one side have moved by one position. The crystalline order is not fully restored with a partial dislocation.
A dislocation defines the boundary between slipped and unslipped regions of material and as a result, must either form a complete loop, intersect other dislocations or defects, or extend to the edges of the crystal. A dislocation can be characterised by the distance and direction of movement it causes to atoms which is defined by the Burgers vector. Plastic deformation of a material occurs by the creation and movement of many dislocations.
The number and arrangement of dislocations influence many of the properties of materials. The two primary types of dislocations are sessile dislocations, which are immobile, and glissile dislocations, which are mobile. Examples of sessile dislocations are the stair-rod dislocation and the Lomer–Cottrell junction.
The two main types of mobile dislocations are edge and screw dislocations. Edge dislocations can be visualized as being caused by the termination of a plane of atoms in the middle of a crystal. In such a case, the surrounding planes are not straight, but instead bend around the edge of the terminating plane so that the crystal structure is perfectly ordered on either side.
This phenomenon is analogous to half of a piece of paper inserted into a stack of paper, where the defect in the stack is noticeable only at the edge of the half sheet. The theory describing the elastic fields of the defects was originally developed by Vito Volterra in 1907. In 1934, Egon Orowan, Michael Polanyi and G. I. Taylor proposed that the low stresses observed to produce plastic deformation, compared to theoretical predictions at the time, could be explained in terms of the theory of dislocations.
In materials science, a general rule of mixtures is a weighted mean used to predict various properties of a composite material. It provides a theoretical upper and lower bound on properties such as the elastic modulus, ultimate tensile strength, thermal conductivity, and electrical conductivity. In general there are two models, one for axial loading (Voigt model), and one for transverse loading (Reuss model). In general, for some material property E (often the elastic modulus), the rule of mixtures states that the overall property in the direction parallel to the fibers may be as high as Ec = f·Ef + (1 − f)·Em, where f = Vf/(Vf + Vm) is the volume fraction of the fibers, Ef is the material property of the fibers, and Em is the material property of the matrix. It is a common mistake to believe that this is the upper-bound modulus for Young's modulus.
The real upper-bound Young's modulus is larger than Ec given by this formula. Even if both constituents are isotropic, the real upper bound is Ec plus a term on the order of the square of the difference of the Poisson's ratios of the two constituents. The inverse rule of mixtures states that in the direction perpendicular to the fibers, the elastic modulus of a composite can be as low as Ec = (f/Ef + (1 − f)/Em)^(−1). If the property under study is the elastic modulus, this quantity is called the lower-bound modulus, and corresponds to transverse loading.
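The two bounds can be computed side by side. The fiber fraction and the glass/epoxy-like moduli below are illustrative assumptions, not values from the text:

```python
# Voigt (axial, upper-bound) and Reuss (transverse, lower-bound) estimates
# for the elastic modulus of a fiber composite.

def voigt(f, E_f, E_m):
    """Axial (parallel) rule of mixtures: E_c = f*E_f + (1-f)*E_m."""
    return f * E_f + (1 - f) * E_m

def reuss(f, E_f, E_m):
    """Transverse (inverse) rule of mixtures."""
    return 1.0 / (f / E_f + (1 - f) / E_m)

f = 0.60        # fiber volume fraction (assumed)
E_f = 72e9      # fiber modulus, Pa (roughly E-glass)
E_m = 3e9       # matrix modulus, Pa (roughly epoxy)

print(voigt(f, E_f, E_m) / 1e9)  # 44.4 GPa along the fibers
print(reuss(f, E_f, E_m) / 1e9)  # ~7.1 GPa transverse to the fibers
```

The large gap between the two estimates shows why unidirectional composites are so strongly anisotropic: the stiff fibers dominate the axial response, while the compliant matrix dominates the transverse one.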
In materials science, a grain boundary is the interface between two grains, or crystallites, in a polycrystalline material. Grain boundaries are two-dimensional defects in the crystal structure, and tend to decrease the electrical and thermal conductivity of the material. Most grain boundaries are preferred sites for the onset of corrosion and for the precipitation of new phases from the solid. They are also important to many of the mechanisms of creep. On the other hand, grain boundaries disrupt the motion of dislocations through a material, so reducing crystallite size is a common way to improve mechanical strength, as described by the Hall–Petch relationship.
In materials science, a matrix is a constituent of a composite material.
In materials science, a metal foam is a material or structure consisting of a solid metal (frequently aluminium) with gas-filled pores comprising a large portion of the volume. The pores can be sealed (closed-cell foam) or interconnected (open-cell foam). The defining characteristic of metal foams is a high porosity: typically only 5–25% of the volume is the base metal. The strength of the material is due to the square–cube law. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Metal foams typically retain some physical properties of their base material. Foam made from non-flammable metal remains non-flammable and can generally be recycled as the base material. Its coefficient of thermal expansion is similar while thermal conductivity is likely reduced.
In materials science, a metal matrix composite (MMC) is a composite material with fibers or particles dispersed in a metallic matrix, such as copper, aluminum, or steel. The secondary phase is typically a ceramic (such as alumina or silicon carbide) or another metal (such as steel). They are typically classified according to the type of reinforcement: short discontinuous fibers (whiskers), continuous fibers, or particulates.
There is some overlap between MMCs and cermets, with the latter typically consisting of less than 20% metal by volume. When at least three materials are present, it is called a hybrid composite. MMCs can have much higher strength-to-weight ratios, stiffness, and ductility than traditional materials, so they are often used in demanding applications. Compared with polymer matrix composites, MMCs typically offer higher thermal and electrical conductivity and better resistance to radiation, suiting them to harsh environments.
In materials science, a partial dislocation is a decomposed form of dislocation that occurs within a crystalline material. An extended dislocation is a dislocation that has dissociated into a pair of partial dislocations. The vector sum of the Burgers vectors of the partial dislocations is the Burgers vector of the extended dislocation.
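The vector-sum rule above can be checked numerically for the classic FCC dissociation of a perfect dislocation into two Shockley partials, (a/2)[110] → (a/6)[211] + (a/6)[12-1] (a is the lattice constant). The snippet also checks Frank's energy criterion, |b₁|² + |b₂|² < |b|², which is why the dissociation is energetically favorable.

```python
import numpy as np

a = 1.0  # lattice constant; units cancel in both checks
b_full = (a / 2) * np.array([1.0, 1.0, 0.0])   # perfect dislocation
b_p1 = (a / 6) * np.array([2.0, 1.0, 1.0])     # Shockley partial 1
b_p2 = (a / 6) * np.array([1.0, 2.0, -1.0])    # Shockley partial 2

# Burgers vectors of the partials sum to the extended dislocation's vector:
assert np.allclose(b_p1 + b_p2, b_full)

# Frank's rule: dissociation lowers the elastic energy (~ |b|^2):
assert b_p1 @ b_p1 + b_p2 @ b_p2 < b_full @ b_full
```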
In materials science, a polymer blend, or polymer mixture, is a member of a class of materials analogous to metal alloys, in which at least two polymers are blended together to create a new material with different physical properties.
In materials science, a polymer matrix composite (PMC) is a composite material composed of a variety of short or continuous fibers bound together by a matrix of organic polymers. PMCs are designed to transfer loads between fibers of a matrix. Some of the advantages with PMCs include their light weight, high resistance to abrasion and corrosion, and high stiffness and strength along the direction of their reinforcements. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
In materials science, a porous medium or a porous material is a material containing pores (voids). The skeletal portion of the material is often called the "matrix" or "frame". The pores are typically filled with a fluid (liquid or gas). The skeletal material is usually a solid, but structures like foams are often also usefully analyzed using the concept of porous media.
A porous medium is most often characterised by its porosity. Other properties of the medium (e.g. permeability, tensile strength, electrical conductivity, tortuosity) can sometimes be derived from the respective properties of its constituents (solid matrix and fluid), the medium's porosity, and its pore structure, but such a derivation is usually complex. Even the concept of porosity is only straightforward for a poroelastic medium.
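The porosity that characterises a porous medium is simply its void fraction, and for a dry sample it can be estimated from bulk and solid (skeletal) densities as φ = 1 − ρ_bulk/ρ_solid. A minimal sketch, using assumed sandstone-like numbers:

```python
def porosity_from_densities(rho_bulk, rho_solid):
    """Void fraction of a dry porous sample: phi = 1 - rho_bulk / rho_solid."""
    return 1.0 - rho_bulk / rho_solid

# Illustrative values (g/cm^3): quartz skeleton ~2.65, measured bulk ~2.0
phi = porosity_from_densities(rho_bulk=2.0, rho_solid=2.65)  # ~0.245
```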
Often both the solid matrix and the pore network (also known as the pore space) are continuous, so as to form two interpenetrating continua such as in a sponge. However, there is also a concept of closed porosity and effective porosity, i.e. the pore space accessible to flow. Many natural substances such as rocks and soil (e.g. aquifers, petroleum reservoirs), zeolites, biological tissues (e.g. bones, wood, cork), and man-made materials such as cements and ceramics can be considered as porous media.
Many of their important properties can only be rationalized by considering them to be porous media. The concept of porous media is used in many areas of applied science and engineering: filtration, mechanics (acoustics, geomechanics, soil mechanics, rock mechanics), engineering (petroleum engineering, bioremediation, construction engineering), geosciences (hydrogeology, petroleum geology, geophysics), biology and biophysics, and materials science. Two important current fields of application for porous materials are energy conversion and energy storage, where porous materials are essential for supercapacitors, (photo-)catalysis, fuel cells, and batteries.
In materials science, a precipitate-free zone (PFZ) refers to microscopic localized regions around grain boundaries that are free of precipitates (second-phase particles that separate from a supersaturated solid solution). It is a common phenomenon that arises in polycrystalline materials (crystalline materials with stochastically-oriented grains) where heterogeneous nucleation of precipitates is the dominant nucleation mechanism. This is because grain boundaries are high-energy surfaces that act as sinks for vacancies, causing regions adjacent to a grain boundary to be devoid of vacancies. As it is energetically favorable for heterogeneous nucleation to occur preferentially around defect-rich sites such as vacancies, nucleation of precipitates is impeded in the vacancy-free regions immediately adjacent to grain boundaries.
In materials science, a refractory (or refractory material) is a material that is resistant to decomposition by heat, pressure, or chemical attack, and retains strength and form at high temperatures. Refractories are polycrystalline, polyphase, inorganic, non-metallic, porous, and heterogeneous. They are typically composed of oxides, carbides, or nitrides of the following elements: silicon, aluminium, magnesium, calcium, boron, chromium and zirconium. ASTM C71 defines refractories as "non-metallic materials having those chemical and physical properties that make them applicable for structures, or as components of systems, that are exposed to environments above 1,000 °F (811 K; 538 °C)". Refractory materials are used in furnaces, kilns, incinerators, and reactors. Refractories are also used to make crucibles and moulds for casting glass and metals and for surfacing flame deflector systems for rocket launch structures. Today, the iron and steel industry and metal casting sectors use approximately 70% of all refractories produced.
In materials science, a sandwich-structured composite is a special class of composite materials that is fabricated by attaching two thin-but-stiff skins to a lightweight but thick core. The core material is normally low strength, but its higher thickness provides the sandwich composite with high bending stiffness with overall low density. Open- and closed-cell-structured foams like polyethersulfone, polyvinylchloride, polyurethane, polyethylene or polystyrene foams, balsa wood, syntactic foams, and honeycombs are commonly used core materials.
Sometimes, the honeycomb structure is filled with other foams for added strength. Open- and closed-cell metal foam can also be used as core materials. Laminates of glass or carbon fiber-reinforced thermoplastics, or mainly thermoset polymers (unsaturated polyesters, epoxies...), are widely used as skin materials. Sheet metal is also used as skin material in some cases. The core is bonded to the skins with an adhesive or, in the case of metal components, by brazing.
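The "high bending stiffness at low density" claim above can be made quantitative with the standard flexural rigidity expression for a symmetric sandwich beam (per unit width), D = E_s·t³/6 + E_s·t·d²/2 + E_c·c³/12, where t is the skin thickness, c the core thickness, and d = c + t the distance between skin centroids. All numbers below are illustrative assumptions (aluminium skins on a polymer foam core).

```python
def sandwich_rigidity(E_s, t, E_c, c):
    """Flexural rigidity per unit width (N*m) of a symmetric sandwich beam."""
    d = c + t  # distance between skin mid-planes
    return E_s * t**3 / 6 + E_s * t * d**2 / 2 + E_c * c**3 / 12

E_s = 70e9   # skin modulus, Pa (aluminium, assumed)
E_c = 0.1e9  # core modulus, Pa (stiff foam, assumed)

D_sandwich = sandwich_rigidity(E_s, t=1e-3, E_c=E_c, c=20e-3)
# Compare with the two skins fused into a single 2 mm solid sheet:
D_solid_plate = E_s * (2e-3) ** 3 / 12
```

Separating the same mass of skin material with a light core raises the rigidity by a factor of a few hundred here, which is the essence of the sandwich principle.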
In materials science, a single crystal (or single-crystal solid or monocrystalline solid) is a material in which the crystal lattice of the entire sample is continuous and unbroken to the edges of the sample, with no grain boundaries. The absence of the defects associated with grain boundaries can give monocrystals unique properties, particularly mechanical, optical and electrical, which can also be anisotropic, depending on the type of crystallographic structure. These properties, in addition to making some gems precious, are industrially used in technological applications, especially in optics and electronics. Because entropic effects favor the presence of some imperfections in the microstructure of solids, such as impurities, inhomogeneous strain and crystallographic defects such as dislocations, perfect single crystals of meaningful size are exceedingly rare in nature. The necessary laboratory conditions often add to the cost of production.
On the other hand, imperfect single crystals can reach enormous sizes in nature: several mineral species such as beryl, gypsum and feldspars are known to have produced crystals several meters across. The opposite of a single crystal is an amorphous structure, where the atomic position is limited to short-range order only. In between the two extremes exist polycrystalline phases, which are made up of a number of smaller crystals known as crystallites, and paracrystalline phases. Single crystals will usually have distinctive plane faces and some symmetry, where the angles between the faces dictate the ideal shape. Gemstones are often single crystals artificially cut along crystallographic planes to take advantage of refractive and reflective properties.
In materials science, a thermosetting polymer, often called a thermoset, is a polymer that is obtained by irreversibly hardening ("curing") a soft solid or viscous liquid prepolymer (resin). Curing is induced by heat or suitable radiation and may be promoted by high pressure, or mixing with a catalyst. Heat is not necessarily applied externally, but is often generated by the reaction of the resin with a curing agent (catalyst, hardener). Curing results in chemical reactions that create extensive cross-linking between polymer chains to produce an infusible and insoluble polymer network. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
The starting material for making thermosets is usually malleable or liquid prior to curing, and is often designed to be molded into the final shape. It may also be used as an adhesive. Once hardened, a thermoset cannot be melted for reshaping, in contrast to thermoplastic polymers which are commonly produced and distributed in the form of pellets, and shaped into the final product form by melting, pressing, or injection molding.
In materials science, advanced composite materials (ACMs) are materials generally characterized by unusually high-strength, high-stiffness (high modulus of elasticity) fibres bound together by weaker matrices. They are termed "advanced composite materials" in comparison to composite materials in common use, such as reinforced concrete, or even concrete itself. The high-strength fibers are also low-density while occupying a large fraction of the volume.
Advanced composites exhibit desirable physical and chemical properties that include light weight coupled with high stiffness (elasticity), and strength along the direction of the reinforcing fiber, dimensional stability, temperature and chemical resistance, flex performance, and relatively easy processing. Advanced composites are replacing metal components in many uses, particularly in the aerospace industry. Composites are classified according to their matrix phase. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
These classifications are polymer matrix composites (PMCs), ceramic matrix composites (CMCs), and metal matrix composites (MMCs). Also, materials within these categories are often called "advanced" if they combine the properties of high (axial, longitudinal) strength values and high (axial, longitudinal) stiffness values, with low weight, corrosion resistance, and in some cases special electrical properties. Advanced composite materials have broad, proven applications, in the aircraft, aerospace, and sports equipment sectors.
Even more specifically, ACMs are very attractive for aircraft and aerospace structural parts. ACMs have been developed for NASA's Advanced Space Transportation Program, armor protection for Army aviation and the Federal Aviation Administration of the USA, and high-temperature shafting for the Comanche helicopter. Additionally, ACMs have a decades-long history in military and government aerospace industries. However, much of the technology is new and not presented formally in secondary or undergraduate education, and the technology of advanced composites manufacture is continually evolving.
In materials science, an interstitial defect is a type of point crystallographic defect where an atom of the same or of a different type occupies an interstitial site in the crystal structure. When the atom is of the same type as those already present, it is known as a self-interstitial defect. Alternatively, small atoms in some crystals may occupy interstitial sites, such as hydrogen in palladium. Interstitials can be produced by bombarding a crystal with elementary particles having energy above the displacement threshold for that crystal, but they may also exist in small concentrations in thermodynamic equilibrium. The presence of interstitial defects can modify the physical and chemical properties of a material.
In materials science, asperity, defined as "unevenness of surface, roughness, ruggedness" (from the Latin asper—"rough"), has implications (for example) in physics and seismology. Smooth surfaces, even those polished to a mirror finish, are not truly smooth on a microscopic scale. They are rough, with sharp, rough or rugged projections, termed "asperities". Surface asperities exist across multiple scales, often in a self-affine or fractal geometry.
The fractal dimension of these structures has been correlated with the contact mechanics exhibited at an interface in terms of friction and contact stiffness. When two macroscopically smooth surfaces come into contact, initially they only touch at a few of these asperity points. These cover only a very small portion of the surface area.
Friction and wear originate at these points, and thus understanding their behavior becomes important when studying materials in contact. When the surfaces are subjected to a compressive load, the asperities deform through elastic and plastic modes, increasing the contact area between the two surfaces until the contact area is sufficient to support the load. The relationship between frictional interactions and asperity geometry is complex and poorly understood. It has been reported that an increased roughness may under certain circumstances result in weaker frictional interactions, while smoother surfaces may in fact exhibit high levels of friction owing to high levels of true contact. The Archard equation provides a simplified model of asperity deformation when materials in contact are subject to a force. Due to the ubiquitous presence of deformable asperities in self-affine hierarchical structures, the true contact area at an interface exhibits a linear relationship with the applied normal load.
In materials science, bulk density, also called apparent density or volumetric density, is a property of powders, granules, and other "divided" solids, especially used in reference to mineral components (soil, gravel), chemical substances, pharmaceutical ingredients, foodstuff, or any other masses of corpuscular or particulate matter (particles). Bulk density is defined as the mass of the many particles of the material divided by the total volume they occupy. The total volume includes particle volume, inter-particle void volume, and internal pore volume. Bulk density is not an intrinsic property of a material; it can change depending on how the material is handled. For example, a powder poured into a cylinder will have a particular bulk density; if the cylinder is disturbed, the powder particles will move and usually settle closer together, resulting in a higher bulk density. For this reason, the bulk density of powders is usually reported both as "freely settled" (or "poured") density and "tapped" density (where the tapped density refers to the bulk density of the powder after a specified compaction process, usually involving vibration of the container). In contrast, particle density is an intrinsic property of the solid and does not include the volume for voids between particles.
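The poured vs. tapped distinction above is easy to sketch in code. The Hausner ratio (tapped density divided by poured density), often used to characterise powder flowability, is included as well; all numbers are illustrative assumptions.

```python
def bulk_density(mass_g, volume_cm3):
    """Bulk density in g/cm^3: particle mass over total occupied volume."""
    return mass_g / volume_cm3

# Same 50 g of powder, before and after tapping (assumed volumes):
poured = bulk_density(mass_g=50.0, volume_cm3=100.0)  # 0.500 g/cm^3
tapped = bulk_density(mass_g=50.0, volume_cm3=80.0)   # 0.625 g/cm^3

# Hausner ratio: closer to 1 means the powder flows more freely.
hausner_ratio = tapped / poured  # 1.25
```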
In materials science, ceramic matrix composites (CMCs) are a subgroup of composite materials and a subgroup of ceramics. They consist of ceramic fibers embedded in a ceramic matrix. The fibers and the matrix both can consist of any ceramic material, whereby carbon and carbon fibers can also be regarded as a ceramic material.
In materials science, chemical force microscopy (CFM) is a variation of atomic force microscopy (AFM) which has become a versatile tool for characterization of materials surfaces. With AFM, structural morphology is probed using simple tapping or contact modes that utilize van der Waals interactions between tip and sample to maintain a constant probe deflection amplitude (constant force mode) or maintain height while measuring tip deflection (constant height mode). CFM, on the other hand, uses chemical interactions between a functionalized probe tip and the sample. The typical chemistry is a gold-coated tip and surface with R−SH thiols attached, where R represents the functional groups of interest.
CFM enables the ability to determine the chemical nature of surfaces, irrespective of their specific morphology, and facilitates studies of basic chemical bonding enthalpy and surface energy. Typically, CFM is limited by thermal vibrations within the cantilever holding the probe. This limits force measurement resolution to ~1 pN which is still very suitable considering weak COOH/CH3 interactions are ~20 pN per pair. Hydrophobicity is used as the primary example throughout this consideration of CFM, but certainly any type of bonding can be probed with this method.
In materials science, creep (sometimes called cold flow) is the tendency of a solid material to undergo slow deformation while subject to persistent mechanical stresses. It can occur as a result of long-term exposure to high levels of stress that are still below the yield strength of the material. Creep is more severe in materials that are subjected to heat for long periods and generally increases as they near their melting point. The rate of deformation is a function of the material's properties, exposure time, exposure temperature and the applied structural load.
Depending on the magnitude of the applied stress and its duration, the deformation may become so large that a component can no longer perform its function – for example creep of a turbine blade could cause the blade to contact the casing, resulting in the failure of the blade. Creep is usually of concern to engineers and metallurgists when evaluating components that operate under high stresses or high temperatures. Creep is a deformation mechanism that may or may not constitute a failure mode. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
For example, moderate creep in concrete is sometimes welcomed because it relieves tensile stresses that might otherwise lead to cracking. Unlike brittle fracture, creep deformation does not occur suddenly upon the application of stress. Instead, strain accumulates as a result of long-term stress. Therefore, creep is a "time-dependent" deformation.
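The dependence of creep rate on stress and temperature described above is commonly modeled with a power-law (Norton) expression with Arrhenius temperature dependence, strain_rate = A·σⁿ·exp(−Q/(R·T)). This is a hedged sketch: A, n, and Q below are illustrative assumptions, not data for a real alloy.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def creep_rate(sigma, T, A=1e-10, n=5.0, Q=300e3):
    """Steady-state creep strain rate (1/s) for stress sigma (MPa) and
    absolute temperature T (K); A, n, Q are assumed material constants."""
    return A * sigma**n * math.exp(-Q / (R * T))

# Creep accelerates strongly with temperature at fixed stress:
slow = creep_rate(sigma=100.0, T=800.0)
fast = creep_rate(sigma=100.0, T=1000.0)
```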
In materials science, critical resolved shear stress (CRSS) is the component of shear stress, resolved in the direction of slip, necessary to initiate slip in a grain. Resolved shear stress (RSS) is the shear component of an applied tensile or compressive stress resolved along a slip plane that is other than perpendicular or parallel to the stress axis. The RSS is related to the applied stress by a geometrical factor, m, typically the Schmid factor: τ_RSS = σ_app m = σ_app (cos φ cos λ), where σ_app is the magnitude of the applied tensile stress, φ is the angle between the normal of the slip plane and the direction of the applied force, and λ is the angle between the slip direction and the direction of the applied force. The Schmid factor is most applicable to FCC single-crystal metals, but for polycrystalline metals the Taylor factor has been shown to be more accurate.
The CRSS is the value of resolved shear stress at which yielding of the grain occurs, marking the onset of plastic deformation. CRSS, therefore, is a material property and is not dependent on the applied load or grain orientation. The CRSS is related to the observed yield strength of the material by the maximum value of the Schmid factor: σ_y = τ_CRSS / m_max. CRSS is a constant for each crystal family. Hexagonal close-packed crystals, for example, have three main slip families (basal, prismatic, and pyramidal) with different values for the critical resolved shear stress.
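The Schmid factor cos φ cos λ above can be computed from vectors. A sketch for the standard FCC example of [001] loading on the (111) plane with slip along [-101]:

```python
import numpy as np

def schmid_factor(load_dir, plane_normal, slip_dir):
    """m = cos(phi) * cos(lambda) for a given loading axis and slip system."""
    load = load_dir / np.linalg.norm(load_dir)
    n = plane_normal / np.linalg.norm(plane_normal)
    s = slip_dir / np.linalg.norm(slip_dir)
    return abs(load @ n) * abs(load @ s)

# FCC example: tensile axis [001], slip plane (111), slip direction [-101]
m = schmid_factor(np.array([0.0, 0.0, 1.0]),
                  np.array([1.0, 1.0, 1.0]),
                  np.array([-1.0, 0.0, 1.0]))
# m = (1/sqrt(3)) * (1/sqrt(2)) ~ 0.408

# Applied stress needed to reach tau_CRSS = 1 (arbitrary units):
sigma_yield = 1.0 / m
```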
In materials science, cross slip is the process by which a screw dislocation moves from one slip plane to another due to local stresses. It allows non-planar movement of screw dislocations. Non-planar movement of edge dislocations is achieved through climb. Since the Burgers vector of a perfect screw dislocation is parallel to the dislocation line, it has an infinite number of possible slip planes (planes containing the dislocation line and the Burgers vector), unlike an edge or mixed dislocation, which has a unique slip plane.
Therefore, a screw dislocation can glide or slip along any plane that contains its Burgers vector. During cross slip, the screw dislocation switches from gliding along one slip plane to gliding along a different slip plane, called the cross-slip plane. The cross slip of moving dislocations can be seen by transmission electron microscopy.
In materials science, direct laser interference patterning (DLIP) is a laser-based technology that uses the physical principle of interference of high-intensity coherent laser beams to produce functional periodic microstructures. In order to obtain interference, the beam is divided by a beam splitter, special prisms, or other elements. The beams are then folded together to form an interference pattern. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Sufficiently high laser power can thus result in the removal of material at the interference maxima by ablation, leaving the material intact at the minima. In this way, a repeatable pattern can be permanently fixed on the surface of a given material. DLIP can be applied to almost any material and can change the properties of surfaces in many technological areas with regard to electrical and optical properties, tribology (friction and wear), light absorption and wettability (which can be related, for example, to hygienic properties).
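The pitch of the periodic microstructure produced by two interfering beams follows the standard two-beam interference result Λ = λ/(2 sin θ), where θ is each beam's angle to the surface normal. The wavelength and angle below are illustrative assumptions.

```python
import math

def interference_period(wavelength_nm, half_angle_deg):
    """Spatial period (nm) of a two-beam interference pattern:
    Lambda = lambda / (2 * sin(theta))."""
    return wavelength_nm / (2 * math.sin(math.radians(half_angle_deg)))

# Green laser (532 nm) with beams at 15 degrees to the normal:
period = interference_period(wavelength_nm=532.0, half_angle_deg=15.0)
# roughly 1 micrometre pitch; steeper angles give finer structures
```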
In materials science, disappearing polymorphs (or perverse polymorphism) describes a phenomenon in which a seemingly stable crystal structure is suddenly unable to be produced, instead transforming into a polymorph, or differing crystal structure with the same chemical composition, during nucleation. Sometimes the resulting transformation is extremely hard or impractical to reverse, because the new polymorph may be more stable. It is hypothesized that contact with a single microscopic seed crystal of the new polymorph can be enough to start a chain reaction causing the transformation of a much larger mass of material. Widespread contamination with such microscopic seed crystals may lead to the impression that the original polymorph has "disappeared."
This is of concern to both the pharmaceutical and computer hardware industries, where disappearing polymorphs can ruin the effectiveness of products and make it impossible to manufacture the original product if there is any contamination. There have been cases in which a laboratory grew crystals of a particular structure but, on attempting to reproduce them, obtained a new crystal structure rather than the original.
The drug paroxetine was subject to a lawsuit that hinged on such a pair of polymorphs, and multiple life-saving drugs, such as ritonavir, have been recalled due to unexpected polymorphism. Although it may seem like a so-called disappearing polymorph has disappeared for good, it is believed that it is always possible in principle to reconstruct the original polymorph, though doing so may be impractically difficult. Disappearing polymorphs are generally metastable forms that are replaced by a more stable form. It is hypothesized that "unintentional seeding" may also be responsible for the phenomenon in which it often becomes easier to crystallize synthetic compounds over time.
In materials science, dispersion is the fraction of atoms of a material exposed to the surface. In general, D = N_S/N_T, where D is the dispersion, N_S is the number of surface atoms, and N_T is the total number of atoms of the material. It is an important concept in heterogeneous catalysis, since only atoms exposed to the surface can affect catalytic surface reactions. Dispersion increases with decreasing crystallite size and approaches unity at a crystallite diameter of about 1 nm.
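The D = N_S/N_T definition above can be illustrated with a toy model: for a simple-cubic crystallite of n × n × n atoms, the interior atoms form an (n − 2)³ cube and everything else is surface. This geometric idealization is an assumption for illustration, not a model of any real catalyst particle.

```python
def dispersion(n):
    """Surface-atom fraction D = N_S / N_T for an n x n x n cubic crystallite."""
    total = n**3
    interior = max(n - 2, 0) ** 3
    return (total - interior) / total

# Dispersion rises toward 1 as the crystallite shrinks:
d_large = dispersion(100)  # ~0.06 for a large particle
d_small = dispersion(3)    # ~0.96: nearly every atom is on the surface
```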
In materials science, effective medium approximations (EMA) or effective medium theory (EMT) pertain to analytical or theoretical modeling that describes the macroscopic properties of composite materials. EMAs or EMTs are developed from averaging the multiple values of the constituents that directly make up the composite material. At the constituent level, the values of the materials vary and are inhomogeneous. Precise calculation of the many constituent values is nearly impossible. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
However, theories have been developed that can produce acceptable approximations which in turn describe useful parameters, including the effective permittivity and permeability of the materials as a whole. In this sense, effective medium approximations are descriptions of a medium (composite material) based on the properties and the relative fractions of its components, derived from calculations and effective medium theory. There are two widely used formulae. Effective permittivity and permeability are averaged dielectric and magnetic characteristics of a microinhomogeneous medium.
They both were derived in quasi-static approximation when the electric field inside a mixture particle may be considered as homogeneous. So, these formulae can not describe the particle size effect. Many attempts were undertaken to improve these formulae.
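One of the two classic mixing formulae alluded to above is the Maxwell Garnett approximation for spherical inclusions of permittivity ε_i at volume fraction f in a host of permittivity ε_m: (ε_eff − ε_m)/(ε_eff + 2ε_m) = f·(ε_i − ε_m)/(ε_i + 2ε_m). Solved for ε_eff, it can be sketched as follows (the permittivity values are illustrative):

```python
def maxwell_garnett(eps_m, eps_i, f):
    """Effective permittivity of spherical inclusions (eps_i, volume
    fraction f) in a host (eps_m), via the Maxwell Garnett formula."""
    beta = (eps_i - eps_m) / (eps_i + 2 * eps_m)
    return eps_m * (1 + 2 * f * beta) / (1 - f * beta)

# 20% high-permittivity inclusions in a low-permittivity host (assumed values):
eps_eff = maxwell_garnett(eps_m=1.0, eps_i=10.0, f=0.2)  # ~1.53
```

As a sanity check, the formula reduces to the host permittivity at f = 0 and to the inclusion permittivity at f = 1, consistent with its quasi-static derivation.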
In materials science, environmental stress fracture or environment assisted fracture is the generic name given to premature failure under the influence of tensile stresses and harmful environments of materials such as metals and alloys, composites, plastics and ceramics. Metals and alloys exhibit phenomena such as stress corrosion cracking, hydrogen embrittlement, liquid metal embrittlement and corrosion fatigue all coming under this category. Environments such as moist air, sea water and corrosive liquids and gases cause environmental stress fracture. Metal matrix composites are also susceptible to many of these processes. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Plastics and plastic-based composites may suffer swelling, debonding and loss of strength when exposed to organic fluids and other corrosive environments, such as acids and alkalies. Under the influence of stress and environment, many structural materials, particularly the high-specific strength ones become brittle and lose their resistance to fracture.
While their fracture toughness remains unaltered, their threshold stress intensity factor for crack propagation may be considerably lowered. Consequently, they become prone to premature fracture because of sub-critical crack growth. This article aims to give a brief overview of the various degradation processes mentioned above.
In materials science, fast ion conductors are solid conductors with highly mobile ions. These materials are important in the area of solid state ionics, and are also known as solid electrolytes and superionic conductors. These materials are useful in batteries and various sensors. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Fast ion conductors are used primarily in solid oxide fuel cells. As solid electrolytes they allow the movement of ions without the need for a liquid or soft membrane separating the electrodes. The phenomenon relies on the hopping of ions through an otherwise rigid crystal structure.
In materials science, fatigue is the initiation and propagation of cracks in a material due to cyclic loading. Once a fatigue crack has initiated, it grows a small amount with each loading cycle, typically producing striations on some parts of the fracture surface. The crack will continue to grow until it reaches a critical size, which occurs when the stress intensity factor of the crack exceeds the fracture toughness of the material, producing rapid propagation and typically complete fracture of the structure. Fatigue has traditionally been associated with the failure of metal components which led to the term metal fatigue.
In the nineteenth century, the sudden failing of metal railway axles was thought to be caused by the metal crystallising because of the brittle appearance of the fracture surface, but this has since been disproved. Most materials, such as composites, plastics and ceramics, seem to experience some sort of fatigue-related failure. To aid in predicting the fatigue life of a component, fatigue tests are carried out using coupons to measure the rate of crack growth by applying constant amplitude cyclic loading and averaging the measured growth of a crack over thousands of cycles.
However, there are also a number of special cases that need to be considered where the rate of crack growth is significantly different compared to that obtained from constant amplitude testing, such as the reduced rate of growth that occurs for small loads near the threshold or after the application of an overload, and the increased rate of crack growth associated with short cracks or after the application of an underload. If the loads are above a certain threshold, microscopic cracks will begin to initiate at stress concentrations such as holes, persistent slip bands (PSBs), composite interfaces or grain boundaries in metals. The stress values that cause fatigue damage are typically much less than the yield strength of the material.
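Constant-amplitude crack-growth data of the kind described above are commonly fitted with the Paris law, da/dN = C·(ΔK)^m, with ΔK = Y·Δσ·√(πa). The sketch below integrates it numerically to estimate cycles to grow a crack between two lengths; C, m, and the geometry factor Y are illustrative assumptions, not values for a real material.

```python
import math

def cycles_to_grow(a0, af, d_sigma, C=1e-12, m=3.0, Y=1.0, step=1000):
    """Loading cycles for a crack to grow from a0 to af (meters) under a
    stress range d_sigma (MPa), using the Paris law da/dN = C * dK**m
    with dK in MPa*sqrt(m). Forward-Euler integration in blocks of `step`."""
    a, cycles = a0, 0
    while a < af:
        dK = Y * d_sigma * math.sqrt(math.pi * a)  # stress intensity range
        a += step * C * dK**m                      # growth over `step` cycles
        cycles += step
    return cycles

# Growing a 1 mm crack to 10 mm at a 100 MPa stress range (~7.8e6 cycles):
n = cycles_to_grow(a0=1e-3, af=10e-3, d_sigma=100.0)
```

Doubling the stress range cuts the life by roughly 2³ = 8 here, which is why small load increases can shorten fatigue life dramatically.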
In materials science, fracture toughness is the critical stress intensity factor of a sharp crack where propagation of the crack suddenly becomes rapid and unlimited. A component's thickness affects the constraint conditions at the tip of a crack, with thin components having plane stress conditions and thick components having plane strain conditions. Plane strain conditions give the lowest fracture toughness value, which is a material property. The critical value of stress intensity factor in mode I loading measured under plane strain conditions is known as the plane strain fracture toughness, denoted K_Ic.
When a test fails to meet the thickness and other test requirements that are in place to ensure plane strain conditions, the fracture toughness value produced is given the designation $K_{\text{c}}$. Fracture toughness is a quantitative way of expressing a material's resistance to crack propagation, and standard values for a given material are generally available. Slow self-sustaining crack propagation, known as stress corrosion cracking, can occur in a corrosive environment above the threshold $K_{\text{Iscc}}$ and below $K_{\text{Ic}}$. Small increments of crack extension can also occur during fatigue crack growth, which, after repeated loading cycles, can gradually grow a crack until final failure occurs by exceeding the fracture toughness.
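Setting the mode I stress intensity K = Yσ√(πa) equal to the fracture toughness gives the stress at which a given crack becomes critical. A minimal sketch of this rearrangement; the geometry factor Y = 1 and the example numbers are illustrative assumptions, not data from the text:

```python
import math

def critical_stress(K_Ic, crack_length, Y=1.0):
    """Stress at which a crack becomes critical, from K = Y*σ*sqrt(pi*a)
    set equal to K_Ic. K_Ic in MPa·sqrt(m), crack_length in m → MPa."""
    return K_Ic / (Y * math.sqrt(math.pi * crack_length))

# E.g. a material with K_Ic = 50 MPa·sqrt(m) and a 2 mm crack:
sigma_c = critical_stress(50.0, 2e-3)   # ≈ 630 MPa
```

Longer cracks lower the critical stress, which is why flaw size limits are central to damage-tolerant design.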
In materials science, fragile matter is a granular material that is jammed solid. Everyday examples include beans getting stuck in a hopper in a whole food shop, or milk powder getting jammed in an upside-down bottle. The term was coined by physicist Michael Cates, who asserts that such circumstances warrant a new class of materials.
The jamming thus described can be unjammed by mechanical means, such as tapping or shaking the container, or poking it with a stick. Cates proposed that such jammed systems differ from ordinary solids in that if the direction of the applied stress changes, the jam will break up. Sometimes the change of direction required is very small.
Perhaps the simplest example is a pile of sand, which is solid in the sense that the pile sustains its shape despite the force of gravity. Slight tilting or vibration is enough to enable the grains to shift, collapsing the pile. Not all jammed systems are fragile; foam, for example, is not.
Shaving foam is jammed because the bubbles are tightly packed together under the isotropic stress imposed by atmospheric pressure. If it were a fragile solid, it would respond plastically to shear stress, however small. But because bubbles deform, foam actually responds elastically provided that the stress is below a threshold value. Fragile matter is also not to be confused with cases in which the particles have adhered to one another ("caking").
In materials science, friability (FRY-ə-BIL-ə-tee), the condition of being friable, describes the tendency of a solid substance to break into smaller pieces under duress or contact, especially by rubbing. The opposite of friable is indurate. Substances that are designated hazardous, such as asbestos or crystalline silica, are often said to be friable if small particles are easily dislodged and become airborne, and hence respirable (able to enter human lungs), thereby posing a health hazard. Tougher substances, such as concrete, may also be mechanically ground down and reduced to finely divided mineral dust.
However, such substances are not generally considered friable because of the degree of difficulty involved in breaking the substance's chemical bonds through mechanical means. Some substances, such as polyurethane foams, show an increase in friability with exposure to ultraviolet radiation, as in sunlight. Friable is sometimes used metaphorically to describe "brittle" personalities who can be "rubbed" by seemingly minor stimuli to produce extreme emotional responses.
In materials science, galfenol is the general term for an alloy of iron and gallium. The name was first given to iron-gallium alloys by United States Navy researchers in 1998 when they discovered that adding gallium to iron could amplify iron's magnetostrictive effect up to tenfold. Galfenol is of interest to sonar researchers because magnetostrictor materials are used to detect sound, and amplifying the magnetostrictive effect could lead to better sensitivity of sonar detectors. Galfenol is also proposed for vibrational energy harvesting, actuators for precision machine tools, active anti-vibration systems, and anti-clogging devices for sifting screens and spray nozzles.
Galfenol is machinable and can be produced in sheet and wire form. In 2009, scientists from Virginia Polytechnic Institute and State University and the National Institute of Standards and Technology (NIST) used neutron beams to determine the structure of galfenol. They determined that the addition of gallium changes the lattice structure of the iron atoms from regular cubic cells to one in which the faces of some of the cells become slightly rectangular. The elongated cells tend to clump together, forming localized clusters within the material.
These clumps have been described by Peter Gehring of the NIST Center for Neutron Research as "something like raisins within a cake". It has also been proposed that there is an intrinsic mechanism generating this enhanced magnetostriction, which has its origins in the electronic structure of the material as described by density functional theory. It is understood that the addition of gallium to pure iron alters the electronic structure and atomic arrangements in the material in such a way as to enhance the material's magnetoelastic constant.
In materials science, grain growth is the increase in size of grains (crystallites) in a material at high temperature. This occurs when recovery and recrystallisation are complete and further reduction in the internal energy can only be achieved by reducing the total area of grain boundary. The term is commonly used in metallurgy but is also used in reference to ceramics and minerals. The behavior of grain growth is analogous to the coarsening behavior of grains, which implies that grain growth and coarsening may be dominated by the same physical mechanism.
In materials science, grain-boundary strengthening (or Hall–Petch strengthening) is a method of strengthening materials by changing their average crystallite (grain) size. It is based on the observation that grain boundaries are insurmountable borders for dislocations and that the number of dislocations within a grain has an effect on how stress builds up in the adjacent grain, which will eventually activate dislocation sources and thus enable deformation in the neighbouring grain as well. By changing grain size, one can influence the number of dislocations piled up at the grain boundary and the yield strength. For example, heat treatment after plastic deformation and changing the rate of solidification are ways to alter grain size.
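The grain-size dependence described here is usually written as the Hall–Petch relation, σ_y = σ_0 + k_y/√d. A small sketch with illustrative constants (not measured values from the text):

```python
import math

def hall_petch_yield(sigma0, k_y, grain_size):
    """Hall–Petch relation: σ_y = σ0 + k_y / sqrt(d).
    sigma0 (MPa): friction stress; k_y (MPa·sqrt(m)): strengthening
    coefficient; grain_size d in metres."""
    return sigma0 + k_y / math.sqrt(grain_size)

# Illustrative constants only:
coarse = hall_petch_yield(sigma0=70.0, k_y=0.74, grain_size=100e-6)  # 100 µm grains
fine   = hall_petch_yield(sigma0=70.0, k_y=0.74, grain_size=1e-6)    # 1 µm grains
```

Refining the grains by two orders of magnitude raises the predicted yield strength severalfold, which is the essence of grain-boundary strengthening.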
In materials science, hardness (antonym: softness) is a measure of the resistance to localized plastic deformation induced by either mechanical indentation or abrasion. In general, different materials differ in their hardness; for example, hard metals such as titanium and beryllium are harder than soft metals such as sodium and metallic tin, or wood and common plastics. Macroscopic hardness is generally characterized by strong intermolecular bonds, but the behavior of solid materials under force is complex; therefore, hardness can be measured in different ways, such as scratch hardness, indentation hardness, and rebound hardness. Hardness is dependent on ductility, elastic stiffness, plasticity, strain, strength, toughness, viscoelasticity, and viscosity. Common examples of hard matter are ceramics, concrete, certain metals, and superhard materials, which can be contrasted with soft matter.
In materials science, intergranular corrosion (IGC), also known as intergranular attack (IGA), is a form of corrosion where the boundaries of crystallites of the material are more susceptible to corrosion than their insides. (Cf. transgranular corrosion.)
In materials science, lamellar structures or microstructures are composed of fine, alternating layers of different materials in the form of lamellae. They are often observed in cases where a phase transition front moves quickly, leaving behind two solid products, as in rapid cooling of eutectic (such as solder) or eutectoid (such as pearlite) systems. Such conditions force phases of different composition to form but allow little time for diffusion to produce those phases' equilibrium compositions. Fine lamellae solve this problem by shortening the diffusion distance between phases, but their high surface energy makes them unstable and prone to break up when annealing allows diffusion to progress.
A deeper eutectic or more rapid cooling will result in finer lamellae; as the size of an individual lamella approaches zero, the system will instead retain its high-temperature structure. Two common cases of this include cooling a liquid to form an amorphous solid, and cooling eutectoid austenite to form martensite. In biology, normal adult bones possess a lamellar structure which may be disrupted by some diseases.
In materials science, liquefaction is a process that generates a liquid from a solid or a gas or that generates a non-liquid phase which behaves in accordance with fluid dynamics. It occurs both naturally and artificially. As an example of the latter, a "major commercial application of liquefaction is the liquefaction of air to allow separation of the constituents, such as oxygen, nitrogen, and the noble gases." Another is the conversion of solid coal into a liquid form usable as a substitute for liquid fuels.
In materials science, misorientation is the difference in crystallographic orientation between two crystallites in a polycrystalline material. In crystalline materials, the orientation of a crystallite is defined by a transformation from a sample reference frame (i.e. defined by the direction of a rolling or extrusion process and two orthogonal directions) to the local reference frame of the crystalline lattice, as defined by the basis of the unit cell. In the same way, misorientation is the transformation necessary to move from one local crystal frame to some other crystal frame. That is, it is the distance in orientation space between two distinct orientations.
If the orientations are specified in terms of matrices of direction cosines $g_A$ and $g_B$, then the misorientation operator $\Delta g_{AB}$ going from A to B can be defined as follows:

$$g_B = \Delta g_{AB}\, g_A \qquad \Delta g_{AB} = g_B\, g_A^{-1}$$

where the term $g_A^{-1}$ is the reverse operation of $g_A$, that is, the transformation from crystal frame A back to the sample frame. This provides an alternate description of misorientation as the successive operation of transforming from the first crystal frame (A) back to the sample frame and subsequently to the new crystal frame (B). Various methods can be used to represent this transformation operation, such as: Euler angles, Rodrigues vectors, axis/angle (where the axis is specified as a crystallographic direction), or unit quaternions.
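The definition Δg_AB = g_B g_A⁻¹ can be evaluated directly with rotation matrices, for which the inverse is simply the transpose. A sketch using NumPy, with a hypothetical 30° rotation about a shared axis as the second orientation:

```python
import numpy as np

def misorientation(gA, gB):
    """Δg_AB = gB · gA⁻¹; for orthonormal rotation matrices the
    inverse equals the transpose."""
    return gB @ gA.T

def misorientation_angle(dg):
    """Rotation angle of Δg from its trace: cosθ = (tr(Δg) − 1)/2."""
    c = (np.trace(dg) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# Two orientations: the identity, and a 30° rotation about the z axis.
t = np.radians(30.0)
gA = np.eye(3)
gB = np.array([[ np.cos(t), np.sin(t), 0.0],
               [-np.sin(t), np.cos(t), 0.0],
               [ 0.0,       0.0,       1.0]])
angle = misorientation_angle(misorientation(gA, gB))   # ≈ 30°
```

A full treatment would also reduce the angle by the crystal's symmetry operators to obtain the disorientation; that step is omitted here.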
In materials science, paracrystalline materials are defined as having short- and medium-range ordering in their lattice (similar to the liquid crystal phases) but lacking crystal-like long-range ordering at least in one direction.
In materials science, polymorphism describes the existence of a solid material in more than one form or crystal structure. Polymorphism is a form of isomerism. Any crystalline material can exhibit the phenomenon. Allotropy refers to polymorphism for chemical elements.
Polymorphism is of practical relevance to pharmaceuticals, agrochemicals, pigments, dyestuffs, foods, and explosives. According to IUPAC, a polymorphic transition is "A reversible transition of a solid crystalline phase at a certain temperature and pressure (the inversion point) to another phase of the same chemical composition with a different crystal structure." According to McCrone, polymorphs are "different in crystal structure but identical in the liquid or vapor states." Materials with two polymorphs are called dimorphic, with three polymorphs, trimorphic, etc. In some cases, polymorphism was "discovered" on a computer by crystal structure prediction first, before chemists actually synthesized the crystal in the lab.
In materials science, quenching is the rapid cooling of a workpiece in water, oil, polymer, air, or other fluids to obtain certain material properties. A type of heat treating, quenching prevents undesired low-temperature processes, such as phase transformations, from occurring. It does this by reducing the window of time during which these undesired reactions are both thermodynamically favorable, and kinetically accessible; for instance, quenching can reduce the crystal grain size of both metallic and plastic materials, increasing their hardness. In metallurgy, quenching is most commonly used to harden steel by inducing a martensite transformation, where the steel must be rapidly cooled through its eutectoid point, the temperature at which austenite becomes unstable.
In steel alloyed with metals such as nickel and manganese, the eutectoid temperature becomes much lower, but the kinetic barriers to phase transformation remain the same. This allows quenching to start at a lower temperature, making the process much easier. High-speed steel also has added tungsten, which serves to raise kinetic barriers, which among other effects gives material properties (hardness and abrasion resistance) as though the workpiece had been cooled more rapidly than it really has. Even cooling such alloys slowly in air has most of the desired effects of quenching; high-speed steel weakens much less from heat cycling due to high-speed cutting. Extremely rapid cooling can prevent the formation of all crystal structures, resulting in amorphous metal or "metallic glass".
In materials science, radiation-absorbent material (RAM) is a material which has been specially designed and shaped to absorb incident RF radiation (also known as non-ionising radiation), as effectively as possible, from as many incident directions as possible. The more effective the RAM, the lower the resulting level of reflected RF radiation. Many measurements in electromagnetic compatibility (EMC) and antenna radiation patterns require that spurious signals arising from the test setup, including reflections, are negligible to avoid the risk of causing measurement errors and ambiguities.
In materials science, recrystallization is a process by which deformed grains are replaced by a new set of defect-free grains that nucleate and grow until the original grains have been entirely consumed. Recrystallization is usually accompanied by a reduction in the strength and hardness of a material and a simultaneous increase in the ductility. Thus, the process may be introduced as a deliberate step in metals processing or may be an undesirable byproduct of another processing step. The most important industrial uses are softening of metals previously hardened or rendered brittle by cold work, and control of the grain structure in the final product. Recrystallization temperature is typically 0.3–0.4 times the melting point for pure metals and 0.5 times for alloys.
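The rule of thumb quoted above can be wrapped in a small helper; the 0.35 multiplier for pure metals is an assumption chosen from the stated 0.3–0.4 range:

```python
def recrystallization_temperature(T_melt_K, alloy=False):
    """Rule-of-thumb estimate: T_rx ≈ 0.3–0.4·T_m for pure metals
    (0.35 used here) and ≈ 0.5·T_m for alloys. Temperatures in
    kelvin, since the rule uses absolute temperature."""
    return (0.5 if alloy else 0.35) * T_melt_K

# Pure copper, T_m ≈ 1358 K:
t_rx = recrystallization_temperature(1358.0)   # ≈ 475 K
```

The estimate is only a starting point; actual recrystallization temperatures also depend on prior cold work, purity, and annealing time.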
In materials science, reinforcement is a constituent of a composite material which increases the composite's stiffness and tensile strength.
In materials science, segregation is the enrichment of atoms, ions, or molecules at a microscopic region in a materials system. While the terms segregation and adsorption are essentially synonymous, in practice, segregation is often used to describe the partitioning of molecular constituents to defects from solid solutions, whereas adsorption is generally used to describe such partitioning from liquids and gases to surfaces. The molecular-level segregation discussed in this article is distinct from other types of materials phenomena that are often called segregation, such as particle segregation in granular materials, and phase separation or precipitation, wherein molecules are segregated into macroscopic regions of different compositions. Segregation has many practical consequences, ranging from the formation of soap bubbles, to microstructural engineering in materials science, to the stabilization of colloidal suspensions.
Segregation can occur in various materials classes. In polycrystalline solids, segregation occurs at defects, such as dislocations, grain boundaries, stacking faults, or the interface between two phases. In liquid solutions, chemical gradients exist near second phases and surfaces due to combinations of chemical and electrical effects. Segregation which occurs in well-equilibrated systems due to the intrinsic chemical properties of the system is termed equilibrium segregation. Segregation that occurs due to the processing history of the sample (but that would disappear at long times) is termed non-equilibrium segregation.
In materials science, shear modulus or modulus of rigidity, denoted by G, or sometimes S or μ, is a measure of the elastic shear stiffness of a material and is defined as the ratio of shear stress to shear strain:

$$G \;\stackrel{\mathrm{def}}{=}\; \frac{\tau_{xy}}{\gamma_{xy}} = \frac{F/A}{\Delta x/l} = \frac{Fl}{A\,\Delta x}$$

where $\tau_{xy} = F/A$ is the shear stress, $F$ is the force which acts, $A$ is the area on which the force acts, and $\gamma_{xy}$ is the shear strain, defined in engineering as $\gamma_{xy} = \Delta x/l = \tan\theta$ and elsewhere as $\gamma_{xy} = \theta$; $\Delta x$ is the transverse displacement and $l$ is the initial length of the area. The derived SI unit of shear modulus is the pascal (Pa), although it is usually expressed in gigapascals (GPa) or in thousand pounds per square inch (ksi). Its dimensional form is M¹L⁻¹T⁻², replacing force by mass times acceleration.
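The definition G = Fl/(AΔx) translates directly into code. A sketch with illustrative numbers — a 10 mm cube sheared by 1 µm under a 260 N tangential force, which yields a modulus in the vicinity of aluminium's:

```python
def shear_modulus(force, area, displacement, length):
    """G = (F/A)/(Δx/l) = F·l/(A·Δx). SI units throughout
    (N, m², m, m), so the result is in pascals."""
    return (force * length) / (area * displacement)

# 10 mm cube: A = 1 cm² face, l = 10 mm, sheared by Δx = 1 µm at F = 260 N
G = shear_modulus(force=260.0, area=1e-4, displacement=1e-6, length=1e-2)
# G ≈ 2.6e10 Pa = 26 GPa
```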
In materials science, slip is the large displacement of one part of a crystal relative to another part along crystallographic planes and directions. Slip occurs by the passage of dislocations on close-packed planes, which are planes containing the greatest number of atoms per area, and in close-packed directions (most atoms per length). Close-packed planes are known as slip or glide planes.
A slip system describes the set of symmetrically identical slip planes and associated family of slip directions for which dislocation motion can easily occur and lead to plastic deformation. The magnitude and direction of slip are represented by the Burgers vector, b. An external force makes parts of the crystal lattice glide along each other, changing the material's geometry. A critical resolved shear stress is required to initiate a slip.
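The critical resolved shear stress criterion is usually evaluated with Schmid's law, τ = σ·cosφ·cosλ (a standard relation, though not named in the text). A minimal sketch:

```python
import math

def resolved_shear_stress(sigma, phi_deg, lambda_deg):
    """Schmid's law: τ = σ·cosφ·cosλ, where φ is the angle between
    the load axis and the slip-plane normal, and λ the angle between
    the load axis and the slip direction."""
    return (sigma
            * math.cos(math.radians(phi_deg))
            * math.cos(math.radians(lambda_deg)))

# The maximum Schmid factor (0.5) occurs at φ = λ = 45°:
tau = resolved_shear_stress(100.0, 45.0, 45.0)   # ≈ 50 MPa
```

Slip initiates on a given system once τ reaches that system's critical resolved shear stress, so the most favourably oriented system (highest Schmid factor) yields first.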
In materials science, stress relaxation is the observed decrease in stress in response to strain generated in the structure. It occurs primarily because keeping the structure in a strained condition for some finite interval of time causes some amount of plastic strain. It should not be confused with creep, which is a constant state of stress with an increasing amount of strain. Since relaxation relieves the state of stress, it also has the effect of relieving the equipment reactions.
Thus, relaxation has the same effect as cold springing, except it occurs over a longer period of time. The amount of relaxation which takes place is a function of time, temperature and stress level, thus the actual effect it has on the system is not precisely known, but can be bounded. Stress relaxation describes how polymers relieve stress under constant strain.
Because they are viscoelastic, polymers behave in a nonlinear, non-Hookean fashion. This nonlinearity is described by both stress relaxation and a phenomenon known as creep, which describes how polymers strain under constant stress.
Experimentally, stress relaxation is determined by step strain experiments, i.e. by applying a sudden one-time strain and measuring the build-up and subsequent relaxation of stress in the material (see figure), in either extensional or shear rheology. Viscoelastic materials have the properties of both viscous and elastic materials and can be modeled by combining elements that represent these characteristics. One viscoelastic model, called the Maxwell model, predicts behavior akin to a spring (elastic element) in series with a dashpot (viscous element), while the Voigt model places these elements in parallel.
Although the Maxwell model is good at predicting stress relaxation, it is fairly poor at predicting creep. On the other hand, the Voigt model is good at predicting creep but rather poor at predicting stress relaxation (see viscoelasticity). The extracellular matrix and most tissues are stress relaxing, and the kinetics of stress relaxation have been recognized as an important mechanical cue that affects the migration, proliferation, and differentiation of embedded cells. Stress relaxation calculations can differ for different materials. To generalize, Obukhov uses power dependencies:

$$\sigma(t) = \frac{\sigma_0}{1 - \cdots}$$

where $\sigma_0$ is the maximum stress at the time the loading was removed ($t^*$), and n is a material parameter. Vegener et al. use a power series to describe stress relaxation in polyamides:

$$\sigma(t) = \sum_{m,n} A_{mn}^{m} (\epsilon'_0)^n$$

To model stress relaxation in glass materials, Dowvalter uses the following:

$$\sigma(t) = \frac{1}{b} \cdot \log \frac{10^{\alpha}(t - t_n) + 1}{10^{\alpha}(t - t_n) - 1}$$

where $\alpha$ is a material constant and $b$ and $t_n$ depend on processing conditions. The following non-material parameters all affect stress relaxation in polymers: magnitude of initial loading, speed of loading, temperature (isothermal vs non-isothermal conditions), loading medium, friction and wear, and long-term storage.
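For the Maxwell model discussed above, a step strain produces the textbook single-exponential decay σ(t) = σ₀·e^(−t/τ_r) with relaxation time τ_r = η/E. A sketch with illustrative parameter values (not material data from the text):

```python
import math

def maxwell_stress(sigma0, t, E, eta):
    """Stress relaxation of a Maxwell element under a step strain:
    σ(t) = σ0·exp(−t/τ_r), with relaxation time τ_r = η/E
    (dashpot viscosity over spring modulus)."""
    tau_r = eta / E
    return sigma0 * math.exp(-t / tau_r)

# Illustrative values: E = 1 GPa, η = 1e12 Pa·s → τ_r = 1000 s.
s = maxwell_stress(sigma0=10e6, t=1000.0, E=1e9, eta=1e12)   # σ0/e
```

After one relaxation time the stress has fallen to 1/e of its initial value, which is why τ_r is the natural timescale for comparing materials.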
In materials science, superplasticity is a state in which solid crystalline material is deformed well beyond its usual breaking point, usually to over about 400% elongation during tensile deformation. Such a state is usually achieved at high homologous temperature. Examples of superplastic materials are some fine-grained metals and ceramics. Other, non-crystalline (amorphous) materials such as silica glass ("molten glass") and polymers also deform similarly, but are not called superplastic because they are not crystalline; rather, their deformation is often described as Newtonian fluid flow.
Superplastically deformed material gets thinner in a very uniform manner, rather than forming a "neck" (a local narrowing) that leads to fracture. Also, the formation of microvoids, which is another cause of early fracture, is inhibited. Superplasticity must not be confused with superelasticity.
In materials science, the Burgers vector, named after Dutch physicist Jan Burgers, is a vector, often denoted as b, that represents the magnitude and direction of the lattice distortion resulting from a dislocation in a crystal lattice. The vector's magnitude and direction is best understood when the dislocation-bearing crystal structure is first visualized without the dislocation, that is, the perfect crystal structure. In this perfect crystal structure, a rectangle whose lengths and widths are integer multiples of a (the unit cell edge length) is drawn encompassing the site of the original dislocation's origin. Once this encompassing rectangle is drawn, the dislocation can be introduced.
This dislocation will have the effect of deforming, not only the perfect crystal structure, but the rectangle as well. The said rectangle could have one of its sides disjoined from the perpendicular side, severing the connection of the length and width line segments of the rectangle at one of the rectangle's corners, and displacing each line segment from each other. What was once a rectangle before the dislocation was introduced is now an open geometric figure, whose opening defines the direction and magnitude of the Burgers vector.
Specifically, the breadth of the opening defines the magnitude of the Burgers vector, and, when a set of fixed coordinates is introduced, an angle between the termini of the dislocated rectangle's length line segment and width line segment may be specified. When calculating the Burgers vector practically, one may draw a rectangular counterclockwise circuit (Burgers circuit) from a starting point to enclose the dislocation (see the picture above). The Burgers vector will be the vector to complete the circuit, i.e., from the end to the start of the circuit. The direction of the vector depends on the plane of dislocation, which is usually on one of the closest-packed crystallographic planes.
In most metallic materials, the magnitude of the Burgers vector for a dislocation is equal to the interatomic spacing of the material, since a single dislocation will offset the crystal lattice by one close-packed crystallographic spacing unit. In edge dislocations, the Burgers vector and dislocation line are perpendicular to one another; in screw dislocations, they are parallel. The Burgers vector is significant in determining the yield strength of a material by affecting solute hardening, precipitation hardening and work hardening. The Burgers vector also plays an important role in determining the direction of the dislocation line.
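For the common perfect dislocations, the Burgers vector magnitude follows from the lattice parameter: a/2⟨110⟩ in FCC gives |b| = a/√2, and a/2⟨111⟩ in BCC gives |b| = a·√3/2. A small sketch (the copper lattice parameter below is an approximate literature value):

```python
import math

def burgers_magnitude(a, structure):
    """|b| for the usual perfect dislocations: a/2<110> in FCC gives
    a/sqrt(2); a/2<111> in BCC gives a*sqrt(3)/2. `a` is the lattice
    parameter (any length unit)."""
    if structure == "fcc":
        return a / math.sqrt(2)
    if structure == "bcc":
        return a * math.sqrt(3) / 2
    raise ValueError("expected 'fcc' or 'bcc'")

# Copper (FCC, a ≈ 0.3615 nm):
b_cu = burgers_magnitude(0.3615, "fcc")   # ≈ 0.256 nm
```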
In materials science, the Charpy impact test, also known as the Charpy V-notch test, is a standardized high strain rate test which determines the amount of energy absorbed by a material during fracture. Absorbed energy is a measure of the material's notch toughness. It is widely used in industry, since it is easy to prepare and conduct and results can be obtained quickly and cheaply.
A disadvantage is that some results are only comparative. The test was pivotal in understanding the fracture problems of ships during World War II. The test was developed around 1900 by S. B. Russell (1898, American) and Georges Charpy (1901, French). The test became known as the Charpy test in the early 1900s due to the technical contributions and standardization efforts by Charpy.
In materials science, the Zener–Hollomon parameter, typically denoted as Z, is used to relate changes in temperature or strain rate to the stress–strain behavior of a material. It has been most extensively applied to the forming of steels at increased temperature, when creep is active. It is given by

$$Z = \dot{\varepsilon} \exp(Q/RT)$$

where $\dot{\varepsilon}$ is the strain rate, Q is the activation energy, R is the gas constant, and T is the temperature. The Zener–Hollomon parameter is also known as the temperature-compensated strain rate, since the two are inversely proportional in the definition.
It is named after Clarence Zener and John Herbert Hollomon, Jr. who established the formula based on the stress-strain behavior in steel.
When plastically deforming a material, the flow stress depends heavily on both the strain rate and temperature. During forming processes, Z may help determine appropriate changes in strain rate or temperature when the other variable is altered, in order to keep material flowing properly. Z has also been applied to some metals over a large range of strain rates and temperatures, showing comparable microstructures at the end of processing as long as Z remained similar. This is because the relative activity of various deformation mechanisms is typically inversely proportional to temperature or strain rate, such that decreasing strain rate or increasing temperature will decrease Z and promote plastic deformation.
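The definition Z = ε̇·exp(Q/RT) is straightforward to evaluate; the activation energy below is an illustrative value of the right order for hot-worked steel, not a quoted constant:

```python
import math

R = 8.314  # gas constant, J/(mol·K)

def zener_hollomon(strain_rate, Q, T):
    """Z = strain_rate * exp(Q / (R*T)); Q in J/mol, T in kelvin.
    Z has the units of strain rate (1/s)."""
    return strain_rate * math.exp(Q / (R * T))

# Hot working at 1 s⁻¹, illustrative Q = 300 kJ/mol:
z_hot  = zener_hollomon(1.0, 300e3, 1273.0)   # at 1000 °C
z_cool = zener_hollomon(1.0, 300e3, 1173.0)   # at 900 °C → larger Z
```

Lowering the temperature at fixed strain rate raises Z, consistent with Z being a temperature-compensated strain rate.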
In materials science, the concept of the Cottrell atmosphere was introduced by A. H. Cottrell and B. A. Bilby in 1949 to explain how dislocations are pinned in some metals by boron, carbon, or nitrogen interstitials. Cottrell atmospheres occur in body-centered cubic (BCC) and face-centered cubic (FCC) materials, such as iron or nickel, with small impurity atoms, such as boron, carbon, or nitrogen. As these interstitial atoms distort the lattice slightly, there will be an associated residual stress field surrounding the interstitial. This stress field can be relaxed by the interstitial atom diffusing towards a dislocation, which contains a small gap at its core (as it is a more open structure), see Figure 1.
Once the atom has diffused into the dislocation core the atom will stay. Typically only one interstitial atom is required per lattice plane of the dislocation. The collection of solute atoms around the dislocation core due to this process is the Cottrell atmosphere.
In materials science, the sessile drop technique is a method used for the characterization of solid surface energies, and in some cases, aspects of liquid surface energies. The main premise of the method is that by placing a droplet of liquid with a known surface energy and contact angle, the surface energy of the solid substrate can be calculated. The liquid used for such experiments is referred to as the probe liquid, and the use of several different probe liquids is required.
In materials science, the sol–gel process is a method for producing solid materials from small molecules. The method is used for the fabrication of metal oxides, especially the oxides of silicon (Si) and titanium (Ti). The process involves conversion of monomers into a colloidal solution (sol) that acts as the precursor for an integrated network (or gel) of either discrete particles or network polymers. Typical precursors are metal alkoxides. The sol–gel process is used to produce ceramic nanoparticles.
In materials science, the term single-layer materials or 2D materials refers to crystalline solids consisting of a single layer of atoms. These materials are promising for some applications but remain the focus of research. Single-layer materials derived from single elements generally carry the -ene suffix in their names, e.g. graphene. Single-layer materials that are compounds of two or more elements have -ane or -ide suffixes.
2D materials can generally be categorized as either 2D allotropes of various elements or as compounds (consisting of two or more covalently bonding elements). It is predicted that there are hundreds of stable single-layer materials.
The atomic structure and calculated basic properties of these and many other potentially synthesisable single-layer materials can be found in computational databases. 2D materials can be produced using mainly two approaches: top-down exfoliation and bottom-up synthesis. The exfoliation methods include sonication, mechanical, hydrothermal, electrochemical, laser-assisted, and microwave-assisted exfoliation.
In materials science, the threshold displacement energy (Td) is the minimum kinetic energy that an atom in a solid needs to be permanently displaced from its site in the lattice to a defect position. It is also known as "displacement threshold energy" or just "displacement energy". In a crystal, a separate threshold displacement energy exists for each crystallographic direction.
One must therefore distinguish between the minimum (Td,min) and the average (Td,ave) threshold displacement energy over all lattice directions. In amorphous solids, an effective displacement energy can be defined to describe a corresponding average quantity of interest. Threshold displacement energies in typical solids are of the order of 10–50 eV.
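As a rough illustration, the distinction between Td,min and Td,ave can be sketched in a few lines of Python. The direction-dependent values below are hypothetical placeholders, not measured data for any particular crystal:

```python
# Hypothetical threshold displacement energies (eV) along a few
# low-index crystallographic directions of a cubic crystal.
td_by_direction = {
    (1, 0, 0): 20.0,
    (1, 1, 0): 30.0,
    (1, 1, 1): 45.0,
}

# Td,min: the easiest direction to displace an atom along.
td_min = min(td_by_direction.values())

# Td,ave: the average over the sampled lattice directions.
td_ave = sum(td_by_direction.values()) / len(td_by_direction)

print(f"Td,min = {td_min:.1f} eV")
print(f"Td,ave = {td_ave:.1f} eV")
```

A real calculation would sample many directions (e.g. via molecular dynamics simulations of recoil events) rather than three hand-picked ones, but the min/average bookkeeping is the same.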
In materials science, the yield strength anomaly refers to materials wherein the yield strength (i.e., the stress necessary to initiate plastic yielding) increases with temperature. For the majority of materials, the yield strength decreases with increasing temperature. In metals, this decrease is due to the thermal activation of dislocation motion, resulting in easier plastic deformation at higher temperatures. In some cases, a yield strength anomaly refers instead to a decrease in the ductility of a material with increasing temperature, which is likewise opposite to the trend in the majority of materials. Ductility anomalies can be clearer to observe, since an anomalous effect on yield strength can be obscured by the typical decrease of yield strength with temperature.
In concert with yield strength or ductility anomalies, some materials exhibit extrema in other temperature-dependent properties, such as a minimum in ultrasonic damping or a maximum in electrical conductivity. The yield strength anomaly in β-brass was one of the earliest discovered instances of such a phenomenon, and several other ordered intermetallic alloys demonstrate the effect. Precipitation-hardened superalloys exhibit a yield strength anomaly over a considerable temperature range. For these materials, the yield strength shows little variation between room temperature and several hundred degrees Celsius.
Eventually, a maximum yield strength is reached. At even higher temperatures, the yield strength decreases and eventually drops to zero at the melting temperature, where the solid transforms into a liquid. For ordered intermetallics, the temperature of the yield strength peak is roughly 50% of the absolute melting temperature.
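The temperature dependence described above can be mimicked with a purely illustrative toy curve. The functional form and the values of sigma0 and Tm below are assumptions, not a physical model, and the sketch ignores the near-constant low-temperature plateau:

```python
def yield_strength(T, Tm=1600.0, sigma0=300.0):
    """Toy yield strength (MPa) vs. absolute temperature T (K):
    rises with T, peaks at T = 0.5 * Tm, and falls to zero at the
    (hypothetical) melting point Tm."""
    if T >= Tm:
        return 0.0  # above the melting point: liquid, no yield strength
    x = T / Tm  # homologous temperature
    return sigma0 * 4.0 * x * (1.0 - x)  # parabola with maximum at x = 0.5

# Numerically locate the peak: it sits at roughly half the
# absolute melting temperature, as observed for ordered intermetallics.
peak_T = max(range(0, 1600, 10), key=yield_strength)
print(peak_T)  # 800, i.e. 0.5 * Tm
```

Any smooth function with a single interior maximum would serve equally well here; the point is only the qualitative shape of the anomaly, with the peak near 0.5 of the absolute melting temperature.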
In materials science, toughening refers to the process of making a material more resistant to the propagation of cracks. When a crack propagates, the associated irreversible work differs between materials classes, so the most effective toughening mechanisms differ as well. Crack-tip plasticity is important in the toughening of metals and long-chain polymers. Ceramics have limited crack-tip plasticity and rely primarily on other toughening mechanisms.