A design classic is an industrially manufactured object with timeless aesthetic value. It serves as a standard of its kind and remains up to date regardless of the year of its design. Whether a particular object is a design classic is often debatable, and the term is sometimes abused, but there exists a body of acknowledged classics of product design from the 19th and 20th centuries. Becoming a design classic requires time; the lasting impact a design has had on society, together with its influence on later designs, plays a large role in determining whether an object achieves that status. Design classics are therefore often strikingly simple, reduced to the essence, and are described with words like iconic, neat, valuable, or meaningful. == References ==
Wikipedia/Design_classic
Product design is the process of creating new products for businesses to sell to their customers. It involves the generation and development of ideas through a systematic process that leads to the creation of innovative products. Thus, it is a major aspect of new product development. Product design process: The product design process is a set of strategic and tactical activities, from idea generation to commercialization, used to create a product design. In a systematic approach, product designers conceptualize and evaluate ideas, turning them into tangible inventions and products. The product designer's role is to combine art, science, and technology to create new products that people can use. Their evolving role has been facilitated by digital tools that now allow designers to communicate, visualize, analyze, create 3D models, and actually produce tangible ideas in a way that would have taken greater human resources in the past. Product design is sometimes confused with (and certainly overlaps with) industrial design, and has recently become a broad term inclusive of service, software, and physical product design. Industrial design is concerned with bringing artistic form and usability, usually associated with craft design and ergonomics, together in order to mass-produce goods. Other aspects of product design and industrial design include engineering design, particularly when matters of functionality or utility (e.g. problem-solving) are at issue, though such boundaries are not always clear. == Product design process == There are various product design processes, and many focus on different aspects. One example formulation/model of the process is described by Don Koberg and Jim Bagnall in "The Seven Universal Stages of Creative Problem-Solving." The process is usually completed by a group of people with different skills and training—e.g. industrial designers, field experts (prospective users), and engineers (for engineering design aspects)—depending upon the nature and type of the product involved. The process often involves figuring out what is required, brainstorming possible ideas, creating mock prototypes, and then generating the product. However, that is not the end: product designers still need to execute the idea, making it into an actual product and evaluating its success (seeing if any improvements are necessary). The product design process has experienced huge leaps in evolution over the last few years with the rise and adoption of 3D printing. New consumer-friendly 3D printers can produce three-dimensional objects, building them upwards with a plastic-like substance, as opposed to traditional printers that spread ink across a page. The product design process, as expressed by Koberg and Bagnall, typically involves three main aspects: Analysis Concept Synthesis Depending on the kind of product being designed, the latter two sections are most often revisited (e.g. depending on how often the design needs revision, to improve it or to better fit the criteria). This is a continuous loop, where feedback is the main component. Koberg and Bagnall offer more specifics on the process: in their model, "analysis" consists of two stages, "concept" is only one stage, and "synthesis" encompasses the other four. (These terms notably vary in usage across design frameworks; here, they are used in the way Koberg and Bagnall use them.) === Analysis === Accept situation: Here, the designers decide on committing to the project and finding a solution to the problem.
They pool their resources into figuring out how to solve the task most efficiently. Analyze: In this stage, everyone in the team begins research. They gather general and specific materials which will help to figure out how their problem might be solved. This can range from statistics and questionnaires to published articles, among many other sources. === Concept === Define: This is where the key issue of the matter is defined. The conditions of the problem become objectives, and restraints on the situation become the parameters within which the new design must be constructed. === Synthesis === Ideate: The designers here brainstorm different ideas and solutions for their design problem. The ideal brainstorming session does not involve any bias or judgment, but instead builds on original ideas. Select: By now, the designers have narrowed their ideas down to a select few that are most likely to succeed, and from there they can outline their plan to make the product. Implement: This is where the prototypes are built, the plan outlined in the previous step is realized, and the product starts to become an actual object. Evaluate: In the last stage, the product is tested, and from there, improvements are made. Although this is the last stage, it does not mean that the process is over. The finished prototype may not work as well as hoped, so new ideas need to be brainstormed. == Double diamond framework == The double diamond framework is a widely used approach for product discovery, which emphasizes a structured method for problem-solving and solution development, encouraging teams to diverge (broad exploration) before converging (focused decision-making). The framework is divided into two primary stages: diverging and converging, each with its own steps and considerations. Diverging stage: During the diverging stage, teams explore the problem space broadly without predefined solutions. This phase involves engaging with core personas, conducting open-ended conversations, and gathering unfiltered input from customer-facing teams. The goal is to identify and document various problem areas, allowing themes and key issues to emerge naturally. Converging stage: As insights emerge, teams transition to the converging stage, where they narrow down problem areas and prioritize solutions. This phase involves defining the problem, understanding major pain points, and advocating for solutions within the organization. Effective convergence requires clear articulation of the problem's significance and consideration of business strategies and feasibility. Iterative process: The double diamond framework is iterative, allowing teams to revisit stages as needed based on feedback and outcomes. Moving back to earlier stages may be necessary if solutions fail to address underlying issues or if they elicit negative user responses. Success lies in the team's ability to adapt and refine their approach over time. == Creative visualization == In design, creative visualization refers to the process by which computer-generated imagery, digital animation, three-dimensional models, and two-dimensional representations, such as architectural blueprints, engineering drawings, and sewing patterns, are created and used in order to visualize a potential product prior to production. Such products include prototypes for vehicles in automotive engineering, apparel in the fashion industry, and buildings in architectural design.
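The iterative structure of the Koberg and Bagnall process described above (analysis feeding a loop in which concept and synthesis are repeated until evaluation feedback is satisfied) can be summarized in code. The following is a minimal illustrative sketch in Python; the stage names follow the article, but the function names, the Design placeholder, and the simple pass/fail check are invented for illustration and do not come from Koberg and Bagnall.

```python
# Minimal illustrative sketch (not an actual product-design API) of the
# iterative Koberg/Bagnall-style loop described above: analysis once,
# then concept and synthesis revisited until evaluation feedback passes.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Design:
    ideas: list = field(default_factory=list)   # candidate solutions
    prototype: Optional[str] = None              # current mock-up
    meets_criteria: bool = False                 # outcome of evaluation

def analysis(problem: str) -> str:
    # "Accept situation" and "Analyze": commit to the project and research it.
    return f"research notes for: {problem}"

def concept(research: str) -> str:
    # "Define": turn problem conditions into objectives and constraints.
    return f"design brief derived from ({research})"

def synthesis(brief: str, design: Design) -> Design:
    # "Ideate", "Select", "Implement": brainstorm, narrow down, prototype.
    design.ideas.append(f"idea based on {brief}")
    design.prototype = f"prototype of {design.ideas[-1]}"
    return design

def evaluate(design: Design, iteration: int) -> bool:
    # "Evaluate": test the prototype; here we simply pretend it passes
    # after a couple of feedback rounds.
    return design.prototype is not None and iteration >= 2

def design_process(problem: str, max_iterations: int = 5) -> Design:
    research = analysis(problem)            # analysis is done once
    design = Design()
    for i in range(max_iterations):         # concept and synthesis are revisited
        brief = concept(research)
        design = synthesis(brief, design)
        if evaluate(design, i):             # feedback closes the loop
            design.meets_criteria = True
            break
    return design

if __name__ == "__main__":
    print(design_process("a more ergonomic desk lamp"))
```

The only point of the sketch is the control flow: analysis happens once, while concept, synthesis, and evaluation form the continuous feedback loop that the process descriptions above emphasize.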
== Demand-pull innovation and invention-push innovation == Most product designs fall under one of two categories: demand-pull innovation or invention-push innovation. Demand-pull happens when there is an opportunity in the market to be explored by the design of a product. This product design attempts to solve a design problem. The design solution may be the development of a new product or the further development of a product that is already on the market, such as developing an existing invention for another purpose. Invention-push innovation happens when there is an advancement in intelligence. This can occur through research, or it can occur when the product designer comes up with a new product design idea. == Product design expression == Design expression comes from the combined effect of all elements in a product. Color tone, shape, and size should direct a person's thoughts towards buying the product. Therefore, it is in the product designer's best interest to consider the audiences who are most likely to be the product's end consumers. Keeping in mind, during the design process, how consumers will perceive the product will contribute to the product's success in the market. However, even within a specific audience, it is challenging to cater to each possible personality within that group. One solution to that is to create a product that, in its designed appearance and function, expresses a personality or tells a story. Products that carry such attributes are more likely to give off a stronger expression that will attract more consumers. It is also important to note that design expression does not only concern the appearance of a product, but also its function. For example, just as a person's appearance and behavior both shape the first impression they make on us, a product's visual design and its functionality must both be considered, since together they shape how users perceive it. People usually do not appreciate a rude person even if they are good looking. Similarly, a product can have an attractive appearance, but if its function does not follow through it will most likely lose consumer interest. In this sense, designers are like communicators: they use the language of different elements in the product to express something. == Trends in product design == Product designers must consider every detail: how people use and misuse objects, potential flaws in products, errors in the design process, and the ideal ways people wish they could interact with those objects. Many new designs will fail and many won't even make it to market. Some designs eventually become obsolete. The design process itself can be quite frustrating, usually taking five or six tries to get the product design right. A product that fails in the marketplace the first time may be re-introduced to the market two more times. If it continues to fail, the product is then considered to be dead because the market believes it to be a failure. Most new products fail, even if there's a great idea behind them. All types of product design are clearly linked to the economic health of manufacturing sectors. Innovation provides much of the competitive impetus for the development of new products, with new technology often requiring a new design interpretation. It only takes one manufacturer to create a new product paradigm to force the rest of the industry to catch up—fueling further innovation.
Products designed to benefit people of all ages and abilities—without penalty to any group—accommodate our swelling aging population by extending independence and supporting the changing physical and sensory needs we all encounter as we grow older. == See also == Axiomatic product development lifecycle (APDL) Industrial design Sustainable design Transgenerational design Virtual product development Universal design Inclusive design == References ==
Wikipedia/Product_design
Usage-centered design is an approach to user interface design based on a focus on user intentions and usage patterns. It analyzes users in terms of the roles they play in relation to systems and employs abstract (essential) use cases for task analysis. It derives visual and interaction design from abstract prototypes based on the understanding of user roles and task cases. Usage-centered design was introduced by Larry Constantine and Lucy Lockwood. The primary reference is their book Software for Use (1999). == Usage-centered design methods == Usage-centered design is largely based on formal, abstract models, such as models of the interaction between user roles, UML workflow models, and task case and role profiles. Usage-centered design proponents argue for abstract modelling, while many designers use realistic personas, scenarios, and high-fidelity prototypes. The techniques have been applied with particular success in complex software projects, some of which have been reported in case studies. == Usage-centered design and activity-centered design approach == Usage-centered design shares some common ideas with activity-centered design. It is concerned more with the activities of users than with the users per se. Constantine (2006) presents an integrated framework in which the models of usage-centered design are enriched with concepts from activity theory. == References == === Citations === === Bibliography === Constantine L. Activity Modeling: Toward a Pragmatic Integration of Activity Theory with Usage-Centered Design, 2006 Constantine L., and Lockwood, L. "Structure and Style in Use Cases for User Interfaces." In M. van Harmelan, Ed., Object Modeling and User Interface Design. Boston: Addison-Wesley, 2001. Constantine L., and Lockwood, L. Software for Use: A Practical Guide to the Essential Models and Methods of Usage-Centered Design. Reading, MA: Addison-Wesley, 1999. (Russian translation 2004, Chinese translation 2004, Japanese translation 2005.) Constantine, L. "Usage-Centered Software Engineering: New Models, Methods, and Metrics." In Purvis, M. (ed.) Software Engineering: Education & Practice. Los Alamitos, CA: IEEE Computer Society Press, 1996. Constantine, L. "Essential Modeling: Use Cases for User Interfaces." ACM Interactions, 2 (2): 34-46, April 1995. Strope, J. (2003) "Designing for Breakthroughs in User Performance." In L. Constantine, ed., Performance by Design: Proceedings of forUSE 2003, the Second International Conference on Usage-Centered Design. Rowley, MA: Ampersand Press. Windl, H. (2002) "Designing a Winner: Creating STEP 7 lite with Usage-Centered Design." In L. Constantine, ed., forUSE 2002: Proceedings of the First International Conference on Usage-Centered Design. Rowley, MA: Ampersand Press. == Further reading == Usage-centered design FAQ
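To illustrate the kind of abstract models the approach works with, the following is a minimal, hypothetical Python sketch of a user role, an essential task case, and an abstract prototype derived from them. The field names and the ATM-style example are simplified illustrations and are not Constantine and Lockwood's actual notation.

```python
# Hypothetical sketch of usage-centered design's core abstractions:
# user roles, abstract (essential) task cases, and an abstract prototype
# derived from them. Names, fields, and the example are illustrative only.

from dataclasses import dataclass
from typing import List

@dataclass
class UserRole:
    name: str
    relationship_to_system: str        # how the role relates to the system

@dataclass
class TaskCase:
    # An essential use case: user intentions paired with system
    # responsibilities, with no reference to concrete UI widgets.
    name: str
    user_intentions: List[str]
    system_responsibilities: List[str]

@dataclass
class AbstractPrototype:
    # Abstract contents and tools the interface must offer to support
    # the task cases, still independent of visual design.
    supports: List[TaskCase]
    contents: List[str]

# Illustrative example: a self-service "patron" role and a cash-withdrawal
# task case for an ATM-like system.
patron = UserRole("patron", "occasional, self-service use")
withdraw = TaskCase(
    name="getting cash",
    user_intentions=["identify self", "choose amount"],
    system_responsibilities=["verify identity", "dispense cash", "record transaction"],
)
prototype = AbstractPrototype(
    supports=[withdraw],
    contents=["identification action", "amount selector", "cash delivery point"],
)
print(patron, prototype)
```

The point of the sketch is that none of these models mentions buttons, screens, or other concrete interface elements; visual and interaction design would be derived from such abstractions afterwards, as described above.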
Wikipedia/Usage-centered_design
An architectural competition is a type of design competition, in which an entity that intends to build new work, or is just seeking ideas, invites architects to submit design proposals. The winning scheme is usually chosen by an independent panel of design professionals and stakeholders (such as government and local representatives, the leadership of a cultural institution, etc.). This process is often used to generate new ideas for building and/or landscape design, stimulate public debate, generate publicity for the project and the commissioning entity, and help emerging designers gain exposure (and potentially win commissions that might be out of reach to them otherwise). Architectural competitions are often, though not exclusively, used to award commissions for public buildings: In some countries, rules for tendering public building contracts stipulate some form of open architectural competition. Winning first prize in a competition does not guarantee that the project will be realized. The commissioning body often has the right to veto the winning design, and both requirements and finances may change, thwarting the original intention. (Many competitions have been held and won before the financing was even in place.) The 2002 World Trade Center site design competition is an example of a highly publicized competition, in which only the basic elements of the winning design by Daniel Libeskind appeared in the finished project. == History == Architectural competitions have existed for more than 2,500 years. The design of the Acropolis, in Athens, resulted from an architectural competition in 448 B.C., as did several European cathedrals in the Middle Ages. During the Renaissance, many projects initiated by the papacy or other top religious bodies were decided through design competition. Examples are the Spanish Steps in Rome and, famously, the competition for the dome of the Florence Cathedral, won by Filippo Brunelleschi in 1419. Open competitions emerged in the late 18th century in countries including the United States, Great Britain, Ireland, France, and Sweden. In 19th-century England and Ireland, more than 2,500 competitions were held within five decades, with 362 in London alone. The Royal Institute of British Architects drafted its first set of rules in 1839 and its first formal regulations in 1872. German regulations had been introduced in 1867. In the same period, in the Netherlands, an association for the advancement of architecture (Maatschappij tot Bevordering van de Bouwkunst) started organizing conceptual competitions to stimulate creativity among architects. == Competition types == Various competition paradigms exist, most prominently the following types or combinations of them: Open vs. Invited (or Otherwise Limited) Competitions: Open Competitions: international, national, regional, or otherwise defined in scope, they typically have few or no restrictions on who may enter. Invited, Limited, Pre-Qualified, or otherwise Non-Open Competitions restrict who may participate (and, in many cases, also provide stipends or honorariums to participants) Project vs. Ideas Competitions: Project Competitions: seek schemes for specific building and/or landscape projects that the commissioning entities intend to realize Ideas Competitions: held for the purpose of generating new ideas (in some cases, particularly novel, provocative, or visionary ones) Single- vs.
Multi-stage Competitions Single-Stage Competitions: Multi-Stage Competitions (two stages or more), many of which invite only short-listed participants, a limited group of chosen semi-finalists, to continue to the next stage(s), for which they might receive a stipend or honorarium to help cover costs Anonymous vs. Non-Anonymous Competitions: Anonymous Competitions: judged or juried, for greater objectivity, with no knowledge of the names or identities of participating individuals and firms Non-Anonymous (or Cooperative) Competitions: Competing architects and firms are openly identified from the start (competitors might even be invited to present their projects in person to the jury to explain design strategies and provide for project-specific dialogue) Recurrent vs. One-Time Competitions: Seasonal or Annual Competitions: These recurrent competitions, including Europan, put out periodic calls for entries. They may, or may not, result in an actual constructed project, depending on the set-up. One-Time Competitions, held for a specific project Student Design Competitions == Rules and guidelines == The rules of each competition are defined by the organizer; they often, however, follow the guidelines provided by the International Union of Architects or the relevant national or regional architectural organization. Competition guidelines define roles, responsibilities, processes, and procedures within a competition and provide guidance on possible competition types, eligibility criteria, jury composition, participation conditions, payments, prizes, publication of results, and other aspects. In France and Germany, design competitions are compulsory for all public buildings exceeding a certain cost. == Major international architectural design competitions == The most significant architectural competitions are those which are internationally open, attract a large number of design submissions, and result in the winning design being built. == See also == Architectural design values Student competition Student design competition == References == == Further reading == Andersson E., Bloxham Zettersten, G. and Rönn, M., (eds) Architectural Competitions - Histories and Practice. Stockholm: The Royal Institute of Technology and Rio Kulturkooperativ, 2013. ISBN 978-91-85249-16-9 Chupin, Jean-Pierre, Carmela Cucuzzella and Bechara Helal (eds) Architecture Competitions and the Production of Culture, Quality and Knowledge: An International Inquiry, Montreal: Potential Architecture Books, 2015, ISBN 978-0-9921317-0-8 Collyer, G. Stanley, Competing Globally in Architecture Competitions, Wiley Academy, 2004, ISBN 0-470-86213-0 De Jong, Cees and Mattie, Erik: Architectural Competitions 1792-1949, Taschen, 1997, ISBN 3-8228-8599-1 == External links == Architectural Competition - Nordic Symposium Canadian Competitions Catalogue DesignCompetition.com, list of design competitions DCC Directory of Architecture and Design Competitions, Awards, Associations and Design Residencies, list of 1500 architecture and design competitions CABE: Making Competitions Work RIBA Competitions, the Royal Institute of British Architects' dedicated RIBA Competitions unit Wettbewerbe Aktuell, a German journal specialized in architectural competitions Handbook of Architectural Design Competitions, American Institute of Architects (AIA) The Competition Project, Inc., a world-wide resource on competitions since 1990 with the periodical publications COMPETITIONS (1991-2010) and COMPETITIONS Annual (2010-)
Wikipedia/Architectural_design_competition
A design competition or design contest is a competition in which an entity solicits design proposals from the public for a specified purpose. == Architecture == An architectural design competition solicits architects to submit design proposals for a building, bridge, or other structure. Such competitions may be open, receiving bids internationally, domestically, or regionally. The competition may occur in a single stage, or involve two stages, the first of which eliminates non-viable candidates. Famous early examples of design competitions were for the Acropolis of Athens in 448 BCE, and the dome of the Florence Cathedral in 1418. == Coins and stamps == Coin and stamp design contests solicit designs to appear on the face of stamps and usually the obverse of coins. In 1998, the Royal Canadian Mint held the Millennium Coin Design Contest, a competition for the design of 24 quarters, one for each month of 1999 and 2000. == Government procurement == Specific rules are included in the EU's Directive on Public Contracts for the conduct of a design contest organised as part of a procedure leading to the award of a public contract, or of a design contest with prizes or payments to participants. == Monuments and sculptures == The design of artistic objects and monuments is a common subject in design competitions. A well-known example is the Vietnam Veterans Memorial in Washington D.C. designed by Maya Lin. == Urban space == Urban and landscape projects may solicit design proposals in a competition. Among them are projects for urban parks, streetscapes, and rehabilitation of natural areas. == Student design competition == A student design competition is a competition that introduces students to real-world engineering practices and design. == See also == Public opinion == References ==
Wikipedia/Design_competition
Employee experience design (EED or EXD) is the application of experience design in order to intentionally design HR products, services, events, and organizational environments with a focus on the quality of the employee experience whilst providing relevant solutions for an organization. == Overview == EED can be described as the "intentional design of the active or passive use of HR products or services", and of employee experiences in general, that affect employees' emotional reactions and therefore their particular behaviors and loyalty. The underlying assumption is that the best (customer/employee) relationships are emotional in nature and achieved when companies succeed in not only satisfying certain needs (e.g. compensation), but also making interactions pleasurable. The goal is to yield better customer experience through increased employee engagement and employee empowerment. Following Krippendorff, EED focuses on creating meaningful and sense-making opportunities for engagement, and on addressing the aspirational and fundamental psychological needs of an employee, such as autonomy, competence, and relatedness. == Methods == Related to design strategy, EED is a participatory systems approach to workplace improvements that applies methods and principles of experience design, such as design thinking, co-creation, and empathic design, as well as new digital tools and technologies. It also uses tools and techniques that are typical of customer experience management and service design, e.g. employee experience journey mapping or touchpoint analysis. The primary design object is the employee experience, which – when successful – an employee finds unique, memorable, and sustainable over time, wants to repeat and build upon, and enthusiastically promotes via word of mouth. It is thought to encourage loyalty by creating an emotional connection through engaging, compelling, and consistent context. The categories for employee experience design context are products, processes, artefacts, content, space, and interactions. While employee experience design is beneficial for creating positive customer experiences, it is also beneficial for non-customer-facing roles. Many elements can make up a successful employee experience, including the office environment, rewards and benefits, flexible working, and casual dress policies. == Stakeholders == Human resource management, operating across hierarchies and departments, plays a central role in the design, distribution, and delivery of EED. As co-creation is an important design principle, it is a shared task and joint responsibility of leadership, HR professionals, and employees. Following the logic of the service-profit chain, the beneficiaries are also customers, as the recipients of improved service quality, and the organization itself through increased profits. == References ==
Wikipedia/Employee_experience_design
Environmentally sustainable design (also called environmentally conscious design, eco-design, etc.) is the philosophy of designing physical objects, the built environment, and services to comply with the principles of ecological sustainability, and is also aimed at improving the health and comfort of occupants in a building. Sustainable design seeks to reduce negative impacts on the environment and on the health and well-being of building occupants, thereby improving building performance. The basic objectives of sustainability are to reduce the consumption of non-renewable resources, minimize waste, and create healthy, productive environments. == Theory == Sustainable design intends to "eliminate negative environmental impact through skillful sensitive design". Manifestations of sustainable design require renewable resources and innovation to impact the environment minimally, and connect people with the natural environment. "Human beings don't have a pollution problem; they have a design problem. If humans were to devise products, tools, furniture, homes, factories, and cities more intelligently from the start, they wouldn't even need to think in terms of waste, contamination, or scarcity. Good design would allow for abundance, endless reuse, and pleasure." - The Upcycle by authors Michael Braungart and William McDonough, 2013. Design-related decisions are happening everywhere daily, impacting "sustainable development" or provisioning for the needs of future generations of life on earth. Sustainability and design are intimately linked. Quite simply, our future is designed. The term "design" is here used to refer to practices applied to the making of products, services, as well as business and innovation strategies — all of which inform sustainability. Sustainability can be thought of as the property of continuance; that is, what is sustainable can be continued. === Conceptual problems === ==== Diminishing returns ==== The principle that all directions of progress run out, ending with diminishing returns, is evident in the typical 'S' curve of the technology life cycle and in the useful life of any system as discussed in industrial ecology and life cycle assessment. Diminishing returns are the result of reaching natural limits. Common business management practice is to read diminishing returns in any direction of effort as an indication of diminishing opportunity, the potential for accelerating decline, and a signal to seek new opportunities elsewhere. (See also: law of diminishing returns, marginal utility, and Jevons paradox.) ==== Unsustainable investment ==== A problem arises when the limits of a resource are hard to see, so increasing investment in response to diminishing returns may seem profitable, as in the Tragedy of the Commons, but may lead to a collapse. This problem of increasing investment in diminishing resources has also been studied as a cause of civilization collapse by Joseph Tainter, among others. This natural error in investment policy contributed to the collapse of both the Roman and Mayan civilizations, among others. Relieving over-stressed resources requires reducing pressure on them, not continually increasing it, whether more efficiently or not. ==== Negative effects of waste ==== The designer is responsible for choices that place a demand on natural resources, produce waste, and potentially cause irreversible ecosystem damage. About 80 million tonnes of waste in total are generated in the U.K. alone, for example, each year.
Concerning household waste alone, between 1991–92 and 2007–08 each person in England generated an average of 1.35 pounds of waste per day. Experience has now shown that there is no completely safe method of waste disposal. All forms of disposal have negative effects on the environment, public health, and local economies. Landfills have contaminated drinking water. Garbage burned in incinerators has poisoned air, soil, and water. The majority of water treatment systems change the local ecology. Attempts to control or manage wastes after they are produced fail to eliminate environmental impacts. The toxic components of household products pose serious health risks and aggravate the trash problem. In the U.S., about seven pounds in every ton of household garbage contains toxic materials, such as heavy metals like nickel, lead, cadmium, and mercury from batteries, and organic compounds found in pesticides and consumer products, such as air freshener sprays, nail polish, cleaners, and other products. When burned or buried, toxic materials also pose a serious threat to public health and the environment. The only way to avoid environmental harm from waste is to prevent its generation. Pollution prevention means changing the way activities are conducted and eliminating the source of the problem. It does not mean doing without, but doing differently. For example, preventing waste pollution from litter caused by disposable beverage containers does not mean doing without beverages; it just means using refillable bottles. Industrial designer Victor Papanek has stated that when we design and plan things to be discarded, we exercise insufficient care in design. ==== Waste prevention strategies ==== In planning for facilities, a comprehensive design strategy is needed for preventing the generation of solid waste. Improper waste disposal is associated with increasing pollution, climate change, and other hazardous emissions that pose risks to human wellbeing. Therefore, a good garbage prevention strategy would require that everything brought into a facility is recycled for reuse or recycled back into the environment through biodegradation. This would mean a greater reliance on natural materials or products that are compatible with the environment. Any resource-related development is going to have two basic sources of solid waste — materials purchased and used by the facility and those brought into the facility by visitors. The following waste prevention strategies apply to both, although different approaches will be needed for implementation: use products that minimize waste and are nontoxic; compost or anaerobically digest biodegradable wastes; reuse materials onsite or collect suitable materials for offsite recycling. Consuming fewer resources means creating less waste, and therefore reduces the impact on the environment. ==== Climate change ==== Perhaps the most obvious and overshadowing driver of environmentally conscious sustainable design can be attributed to global warming and climate change. The sense of urgency that now prevails for humanity to take action against climate change has increased manifold in the past thirty years. Climate change can be attributed to several factors, and improper design that does not take the environment into consideration is one of them. While several steps in the field of sustainability have begun, most products, industries, and buildings still consume a lot of energy and create a lot of pollution.
==== Loss of biodiversity ==== Unsustainable design, or simply design, also affects the biodiversity of a region. Improper design of transport highways forces thousands of animals to move further into forest boundaries. Poorly designed hydroelectric dams affect the mating cycle and, indirectly, the numbers of local fish. === Sustainable design principles === While the practical application varies among disciplines, some common principles are as follows: Low-impact materials: choose non-toxic, sustainably produced, or recycled materials that require little energy to process Energy efficiency: use manufacturing processes and produce products that require less energy Emotionally durable design: reducing consumption and waste of resources by increasing the durability of relationships between people and products, through design Design for reuse and recycling: "Products, processes, and systems should be designed for performance in a commercial 'afterlife'." Targeted durability, not immortality, should be a design goal. Material diversity in multicomponent products should be minimized to promote disassembly and value retention. Design impact measures for total carbon footprint and life-cycle assessment for any resource used are increasingly required and available. Many are complex, but some give quick and accurate whole-earth estimates of impacts. One measure estimates any spending as consuming an average economic share of global energy use of 8,000 BTU (8,400 kJ) per dollar and producing CO2 at the average rate of 0.57 kg of CO2 per dollar (1995 dollars US) from DOE figures; on this measure, for example, $100 of average spending would correspond to roughly 800,000 BTU (840,000 kJ) of energy use and about 57 kg of CO2. Sustainable design standards and project design guides are also increasingly available and are vigorously being developed by a wide array of private organizations and individuals. There is also a large body of new methods emerging from the rapid development of what has become known as 'sustainability science' promoted by a wide variety of educational and governmental institutions. Biomimicry: "redesigning industrial systems on biological lines ... enabling the constant reuse of materials in continuous closed cycles..." Service substitution: shifting the mode of consumption from personal ownership of products to provision of services that provide similar functions, e.g., from a private automobile to a carsharing service. Such a system promotes minimal resource use per unit of consumption (e.g., per trip driven). Renewable resource: materials should come from nearby (local or bioregional), sustainably managed renewable sources that can be composted when their usefulness has been exhausted. ==== Bill of Rights for the Planet ==== A model of the new design principles necessary for sustainability is exemplified by the "Bill of Rights for the Planet" or "Hannover Principles", developed by William McDonough Architects for EXPO 2000, held in Hannover, Germany. The Bill of Rights: Insist on the right of humanity and nature to co-exist in healthy, supportive, diverse, and sustainable conditions. Recognize Interdependence. The elements of human design interact with and depend on the natural world, with broad and diverse implications at every scale. Expand design considerations to recognize even distant effects. Respect relationships between spirit and matter. Consider all aspects of human settlement including community, dwelling, industry, and trade in terms of existing and evolving connections between spiritual and material consciousness.
Accept responsibility for the consequences of design decisions upon human well-being, the viability of natural systems, and their right to co-exist. Create safe objects of long-term value. Do not burden future generations with requirements for maintenance or vigilant administration of potential danger due to the careless creation of products, processes, or standards. Eliminate the concept of waste. Evaluate and optimize the full life-cycle of products and processes, to approach the state of natural systems in which there is no waste. Rely on natural energy flows. Human designs should, like the living world, derive their creative forces from perpetual solar income, incorporating this energy efficiently and safely for responsible use. Understand the limitations of design. No human creation lasts forever and design does not solve all problems. Those who create and plan should practice humility in the face of nature. Treat nature as a model and mentor, not an inconvenience to be evaded or controlled. Seek constant improvement by the sharing of knowledge. Encourage direct and open communication between colleagues, patrons, manufacturers, and users to link long-term sustainable considerations with ethical responsibility, and re-establish the integral relationship between natural processes and human activity. These principles were adopted by the World Congress of the International Union of Architects (UIA) in June 1993 at the American Institute of Architects (AIA) Expo 93 in Chicago. Further, the AIA and UIA signed a "Declaration of Interdependence for a Sustainable Future." In summary, the declaration states that today's society is degrading its environment and that the AIA, UIA, and their members are committed to: Placing environmental and social sustainability at the core of practices and professional responsibilities Developing and continually improving practices, procedures, products, services, and standards for sustainable design Educating the building industry, clients, and the general public about the importance of sustainable design Working to change policies, regulations, and standards in government and business so that sustainable design will become the fully supported standard practice Bringing the existing built environment up to sustainable design standards. In addition, the Interprofessional Council on Environmental Design (ICED), a coalition of architectural, landscape architectural, and engineering organizations, developed a vision statement in an attempt to foster a team approach to sustainable design. ICED states: "The ethics, education, and practices of our professions will be directed to shape a sustainable future. . . . To achieve this vision we will join . . . as a multidisciplinary partnership." These activities are an indication that the concept of sustainable design is being supported on a global and interprofessional scale and that the ultimate goal is to become more environmentally responsive. The world needs facilities that are more energy-efficient and that promote conservation and recycling of natural and economic resources. == Economically and socially sustainable design == Environmentally sustainable design is most beneficial when it works hand-in-hand with its two counterparts – economically and socially sustainable design. These three terms are often grouped under the title "triple bottom line."
In addition to financial terms, value can also be measured in relation to natural capital (the biosphere and earth's resources), social capital (the norms and networks that enable collective action), and human capital (the sum total of knowledge, experience, intellectual property, and labor available to society). In some countries the term sustainable design is known as ecodesign, green design or environmental design. Victor Papanek embraced social design, social quality, and ecological quality, but did not explicitly combine these areas of design concern in one term. Sustainable design and design for sustainability are more common terms, including the triple bottom line (people, planet and profit). Advocates such as the Ecothis.EU campaign urge that all three considerations be taken into account when designing a circular economy. == Aspects of environmentally sustainable design == === Emotionally durable design === According to Jonathan Chapman of Carnegie Mellon University, emotionally durable design reduces the consumption and waste of natural resources by increasing the resilience of relationships established between consumers and products. Essentially, product replacement is delayed by strong emotional ties. In his book, Emotionally Durable Design: Objects, Experiences & Empathy, Chapman describes how "the process of consumption is, and has always been, motivated by complex emotional drivers, and is about far more than just the mindless purchasing of newer and shinier things; it is a journey towards the ideal or desired self, that through cyclical loops of desire and disappointment, becomes a seemingly endless process of serial destruction". Therefore, a product requires an attribute, or number of attributes, which extend beyond utilitarianism. According to Chapman, "emotional durability" can be achieved through consideration of the following five elements: Narrative: How users share a unique personal history with the product. Consciousness: How the product is perceived as autonomous and in possession of its own free will. Attachment: Can a user be made to feel a strong emotional connection to a product? Fiction: The product inspires interactions and connections beyond just the physical relationship. Surface: How the product ages and develops character through time and use. As a strategic approach, "emotionally durable design provides a useful language to describe the contemporary relevance of designing responsible, well made, tactile products which the user can get to know and assign value to in the long-term". According to Hazel Clark and David Brody of Parsons The New School for Design in New York, "emotionally durable design is a call for professionals and students alike to prioritise the relationships between design and its users, as a way of developing more sustainable attitudes to, and in, design things". === Beauty and sustainable design === Because standards of sustainable design appear to emphasize ethics over aesthetics, some designers and critics have complained that it lacks inspiration. Pritzker Architecture Prize winner Frank Gehry has called green building "bogus", and National Design Awards winner Peter Eisenman has dismissed it as "having nothing to do with architecture". In 2009, The American Prospect asked whether "well-designed green architecture" is an "oxymoron". Others claim that such criticism of sustainable design is misguided.
A leading advocate for this alternative view is architect Lance Hosey, whose book The Shape of Green: Aesthetics, Ecology, and Design (2012) was the first dedicated to the relationships between sustainability and beauty. Hosey argues not just that sustainable design needs to be aesthetically appealing in order to be successful, but also that following the principles of sustainability to their logical conclusion requires reimagining the shape of everything designed, creating things of even greater beauty. Reviewers have suggested that the ideas in The Shape of Green could "revolutionize what it means to be sustainable". Small and large buildings are beginning to successfully incorporate principles of sustainability into award-winning designs. Examples include One Central Park and the Science Faculty building, UTS. The popular Living Building Challenge has incorporated beauty as one of its petals in building design. Sustainable products and processes need to be beautiful because beauty allows for emotional durability, which increases the probability that they will be maintained and preserved, decreasing their carbon footprint. Many people also argue that biophilia is innately beautiful, which is why building architecture is designed such that people feel close to nature, often surrounded by well-kept lawns – a design that is both "beautiful" and encourages the inculcation of nature in our daily lives. Such architecture may also utilize daylighting, reducing lighting loads while also fulfilling our need to be close to that which is outdoors. === Economic aspects === As discussed above, economics is another aspect of environmental design that is crucial to most design decisions. It is obvious that most people consider the cost of any design before they consider its environmental impacts. Therefore, there is a growing practice of pitching ideas and suggestions for environmentally sustainable design by highlighting the economic profits that they bring. "As the green design field matures, it becomes ever more clear that integration is the key to achieving energy and environmental goals, especially if cost is a major driver." - Building Green Inc. (1999) To achieve the more ambitious goals of the green design movement, architects, engineers, and designers need to further embrace and communicate the profit and economic potential of sustainable design measures. Focus should be on honing skills in communicating the economic and profit potential of smart design, with the same rigor that has been applied to advancing technical building solutions. === Standards of evaluation === Several standards and rating systems have been developed as sustainability gains popularity. Most rating systems revolve around buildings and energy, and some cover products as well. Most rating systems certify on the basis of design as well as post-construction or manufacturing. LEED - Leadership in Energy and Environmental Design Living Building Challenge HERS - Home Energy Rating System WELS rating - Water Efficiency Labelling and Standards BREEAM - Building Research Establishment's Environmental Assessment Method GBI - Green Building Initiative EPA WaterSense Energy Star FSC - Forest Stewardship Council CASBEE - Comprehensive Assessment System for Built Environment Efficiency Passive house Net-Positive Design Net-Positive Design and Assessment computer app While designing for environmental sustainability, it is imperative that attention is paid to the appropriate units.
Often, different standards weigh things in different units, and that can make a huge impact on the outcome of the project. Another important aspect of using standards and looking at data involves understanding the baseline. A poor design baseline with huge improvements often shows a high efficiency percentage, while an intelligent baseline from the start might only need a little improvement and show a smaller change; for example, improving a wasteful 100-unit baseline to 60 units is a 40% saving, while improving an already efficient 50-unit baseline to 45 units is only a 10% saving, even though the second design consumes less overall. Therefore, all data should ideally be compared against similar baselines, and also be looked at in more than one unit. === Greenwashing === Greenwashing is defined as "the process of conveying a false impression or providing misleading information about how a company's products are more environmentally sound". This can be as simple as using green packaging which subconsciously leads a consumer to think that a product is more environmentally friendly than others. Another example is eco-labels. Companies can take advantage of these certifications for appearance and profit, but their exact meanings are unclear and not readily available. Some labels are more credible than others, as they are verified by a credible third party, while others are self-awarded. The labels are badly regulated and prone to deception. Companies are trying to "promote false solutions to the climate crisis that distract from and delay concrete and credible action," says the United Nations in "Greenwashing - the deceptive tactics behind environmental claims". This can lead people to make different decisions on the basis of potentially false narratives. These labels are highly effective: a study in Sweden found that 32.8% of purchase behavior for ecological food can be determined by the presence of an eco-label. Increased transparency of these labels and recycling labels can empower consumers to make better choices. The methods used by most assessment tools can also result in greenwashing, as explained in Net-Positive Design and Sustainable Urban Development. === LCA and product life === Life cycle assessment is the complete assessment of materials across their extraction, transport, processing, refining, manufacturing, maintenance, use, disposal, reuse, and recycling stages. It helps put into perspective whether a design is actually environmentally sustainable in the long run. Products such as aluminum can be reused many times, but their mining and refining are very energy-intensive, which can make them unfavorable. Information such as this is gathered using LCA and then taken into consideration when designing. == Applications == Applications of this philosophy range from the microcosm — small objects for everyday use, through to the macrocosm — buildings, cities, and the Earth's physical surface. It is a philosophy that can be applied in the fields of architecture, landscape architecture, urban design, urban planning, engineering, graphic design, industrial design, interior design, fashion design and human-computer interaction. Sustainable design is mostly a general reaction to global environmental crises, the rapid growth of economic activity and human population, depletion of natural resources, damage to ecosystems, and loss of biodiversity. In 2013, eco architecture writer Bridgette Meinhold surveyed emergency and long-term sustainable housing projects that were developed in response to these crises in her book, "Urgent Architecture: 40 Sustainable Housing Solutions for a Changing World."
Featured projects focus on green building, sustainable design, eco-friendly materials, affordability, material reuse, and humanitarian relief. Construction methods and materials include repurposed shipping containers, straw bale construction, sandbag homes, and floating homes. The limits of sustainable design are shrinking, because growth in goods and services consistently outpaces gains in efficiency. As a result, the net effect of sustainable design has simply been to improve the efficiency of rapidly increasing impacts. This problem is not solved by the current approach, which focuses on the efficiency of delivering individual goods and services. To address these limitations, scientific researchers have proposed new methodologies. For example, the Design for Strong Sustainability (DfSoSy) methodology integrates ecological balance, social equity, and systemic changes into the design process, emphasizing system robustness over mere efficiency improvements. The fundamental dilemmas are as follows: the increasing complexity of efficiency improvements; the difficulty of implementing new technologies in societies built around old ones; the fact that the physical impacts of delivering goods and services are not localized, but are distributed across economies; and the fact that the scale of resource use is growing and not stabilizing. === Sustainable architecture === Sustainable architecture is the design of sustainable buildings. Sustainable architecture attempts to reduce the collective environmental impacts during the production of building components, during the construction process, as well as during the lifecycle of the building (heating, electricity use, carpet cleaning, etc.). This design practice emphasizes efficiency of heating and cooling systems; alternative energy sources such as solar hot water; appropriate building siting; reused or recycled building materials; on-site power generation - solar technology, ground source heat pumps, wind power; rainwater harvesting for gardening, washing and aquifer recharge; and on-site waste management such as green roofs that filter and control stormwater runoff. This requires close cooperation of the design team, the architects, the engineers, and the client at all project stages, from site selection, scheme formation, material selection and procurement, to project implementation. This is also called a charrette. Appropriate building siting and smaller building footprints are vital to an environmentally sustainable design. Oftentimes, a building may be very well designed and energy efficient, but its location requires people to travel far back and forth – increasing pollution that may not be produced by the building itself but is directly a result of the building anyway. Sustainable architecture must also cover the building beyond its useful life. Its disposal or recycling aspects also come under the wing of sustainability. Often, modular buildings are easier to take apart and less energy-intensive to put together, too. The waste from the demolition site must be disposed of correctly, and everything that can be harvested and used again should be designed to be extricated from the structure with ease, preventing unnecessary wastage when decommissioning the building. Another important aspect of sustainable architecture stems from the question of whether a structure is needed. Sometimes the best that can be done to make a structure sustainable is retrofitting or upgrading the building services and supplies instead of tearing it down.
Abu Dhabi, for example, has undergone and is undergoing major retrofitting to slash its energy and water consumption rather than demolishing and rebuilding new structures. Sustainable architects design with sustainable living in mind. The challenge of sustainable versus green design is that designs should not only reflect healthy processes and uses but also be powered by renewable energies and site-specific resources. A test for sustainable design is whether the design can function for its intended use without fossil fuel — unplugged. This challenge suggests that architects and planners design solutions that can function without pollution rather than just reducing pollution. As technology progresses in architecture and design theories, and as examples are built and tested, architects will soon be able not only to create passive, zero-emission buildings, but also to integrate the entire power system into the building design. In 2004 the 59-home housing community, the Solar Settlement, and a 60,000 sq ft (5,600 m2) integrated retail, commercial and residential building, the Sun Ship, were completed by architect Rolf Disch in Freiburg, Germany. The Solar Settlement is the first housing community worldwide in which every home, all 59, produces a positive energy balance. An essential element of sustainable building design is indoor environmental quality, including air quality, illumination, thermal conditions, and acoustics. The integrated design of the indoor environment is essential and must be part of the integrated design of the entire structure. ASHRAE Guideline 10-2011 addresses the interactions among indoor environmental factors and goes beyond traditional standards. Concurrently, the recent movements of New Urbanism and New Classical Architecture promote a sustainable approach towards construction that appreciates and develops smart growth, architectural tradition, and classical design. This is in contrast to modernist and globally uniform architecture, as well as leaning against solitary housing estates and suburban sprawl. Both trends started in the 1980s. The Driehaus Architecture Prize is an award that recognizes efforts in New Urbanism and New Classical Architecture, and is endowed with prize money twice as high as that of the modernist Pritzker Prize. Several advances in sustainable architecture emerged in the late 20th century that are now widely known by ordinary practitioners. These overlapping but distinct paradigms include Biophilic Urbanism, Permaculture, Biomimicry, Bioregional Planning, Regenerative Design, Circular Systems approaches ranging from Cradle to Cradle product design to the Circular Economy, Nature-Based Design, Net-zero Design, Nature Positive Design, and Net-Positive Design. These paradigms go beyond traditional sustainable design, which simply integrates sustainable design techniques and technologies into conventional urban planning patterns and building design templates. Instead, they represent a broader societal shift from aiming for resource and energy efficiency to creating environments that contribute towards net outcomes, such as 'net-positive sustainability'. Net-positive architecture aims to reverse planetary overshoot as well as improve socio-ecological conditions by changing the nature of built environment decision making, design, and assessment. ==== Green design ==== Green design has often been used interchangeably with environmentally sustainable design. It is the practice of creating structures by using environmentally friendly processes.
There is a popular debate about this, with several arguing that green design is in effect narrower than sustainable design, which takes into account a larger system. Green design focuses on short-term goals and, while that is a worthy aim, a larger impact is possible using sustainable design. It is included in the process of creating a sustainable design. Another factor to be considered is that green design has been stigmatized by popular personalities such as Pritzker Architecture Prize winner Frank Gehry, but this branding hasn't reached sustainable design. A large part of that is because environmentally sustainable design is generally used hand in hand with economically sustainable design and socially sustainable design. Finally, green design, although unintentionally, is often associated only with architecture, while sustainable design has been considered under a much larger scope. === Engineering design === Sustainable engineering is the process of designing or operating systems such that they use energy and resources sustainably, in other words, at a rate that does not compromise the natural environment or the ability of future generations to meet their own needs. Common engineering focuses revolve around water supply, production, sanitation, cleaning up of pollution and waste sites, restoring natural habitats, and so on. === Sustainable interior design === Achieving a healthy and aesthetic environment for the occupants of a space is one of the basic rules in the art of interior design. When focusing on the sustainable aspects of the art, interior design can bring the study of functionality, accessibility, and aesthetics together with environmentally friendly materials. The integrated design of the indoor environment is essential and must be part of the integrated design of the entire structure. ==== Goals of sustainable interior design ==== Improving the overall building performance through the reduction of negative impacts on the environment is the primary goal. According to the Environmental Protection Agency (EPA), Americans spend approximately 90% of their time indoors, where the concentrations of some toxins and impurities are frequently two to five times higher than they are outside. Sustainable interior design solutions strive to create truly inspirational rooms while simultaneously enhancing indoor air quality and mitigating the environmental impact of interior design procedures. This requires interior designers to make ethical design choices and include environmental concerns in their work, as interiors and the environment are closely intertwined. Reducing consumption of non-renewable resources, minimizing waste and creating healthy, productive environments are the primary objectives of sustainability. Optimizing site potential, minimizing non-renewable energy consumption, using environmentally preferable products, protecting and conserving water, enhancing indoor environmental quality, and optimizing operational and maintenance practices are some of the primary principles. An essential element of sustainable building design is indoor environmental quality, including air quality, illumination, thermal conditions, and acoustics. Interior design, when done correctly, can harness the true power of sustainable architecture.
==== Incorporating sustainable interior design ==== Sustainable interior design can be incorporated through various techniques: water efficiency, energy efficiency, using non-toxic, sustainable or recycled materials, using manufacturing processes that produce products with more energy efficiency, building longer-lasting and better-functioning products, designing reusable and recyclable products, following sustainable design standards and guidelines, and more. For example, a room with large windows to allow for maximum sunlight should have neutral-colored interiors to help bounce the light around and increase comfort levels while reducing lighting energy requirements. The window size should, however, be carefully considered to avoid glare. Interior designers must take types of paints, adhesives, and other products into consideration during the design and manufacturing phases so they do not contribute to harmful environmental factors. Choosing between a wood floor, a marble-tiled floor or a carpeted floor can reduce energy consumption through the level of insulation that each provides. Materials that can withstand the demands of 24-hour health care facilities, such as linoleum, scrubbable cotton wall coverings, recycled carpeting, and low-toxicity adhesives, can also be used. Furthermore, incorporating sustainability can begin before the construction process begins. Purchasing items from sustainable local businesses, analyzing the longevity of a product, taking part in recycling by purchasing recycled materials, and more should be taken into consideration. Supporting local, sustainable businesses is the first step, as this not only increases the demand for sustainable products, but also reduces unsustainable methods. Traveling widely to find specific products or purchasing products from overseas contributes to carbon emissions in the atmosphere, pulling further away from the sustainable aspect. Once the products are found, it is important to check whether the selection follows the Cradle-to-cradle design (C2C) method and whether the products can be reclaimed, recycled, and reused. Paying close attention to energy-efficient products during this entire process also contributes to the sustainability factors. The aesthetic of a space does not have to be sacrificed in order to achieve sustainable interior design. Every environment and space can incorporate materials and choices that reduce environmental impact while still providing durability and functionality. ==== Promotion of sustainable interior design ==== The mission to incorporate sustainable interior design into every aspect of life is slowly becoming a reality. The International Interior Design Association (IIDA) created the sustainability forum to encourage, support, and educate the design community and the public about sustainability. The Athena Sustainable Materials Institute works with sustainability leaders in various ways to enable smaller footprints in producing and consuming materials. BuildingGreen considers itself the most trusted voice for sustainable and healthy design, offering a variety of resources to dive deep into sustainability. Various acts, such as the Energy Policy Act (EPAct) of 2005 and the Energy Independence and Security Act (EISA) of 2007, have been revised and passed to achieve better efforts towards sustainable design. Federal efforts, such as the signing of a Memorandum of Understanding on the commitment to sustainable design and Executive Order 13693, have also worked to advance these concepts.
Various guideline and standard documents have been published for the sake of sustainable interior design, and rating systems such as LEED (Leadership in Energy and Environmental Design) guide and certify efforts put into motion to contribute to the mission. When incorporating sustainable design into an interior's design is kept as a top goal for a designer, an overall healthy and environmentally friendly space can be achieved. ==== Global examples of sustainable interior design ====
Proximity Hotel in North Carolina, United States of America: The Proximity Hotel was the first hotel to be granted the LEED Platinum certification from the U.S. Green Building Council.
Shanghai Natural History Museum in Shanghai, China: This new museum incorporates evaporative cooling and maintains temperatures through its design and structure.
Vancouver Convention Centre West in Vancouver, Canada: This world-class facility achieved LEED v4.1 Platinum certification, recognizing its exceptional sustainability performance in areas like energy efficiency, water conservation, waste reduction, and indoor air quality.
Bullitt Center in Seattle, Washington, United States of America: Considered "The Greenest Commercial Building in the World," it is the first to achieve the Living Building Challenge certification.
Sydney, Australia became the first city in the country to add green roofs and green walls to its architecture following its "Sustainable Sydney 2030" set of goals.
=== Sustainable urban planning === Sustainable design of cities is the task of designing and planning the outline of cities such that they have a low carbon footprint, have better air quality, rely on more sustainable sources of energy, and have a healthy relationship with the environment. Sustainable urban planning involves many disciplines, including architecture, engineering, biology, environmental science, materials science, law, transportation, technology, economic development, accounting and finance, and government, among others. This kind of planning also develops innovative and practical approaches to land use and its impact on natural resources. New sustainable solutions for urban planning problems can include green buildings and housing, mixed-use developments, walkability, greenways and open spaces, alternative energy sources such as solar and wind, and transportation options. Good sustainable land use planning helps improve the welfare of people and their communities, shaping their urban areas and neighborhoods into healthier, more efficient spaces. Design and planning of neighbourhoods are a major challenge when creating a favourable urban environment. The challenge is based on the principles of an integrated approach to different demands: social, architectural, artistic, economic, sanitary and hygienic. Social demands are aimed at constructing networks and placing buildings in order to create favourable conditions for their convenient use. Architectural-artistic solutions aim at a single spatial composition of an area with the surrounding landscape. Economic demands include rational utilization of area territories. Sanitary and hygienic demands are of particular interest in terms of creating sustainable urban areas. === Sustainable landscape and garden design === Sustainable landscape architecture is a category of sustainable design and energy-efficient landscaping concerned with the planning and design of outdoor space.
Plants and materials may be bought from local growers to reduce energy used in transportation. Design techniques include planting trees to shade buildings from the sun or protect them from wind, using local materials, and on-site composting and chipping not only to reduce green waste hauling but to increase organic matter and therefore carbon in the soil. Some designers and gardeners, such as Beth Chatto, also use drought-resistant plants in arid areas (xeriscaping) and elsewhere so that water is not taken from local landscapes and habitats for irrigation. Water from building roofs may be collected in rain gardens so that the groundwater is recharged, instead of rainfall becoming surface runoff and increasing the risk of flooding. Areas of the garden and landscape can also be allowed to grow wild to encourage biodiversity. Native animals may also be encouraged in many other ways: by plants which provide food such as nectar and pollen for insects, or roosting or nesting habitats such as trees, or habitats such as ponds for amphibians and aquatic insects. Pesticides, especially persistent pesticides, should be avoided so as not to kill wildlife. Soil fertility can be managed sustainably by the use of many layers of vegetation from trees to ground-cover plants and mulches to increase organic matter and therefore earthworms and mycorrhiza; nitrogen-fixing plants instead of synthetic nitrogen fertilizers; and sustainably harvested seaweed extract to replace micronutrients. Sustainable landscapes and gardens can be productive as well as ornamental, growing food, firewood and craft materials from beautiful places. Sustainable landscape approaches and labels include organic farming and growing, permaculture, agroforestry, forest gardens, agroecology, vegan organic gardening, ecological gardening and climate-friendly gardening. === Sustainable agriculture === Sustainable agriculture adheres to three main goals: environmental health, economic profitability, and social and economic equity. A variety of philosophies, policies and practices have contributed to these goals. People in many different capacities, from farmers to consumers, have shared this vision and contributed to it. Despite the diversity of people and perspectives, the following themes commonly weave through definitions of sustainable agriculture. There are strenuous discussions, among others by the agricultural sector and authorities, about whether existing pesticide protocols and methods of soil conservation adequately protect topsoil and wildlife. Doubt has arisen about whether these are sustainable, and whether agrarian reforms would permit an efficient agriculture with fewer pesticides, thereby reducing the damage to the ecosystem. === Energy sector === Sustainable technology in the energy sector is based on utilizing renewable sources of energy such as solar, wind, hydro, bioenergy, geothermal, and hydrogen. Wind energy is the world's fastest growing energy source; it has been in use for centuries in Europe and more recently in the United States and other nations. Wind energy is captured through the use of wind turbines that generate and transfer electricity for utilities, homeowners and remote villages. Solar power can be harnessed through photovoltaics, concentrating solar, or solar hot water, and is also a rapidly growing energy source. Advancements in and modifications to photovoltaic cell technology provide a largely untapped route for creating and producing solar power.
Researchers have found a potential way to use the photogalvanic effect to transform sunlight into electric energy. The availability, potential, and feasibility of primary renewable energy resources must be analyzed early in the planning process as part of a comprehensive energy plan. The plan must justify energy demand and supply and assess the actual costs and benefits to the local, regional, and global environments. Responsible energy use is fundamental to sustainable development and a sustainable future. Energy management must balance justifiable energy demand with appropriate energy supply. The process couples energy awareness, energy conservation, and energy efficiency with the use of primary renewable energy resources. === Water sector === Sustainable water technologies have become an important industry segment, with several companies now providing important and scalable solutions to supply water in a sustainable manner. Beyond the use of certain technologies, sustainable design in water management also depends heavily on the correct implementation of concepts. Among these principal concepts is the fact that, in developed countries, normally 100% of water destined for consumption, even when not intended for drinking, is of potable water quality. This concept of differentiating qualities of water for different purposes has been called "fit-for-purpose". This more rational use of water achieves several economies, related not only to water itself but also to the consumption of energy, as achieving water of drinking quality can be extremely energy intensive for several reasons. === Domestic machinery and furniture === Automobiles, home appliances and furniture can be designed for repair and disassembly (for recycling), and constructed from recyclable materials such as steel, aluminum and glass, and renewable materials, such as wood and plastics from natural feedstocks. Careful selection of materials and manufacturing processes can often create products comparable in price and performance to non-sustainable products. Even mild design efforts can greatly increase the sustainable content of manufactured items. Improvements to heating, cooling, ventilation and water heating are further areas where sustainable design can be applied in the home. === Design for sustainable manufacturing === Sustainable manufacturing can be defined as the creation of a manufactured product through a concurrent improvement in the resulting effect on factory and product sustainability. The concept of sustainable manufacturing demands a renewed design of production systems in order to condition the related sustainability on the product life cycle and factory operations. Designing sustainable production systems implies, on the one hand, the analysis and optimization of intra-factory aspects that are related to manufacturing plants. Such aspects can concern restraining resource consumption, process efficiency, ergonomics for the factory workers, the elimination of hazardous substances, the minimization of factory emissions and waste as well as internal emissions, the integrated management of information in the production facilities, and the technological updating of machines and plants. Other inter-factory aspects concern the sustainable design of manufactured products, product chain dematerialisation, management of the background and foreground supply chains, support of the circular economy paradigm, and labelling for sustainability.
Advantageous reasons why companies might choose to sustainably manufacture their products or use a sustainable manufacturing process include:
Increase operational efficiency by reducing costs and waste
Respond to or reach new customers and increase competitive advantage
Protect and strengthen brand and reputation and build public trust
Build long-term business viability and success
Respond to regulatory constraints and opportunities
== Sustainable technologies == Sustainable technologies use less energy, fewer limited resources, do not deplete natural resources, do not directly or indirectly pollute the environment, and can be reused or recycled at the end of their useful life. They may also be technologies that help identify areas of growth by giving feedback, in the form of data or alerts that can be analyzed to improve environmental footprints. There is significant overlap with appropriate technology, which emphasizes the suitability of technology to the context, in particular considering the needs of people in developing countries. The most appropriate technology may not be the most sustainable one; and a sustainable technology may have high cost or maintenance requirements that make it unsuitable as an "appropriate technology", as that term is commonly used. "Technology is deeply entrenched in our society; without it, society would immediately collapse. Moreover, technological changes can be perceived as easier to accomplish than lifestyle changes that might be required to solve the problems that we face." The design of sustainable technology relies heavily on the flow of new information. Sustainable technology such as smart metering systems and intelligent sensors reduces energy consumption and helps conserve water. These systems involve more fundamental changes, rather than just switching to simple sustainable designs. Such designing requires constant updates and evolution to ensure true environmental sustainability, because the concept of sustainability is ever changing with regard to our relationship with the environment. A large part of designing sustainable technology involves giving control to the users for their comfort and operation. For example, dimming controls help people adjust light levels to their comfort. Sectioned lighting and lighting controls let people manipulate their lighting needs without worrying about affecting others, therefore reducing lighting loads. == Innovation and development == The precursor step to environmentally sustainable development must be a sustainable design. Design can be defined as the purpose, planning, or intention that exists or is thought to exist behind an action, fact, or material object. Development utilizes design and executes it, helping areas, cities, or places to advance. Sustainable development is development which adheres to the values of sustainability and provides for society without endangering the ecosystem and its services. "Without development, design is useless. Without design, development is unusable." – Florian Popescu, How to bridge the gap between design and development. Eco-innovation is the design and development of products and processes that contribute to sustainable development, applying the commercial application of knowledge to elicit direct or indirect ecological improvements. This includes a range of related ideas, from environmentally friendly technological advances to socially acceptable innovative paths towards sustainability.
WIPO GREEN is an online global marketplace for technology exchange, connecting providers and seekers of inventions and innovations in sustainable technology. Several factors drive design innovation in the environmental sphere. These include growing consumer awareness and demand for green products and services, the development and (re)discovery of renewable materials, sustainable refurbishment, new technologies for manufacturing, and the growing use of artificial intelligence-based tools to map needs and identify areas for improved efficiency. Whatever the industry or product, design rights (whether registered or unregistered) can protect innovative design. Design rights (known as design patents in some jurisdictions) are widely used to protect everything from marketing logos and packaging to the shape of furniture and vehicles and the user interfaces of computers and smartphones. Design rights are available in many jurisdictions and through regional systems. Protection can also be obtained internationally using the WIPO-administered Hague System for the International Registration of Designs. == See also == == References ==
Wikipedia/Sustainable_design
An architectural model is a type of scale model made to study aspects of an architectural design or to communicate design intent. They are made using a variety of materials including paper, plaster, plastic, resin, wood, glass, and metal. Models are built either with traditional handcraft techniques or via 3D printing technologies such as stereolithography, fused filament fabrication, and selective laser sintering. == History == The use of architectural models dates to pre-history. Some of the oldest standing models were found in Malta at Tarxien Temples. Those models are now stored at the National Museum of Archaeology in Malta. == Purpose == Architectural models are used by architects for a range of purposes, including:
Ad hoc or "sketch" models are sometimes made to study the interaction of volumes, different viewpoints, or concepts during the design process. They're useful in explaining a complicated or unusual design to builders. They also serve as a focus for discussion between architects, engineers, and town planners.
Presentation models can be used to exhibit, visualize, or sell a final design. A model also serves as a show piece. Once a building is finished, the model is sometimes featured in a common area of the building.
Types of models include:
Exterior models are models of buildings that usually include some landscaping or civic spaces around the building.
Interior models are models showing interior space planning, finishes, colors, furniture, and beautification.
Landscaping design models are models of landscape design and development, representing features such as walkways, small bridges, pergolas, vegetation patterns, and beautification. Landscape design models usually represent public spaces and, in some cases, include buildings as well.
Urban models are typically built at a much smaller scale (starting from 1:500 and less, 1:700, 1:1000, 1:1200, 1:2000, and 1:20,000), representing several city blocks, even a town or village, a large resort, a campus, an industrial facility, a military base, and so on. Urban models are a tool for town and city planning and development. Urban models of large urban areas are displayed at museums such as the Shanghai Urban Planning Exhibition Center, the Queens Museum in New York, the Beijing Planning Exhibition Hall, and the Singapore City Gallery.
Engineering and construction models show isolated building or structure elements and components and their interactions.
== Virtual modeling == Buildings are increasingly designed in software with CAD (computer-aided design) systems. Early virtual modeling involved the fixing of arbitrary lines and points in virtual space, mainly to produce technical drawings. Modern packages include advanced features such as databases of components, automated engineering calculations, visual fly-throughs, dynamic reflections, and accurate textures and colors. As an extension to CAD (computer-aided design) and BIM (building information modeling), virtual reality architectural sessions are also being adopted. This technology enables participants to be immersed in a 1:1 scale model, essentially experiencing the building before it is built. === List of CAD and BIM software ===
Autodesk Revit
AutoCAD
Rhinoceros 3D
SketchUp
ARCHICAD
Vectorworks
Autodesk 3ds Max
== Materials == Rough study models can be made quickly using cardboard, wooden blocks, polystyrene, foam, foam boards, and other materials.
Such models are an efficient design tool for the three-dimensional understanding of a structure, space, or form, and are used by architects, interior designers, and exhibit designers. Common materials used for centuries in the construction of architectural models were card stock, balsa wood, basswood, and other woods. Modern professional architectural model builders use 21st-century materials, such as Taskboard (a flexible and lightweight wood/fiberboard), plastics, wooden and wooden-plastic composites, foams, foam board, and urethane compounds. Several companies produce ready-made pieces for structural components (e.g., girders, beams), siding, furniture, figures (people), vehicles, trees, bushes, and other features that are found in the models. Features such as vehicles, people figurines, trees, streetlights, and others are called "scenery elements" and serve not only to beautify the model but also to help the observer obtain a correct feel of the scale and proportions represented by the model. Increasingly, rapid prototyping techniques such as 3D printing and CNC routing are used to automatically construct models directly from CAD plans. === Cork models === A cork model is an architectural model made predominantly of cork. The art of cork modeling is also called phelloplasty (Greek φελλός phellos, cork). In Naples in the sixteenth century, cork was being used to create Christmas cribs, and the 18th and early 19th centuries saw an increase in the popularity of crib-making there. Augusto Rosa (1738–1784) claimed to have invented the architectural model made of cork, but Giovanni Altieri (documented 1766–1790) and Antonio Chichi (1743–1816) were already active in Rome as manufacturers of cork models. Chichi's models were copied by Carl May (1747–1822) and his son Georg Heinrich May (1790–1853). Other artists include Luigi Carotti (Rome), Carlo Lucangeli (1747–1812, Rome, Naples), Domenico Padiglione and his sons Agostino and Felice (Naples), and Auguste Pelet (1785–1865, Nîmes). In Marseille, several scale models representing archaeological digs were made by Hippolyte Augier (1830–1889) (Marseille History Museum/Musée d’Histoire de Marseille) and Stanislas Clastrier (1857–1925). Dieter Cöllen is an example of a contemporary phelloplast who continues the art. ==== Collections ==== Many cork models of classical monuments in Italy were made and sold to tourists during their Grand Tour. Cork, especially when carefully painted, was ideal to reproduce the weathered look of wall surfaces. As a rule, the models were produced on a large scale (the Colosseum in Aschaffenburg is three meters long and one meter high) and with high precision. Cork models were esteemed in the princely courts of the 18th century. They were also acquired for their scientific value by schools of architecture in the late 18th and early 19th centuries, or by institutions like the Society of Antiquaries of London and the British Museum, as a way of introducing the general public to ancient architecture. Despite their fragility, cork models have often survived better than wooden models threatened by wood-destroying insects. Apart from kings and princes, cork models were collected by people such as Filippo Farsetti (1703–1744) in Venice, Pierre Gaspard Marie Grimod d'Orsay (1748–1809) and the architect Louis-François Cassas in France, and Charles Townley and Sir John Soane in London; Soane turned his home into what is now Sir John Soane's Museum, which houses a collection of 14 cork models of Roman and Greek buildings.
Chichi's cork models can be found at the Imperial Academy of Arts in Saint Petersburg, Russia (34 models made around 1774); Schloss Wilhelmshöhe, Kassel (33 models made 1777–1782); Hessisches Landesmuseum Darmstadt (26 models acquired 1790–91); and the Herzogliches Museum Gotha (12 models acquired after 1777–1778). The largest collection of cork models by Carl May, with 54 pieces (after war losses), is in Aschaffenburg (Schloss Johannisburg); another large collection of his models is in the Staatliches Museum Schwerin. In France, the Musée des Antiquités Nationales in Saint-Germain-en-Laye has works by Rosa, Lucangeli and Pelet. The Musée archéologique de Nîmes and the Marseille History Museum also have cork models. Modern cork models of antique buildings by Dieter Cöllen are exhibited in the Praetorium in Cologne. == Scales == Architectural models are constructed at a much smaller scale than the buildings they represent. The scales and their architectural use are broadly as follows (a short worked example of converting real dimensions to model dimensions is given at the end of this article):
1:1 Full (or real) size for details
1:2 Details
1:5 Details
1:10 Interior spaces and furniture
1:20 Interior spaces and furniture
1:50 Interior spaces, detailed floor plans, and different floor levels
1:100 Building plans and layouts
1:200 Building plans and layouts
1:500 Building layouts or site plans
1:1000 Urban scale for site or location plans
1:1250 Site plans
1:2500 Site plans and city maps
1:5000 City maps/island
Sometimes model railroad scales such as 1:160 and 1:87 are used due to the ready availability of commercial figures, vehicles, and trees in those scales, and models of large buildings are most often built in approximately that range of scales due to size considerations. == See also ==
Architectural rendering
Maquette
Mockup
Origamic architecture (OA)
Scale model
Superquick
Cardboard modeling
== References == == Further reading ==
Fankhänel, Teresa. The Architectural Models of Theodore Conrad: The “Miniature Boom” of Mid-Century Modernism. London: Bloomsbury, 2021.
Lepik, Andres. Das Architekturmodell in Italien, 1335–1550. Römische Studien Der Bibliotheca Hertziana, Bd. 9. Worms: Wernersche Verlagsgesellschaft, 1994.
Liptau, Ralf. Architekturen Bilden: Das Modell in Entwurfsprozessen Der Nachkriegsmoderne. Bielefeld: Transcript, 2019.
Lund, David. A History of Architectural Modelmaking in Britain: The Unseen Masters of Scale and Vision. London; New York: Routledge, Taylor & Francis, 2023.
Mindrup, Matthew. The Architectural Model: Histories of the Miniature and the Prototype, the Exemplar and the Muse. Cambridge, Massachusetts: The MIT Press, 2019.
Smith, Albert C. Architectural Model as Machine: A New View of Models from Antiquity to the Present Day. Amsterdam: Elsevier, 2004.
Wells, Matthew. Modelling the Metropolis: The Architectural Model in Victorian London. Architectural Knowledge. Zürich: gta Verlag, 2023.
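The conversions implied by the scale list above are simple arithmetic: at a scale of 1:n, every real dimension is divided by n. The sketch below is a minimal Python illustration of that rule; the building length and the helper function name are hypothetical, not taken from the article.

```python
# Minimal sketch: converting real dimensions to architectural-model dimensions.
# A scale of "1:200" means 1 unit on the model represents 200 units in reality.

def model_size(real_size_m: float, scale_denominator: int) -> float:
    """Return the model dimension (in metres) for a real dimension at scale 1:denominator."""
    return real_size_m / scale_denominator

if __name__ == "__main__":
    building_length_m = 45.0   # hypothetical building length
    for denom in (50, 100, 200, 500):
        size_mm = model_size(building_length_m, denom) * 1000  # convert to millimetres
        print(f"1:{denom}: {building_length_m} m becomes {size_mm:.0f} mm on the model")
```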
Wikipedia/Architectural_model
Information design is the practice of presenting information in a way that fosters an efficient and effective understanding of the information. The term has come to be used for a specific area of graphic design related to displaying information effectively, rather than just attractively or for artistic expression. Information design is closely related to the field of data visualization and is often taught as part of graphic design courses. The broad applications of information design along with its close connections to other fields of design and communication practices have created some overlap in the definitions of communication design, data visualization, and information architecture. According to Per Mollerup, information design is explanation design. It explains facts of the universe and leads to knowledge and informed action. == History == The term 'information design' emerged as a multidisciplinary area of study in the 1970s. Use of the term is said to have started with graphic designers and it was solidified with the publication of the Information Design Journal in 1979. Later, the related International Institute for Information Design (IIID) was set up in 1987 and Information Design Association (IDA) established in 1991. In 1982, Edward Tufte produced a book on information design called The Visual Display of Quantitative Information. The term information graphics tends to be used by those primarily concerned with diagramming and display of quantitative information, such as technical communicators and graphic designers. In technical communication, information design refers to creating an information structure for a set of information aimed at specified audiences. It can be practised on different scales. On a large scale, it implies choosing relevant content and dividing it into separate manuals by audience and purpose. On a medium scale, it means organizing the content in each manual and making sure that overviews, concepts, examples, references, and definitions are included and that topics follow an organizing principle. On a small or detailed scale, it includes logical development of topics, emphasis on what's important, clear writing, navigational clues, and even page design, choice of font, and use of white space. There are many similarities between information design and information architecture. The title of information designer is sometimes used by graphic designers who specialize in creating websites. The skillset of the information designer, as the title is applied more globally, is closer to that of the information architect in the U.S. Similar skills for organization and structure are brought to bear in designing web sites and digital media, with additional constraints and functions that earn a designer the title information architect. In computer science and information technology, 'information design' is sometimes a rough synonym for (but is not necessarily the same discipline as) information architecture, the design of information systems, databases, or data structures. This sense includes data modeling and process analysis. == Early examples == Information design is associated with the age of technology but it does have historical roots. 
Early instances of modern information design include these effective examples:
William Playfair's line, bar, pie, and area charts illustrating England's trade (1786 and 1801)
John Snow's spot maps, which pinpointed the source of a deadly cholera outbreak in 1850s London
Charles Joseph Minard's 1861 diagram depicting Napoleon's Russian campaign of 1812
W.E.B. Du Bois's data visualization on the lives of Black Americans for the 1900 World's Fair
Otto Neurath's International Picture Language of the 1930s
Florence Nightingale's information graphic depicting army mortality rates
The Minard diagram shows the losses suffered by Napoleon's army in the 1812–1813 period. Six variables are plotted: the size of the army, its location on a two-dimensional surface (x and y), time, direction of movement, and temperature. This multivariate display on a two-dimensional surface tells a story that can be grasped immediately while identifying the source data to build credibility. Edward Tufte wrote in 1983 that: "It may well be the best statistical graphic ever drawn." == Applications == Information design can be used for broad audiences (such as signs in airports) or specific audiences (such as personalized telephone bills). The resulting work often seeks to improve a user's trust of a product (such as medicine packaging inserts, operational instructions for industrial machinery and information for emergencies). The example of signs also highlights a niche category known as wayfinding. Governments and regulatory authorities have legislated about a number of information design issues, such as the minimum size of type in financial small print, the labelling of ingredients in processed food, and the testing of medicine labelling. Examples of this are the Truth in Lending Act in the USA, which introduced the Schumer box (a concise summary of charges for people applying for a credit card), and the Guideline on the Readability of the Labelling and Package Leaflet of Medicinal Products for Human Use (European Commission, Revision 1, 12 January 2009). Professor Edward Tufte explained that users of information displays are executing particular analytical tasks such as making comparisons or determining causality. The design principle of the information graphic should support the analytical task, showing the comparison or causality. == Simplicity == Simplicity is a major concern in information design. The aim is clarity and understanding. Simplification of messages may imply quantitative reduction but is not restricted to that. Sometimes more information means more clarity. Also, simplicity is a highly subjective matter and should always be evaluated with the information user in mind. In information design, simplicity can be achieved by following five simple steps: tell the truth, get to the point, pick the right tool for the job, highlight what is important, and, of course, keep it simple. These steps will help an information designer narrow down results, as well as keeping their audience engaged.
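As a rough illustration of how a single graphic can encode several variables at once, in the spirit of the Minard diagram described above, the sketch below draws a path whose x/y position encodes two variables and whose line width encodes a third. It is a hypothetical Python example using matplotlib; the data values are invented for illustration only.

```python
# Hypothetical multivariate line chart: x/y give position, line width encodes magnitude.
import matplotlib.pyplot as plt

# Invented data: (x, y, magnitude) triples, loosely analogous to
# (longitude, latitude, army size) in Minard's diagram.
points = [(0, 0, 100), (1, 0.5, 80), (2, 0.4, 55), (3, 1.0, 30), (4, 0.8, 12)]

fig, ax = plt.subplots()
for (x0, y0, m0), (x1, y1, m1) in zip(points, points[1:]):
    # Draw each segment with a width proportional to the magnitude at its start.
    ax.plot([x0, x1], [y0, y1], linewidth=m0 / 10, color="tan", solid_capstyle="round")

ax.set_xlabel("position (x)")
ax.set_ylabel("position (y)")
ax.set_title("Line width encoding a third variable")
plt.show()
```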
== See also == == References == == External links ==
InformationDesign.org
International Institute for Information Design
Communication Research Institute: Defining information design
Information Design Journal Archived 2006-02-16 at the Wayback Machine
UK Information Design Association
Society for Technical Communication Information Design – Information Architecture (ID–IA) Special Interest Group (SIG)
Visualizing Information for Advocacy: An Introduction to Information Design – booklet on information design for non-profit and non-governmental organizations.
McCandless, David (2010). "The beauty of data visualization".
Wikipedia/Information_design
Virtual home design software is a type of computer-aided design software intended to help architects, designers, and homeowners preview their design implementations on-the-fly. These products may differ from traditional homeowner design software and other online design tools in that they may use HTML5 to provide dynamic previews of changes. This category of software as a service may put an emphasis on usability, speed, and customization. == Background == Homeowners, contractors, and architects use virtual home exterior design software to help visualize changes to designs. Since virtual home design suites that use HTML5 are able to rapidly propagate changes to the home design, users can A/B test designs much more efficiently than with previous iterations of online design software. Virtual home design software has found widespread usage among homeowners who have suffered property damage, as server-side, HTML5-based design software is ideal for homeowners who wish to see what certain products will look like on damaged areas of their houses. Many apps are aimed at use by non-professionals. Ikea has experimented with this technology and made various tools available online to the general public. Augmented reality has also been used. == Examples == Several manufacturers use virtual home design software to display their products online; these include GAF Materials Corporation, James Hardie, Exterior Portfolio, and CertainTeed. Some companies, such as Design My Exterior, have built virtual home design software that is not limited to particular products or brands in order to allow for greater flexibility by the end-user. Design My Exterior also uses ImageMapster in order to generate a greater range of options with less processing time. Live Home 3D is virtual home design software for Microsoft Windows and macOS. == Future applications == Several companies are experimenting with virtual reality for architecture. They design virtual homes and allow customers to walk around in them with the help of a VR headset (such as the Oculus Rift). This way, customers get a realistic, true-to-scale idea of the result. == Gallery == == References ==
Wikipedia/Virtual_home_design_software
Environmental impact design (EID) is the design of development projects so as to achieve positive environmental objectives that benefit the environment and raise the stock of public goods. == Examples == Examples of EID include:
Habitat creation as a result of afforestation projects that can "expand forest resources and reduce the gap between timber production and consumption." An example is the China Afforestation Project.
Coastal management projects that strengthen biodiversity and promote sustainable use of biological resources.
Flood defense projects that improve livability in flood-prone areas by reducing future losses. Flood preparedness and mitigation systems can aid in handling periodic flooding.
Bridge designs such as concrete bridges that are sustainable, recyclable, durable and can be built quickly, reducing greenhouse gas emissions caused by traffic delays and construction equipment.
== Types == Environmental impact design impacts can be broken down into three types:
Direct impacts: caused by the project and building process, such as land consumption, erosion and loss of vegetation.
Indirect impacts: side-effects of a project, such as degradation of surface water quality from erosion of land cleared as a result of a project. Over time, indirect impacts can affect larger geographical areas.
Cumulative impacts: synergistic effects such as the impairment of water regulation and filtering capabilities of wetland systems due to construction.
Environmental impacts of design must consider the site of the project. Environmental impact design should address issues revealed by environmental impact assessments (EIA). EID looks for ways to minimize costs to the developer, while maximizing the benefit to the environment. == Construction == Historically in construction, the needs of the owner were paramount, as constrained by local laws and policies, such as building safety and zoning. EID broadens those concerns to encompass environmental impacts. Low impact development and ecologically focused building practices originated in Germany following World War II. The widespread destruction and a large homeless population gave Germans the chance to refocus building practices. Prefabrication was adopted in both East and West Germany where, in the 1950s and 60s, modular construction systems were developed for residential buildings. == International programs == In 1992, at the Earth Summit, policy makers adopted Agenda 21, which focused on sustainable development. In 1996, the UN Conference on Human Settlements Habitat II discussed transferring sustainable building practices to an urban scale. From 1999 to 2003, the U.S. Green Building Council kick-started Leadership in Energy and Environmental Design (LEED), which is now the most well-known standard for green building. == Building life cycle == The "building life cycle" is an approach to design that considers environmental impacts such as pollution and energy consumption over the life of the building. This theory evolved into the idea of cradle-to-cradle design, which adds the notion that at the end of a building's life, it should be disposed of without environmental impact. The Triple Zero standard requires lowering energy, emissions and waste to zero. A successful life cycle building adopts approaches such as the use of recycled materials in the construction process as well as green energy. == See also ==
Environmental impact assessment
Hydropower Sustainability Assessment Protocol
Landscape planning
Phytoremediation
== References ==
Wikipedia/Environmental_impact_design
Sonic interaction design is the study and exploitation of sound as one of the principal channels conveying information, meaning, and aesthetic/emotional qualities in interactive contexts. Sonic interaction design is at the intersection of interaction design and sound and music computing. If interaction design is about designing objects people interact with, and such interactions are facilitated by computational means, in sonic interaction design, sound is mediating interaction either as a display of processes or as an input medium. == Research areas == === Perceptual, cognitive, and emotional study of sonic interactions === Research in this area focuses on experimental scientific findings about human sound reception in interactive contexts. During closed-loop interactions, the users manipulate an interface that produces sound, and the sonic feedback affects in turn the users’ manipulation. In other words, there is a tight coupling between auditory perception and action. Listening to sounds might not only activate a representation of how the sound was made: it might also prepare the listener to react to the sound. Cognitive representations of sounds might be associated with action-planning schemas, and sounds can also unconsciously cue a further reaction on the part of the listener. Sonic interactions have the potential to influence the users’ emotions: the quality of the sounds affects the pleasantness of the interaction, and the difficulty of the manipulation influences whether the user feels in control or not. === Product sound design === Product design in the context of sonic interaction design is dealing with methods and experiences for designing interactive products having a salient sonic behaviour. Products, in this context, are either tangible and functional objects that are designed to be manipulated, or usable simulations of such objects as in virtual prototyping. Research and development in this area relies on studies from other disciplines, such as: product sound quality; acoustic ecology, i.e. the relationship, mediated through sound, between living beings and their environment; film sound; computer and video game sound; sound culture, i.e. the study of how the production and consumption of sound have changed throughout history and within different societies. In design research for sonic products a set of practices have been inherited from a variety of fields. Such practices have been tested in contexts where research and pedagogy naturally intermix. Among these practices it suffices to mention: bodystorming, especially when combined with vocal sketching, where participants produce vocal imitations to mimic the sonic behavior of objects while they are being interacted with; theatrical practices, such as theatrical metaphors and dramatic performance; basic design, based on demonstrations and intersubjectivity; video prototyping with sonic overlays; Foley artistry in filmmaking; acting out sound dramas. === Interactive art and music === In the context of sonic interaction design, interactive art and music projects are designing and researching aesthetic experiences where sonic interaction is in the focus. The creative and expressive aspects – the aesthetics – are more important than conveying information through sound. Practices include installations, performances, public art and interactions between humans through digitally-augmented objects/environments. These often integrate elements such as embedded technology, gesture-sensitive devices, speakers or context-aware systems. 
The experience is in the focus, addressing how humans are affected by the sound, and vice versa. Interactive art and music allows researchers to question existing paradigms and models of how humans interact with technology and sound, going beyond paradigms of control (a human controlling a machine). Users are part of a loop which includes action and perception. Interactive art and music projects invite explorative actions and playful engagement. There is also a multi-sensory aspect; haptic-audio and audio-visual projects are especially popular. Amongst many other influences, this field is informed by the merging of the roles of instrument-maker, composer and performer. Artistic research in sonic interaction design is about productions in the interactive arts and performing arts, exploiting the role of enactive engagement with sound-augmented interactive objects. === Sonification === Sonification is the data-dependent generation of sound, if the transformation is systematic, objective and reproducible, so that it can be used as a scientific method. For sonic interaction design, sonification provides a set of methods to create interaction sounds that encode relevant data, so that the user can perceive or interpret the conveyed information. Sonification does not necessarily need to represent huge amounts of data in sound, but may convey only one or a few data values in a sound. To give an example, imagine a light switch that, on activation, would create a short sound that depends on the electric power consumed through the cable: more energy-wasting lamps would perhaps systematically result in more annoying switch sounds. This example shows that sonification aims to provide some information by using its systematic transformation into sound (a minimal illustrative sketch of such a mapping is given at the end of this article). The integration of data-driven elements in interaction sound may serve different purposes:
to allow the users to refine their actions via auditory feedback, for example a sonification-enhanced drilling machine that indicates by sound when a desired orientation relative to the wall is reached;
to influence how humans perceive their own body movement;
to create a sonic gestalt for the interaction which allows users to compare the detailed performance of repeated interactions: for instance, rowing strokes may be sonified so that athletes can better synchronize their actions;
to enable novel functions that would otherwise not be available (e.g. a bottle that displays by sound how much fluid is poured into glasses, so that the users can more easily fill an equal amount of liquid into different glasses).
Within the field of sonification, sonic interaction design acknowledges the importance of human interaction for understanding and using auditory feedback. Within sonic interaction design, sonification can help and offer solutions, methods, and techniques to inspire and guide the design of products or interactive systems. == See also == == References == == Further reading == Franinović, K., and Serafin, S., Eds. Sonic interaction design. The MIT Press, Cambridge, Massachusetts, 2013. Stefano Delle Monache, Pietro Polotti, Davide Rocchesso (2010). A Toolkit for Explorations in Sonic Interaction Design. In: Proceedings of the 5th Audio Mostly Conference: A Conference on Interaction with Sound, 2010, New York (AM '10), ISBN 978-1-4503-0046-9, doi:10.1145/1859799.1859800. Eoin Brazil and Mikael Fernström (2009). Empirically Based Auditory Display Design. In: Proceedings of the SMC 2009 - 6th Sound and Music Computing Conference, 23–25 July 2009, Porto, Portugal.
Available: online. Karmen Franinović, Yon Visell, Daniel Hug, (2007). Sound Embodied: A Report on Sonic Interaction Design in Everyday Artifacts. In: Proceedings of the 13th International Conference on Auditory Display, Montréal, Canada, June 26–29, 2007. Available: online. Ernest A. Edmonds, Alastair Weakley, Linda Candy, Mark Fell, Roger Knott, and Sandra Pauletto, (2005). "The Studio as Laboratory: Combining Creative Practice and Digital Technology Research". International Journal of Man-Machine Studies 63(4–5): 452–481. Available: online. Ernest Edmonds, Andrew Martin, and Sandra Pauletto, (2004). Audio-visual Interfaces in Digital Art. In: Proceedings of the 2004 ACM SIGCHI International Conference on Advances in computer entertainment technology (ACE '04), Singapore, June 3–5, 2004, pp. 331–336, doi:10.1145/1067343.1067392. Available: online. Thomas Hermann and Andy Hunt, (2004). The Importance of Interaction in Sonification. In: Proceedings of ICAD Tenth Meeting of the International Conference on Auditory Display, Sydney, Australia, July 6–9, 2004. Available: online. Niklas Röber and Maic Masuch, (2004). Interacting With Sound: An Interaction Paradigm for Virtual Auditory Worlds. In: Proceedings of ICAD Tenth Meeting of the International Conference on Auditory Display, Sydney, Australia, July 6–9, 2004. Available: online. == External links ==
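The light-switch example in the sonification section above maps a single data value (electrical power) to a sound. The sketch below is a minimal, hypothetical illustration of such a parameter mapping in Python using only the standard library: a measured power value is mapped to the pitch of a short sine tone and written to a WAV file. The mapping function, the constants, and the file name are invented for illustration and are not taken from any of the systems cited above.

```python
# Minimal parameter-mapping sonification sketch: one data value -> pitch of a short tone.
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second

def power_to_frequency(power_watts: float) -> float:
    """Map power draw to pitch: higher consumption -> higher (more insistent) tone."""
    return 220.0 + 10.0 * power_watts  # hypothetical linear mapping

def write_tone(filename: str, frequency_hz: float, duration_s: float = 0.3) -> None:
    """Write a short sine tone at the given frequency to a 16-bit mono WAV file."""
    n_samples = int(SAMPLE_RATE * duration_s)
    with wave.open(filename, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)          # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        for i in range(n_samples):
            value = math.sin(2 * math.pi * frequency_hz * i / SAMPLE_RATE)
            wav.writeframes(struct.pack("<h", int(value * 32767)))

if __name__ == "__main__":
    measured_power_watts = 60.0  # invented reading from the switched circuit
    write_tone("switch_feedback.wav", power_to_frequency(measured_power_watts))
```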
Wikipedia/Sonic_interaction_design
Romulus is boundary representation (b-rep) solid modeling software, first released in 1978 by Ian Braid, Charles Lang, Alan Grayer, and the Shape Data team in Cambridge, England. It was the first commercial solid modeling kernel designed for straightforward integration into computer-aided design (CAD) software. Romulus incorporated the CAM-I AIS (Computer Aided Manufacturing-International's Application Interface Specification) and was the only solid modeler (other than its successors Parasolid and ACIS) ever to offer a third-party standard application programming interface (API) to facilitate high-level integration into a host CAD software program. Romulus was quickly licensed by Siemens, Hewlett-Packard (HP), and several other CAD software vendors. == See also == Comparison of computer-aided design software Shape Data == References ==
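To illustrate what a boundary-representation model involves, the sketch below shows a toy b-rep data structure in Python: a solid described through faces bounded by edges and vertices. It is a generic, heavily simplified illustration only; it does not reflect the actual Romulus, Parasolid, or ACIS APIs, which combine topology with exact geometry and provide operations such as Booleans and Euler operators.

```python
# Toy boundary-representation (b-rep) structures: a solid as faces bounded by edges and vertices.
from dataclasses import dataclass

@dataclass(frozen=True)
class Vertex:
    x: float
    y: float
    z: float

@dataclass(frozen=True)
class Edge:
    start: Vertex
    end: Vertex

@dataclass
class Face:
    boundary: list  # ordered list of Edge objects forming a closed loop

@dataclass
class Solid:
    faces: list  # list of Face objects enclosing a volume

def unit_square_face(z: float = 0.0) -> Face:
    """Build one square face at height z, as a closed loop of four edges."""
    corners = [Vertex(0, 0, z), Vertex(1, 0, z), Vertex(1, 1, z), Vertex(0, 1, z)]
    edges = [Edge(corners[i], corners[(i + 1) % 4]) for i in range(4)]
    return Face(boundary=edges)

if __name__ == "__main__":
    face = unit_square_face()
    print(f"Face with {len(face.boundary)} edges; first edge starts at {face.boundary[0].start}")
```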
Wikipedia/Romulus_(modelling_kernel)
Parametric design is a design method in which features, such as building elements and engineering components, are shaped based on algorithmic processes rather than direct manipulation. In this approach, parameters and rules establish the relationship between design intent and design response. The term parametric refers to the input parameters that are fed into the algorithms. While the term now typically refers to the use of computer algorithms in design, early precedents can be found in the work of architects such as Antoni Gaudí. Gaudí used a mechanical model for architectural design (see analogical model) by attaching weights to a system of strings to determine shapes for building features like arches. Parametric modeling can be classified into two main categories:
Propagation-based systems, where algorithms generate final shapes that are not predetermined based on initial parametric inputs.
Constraint systems, in which final constraints are set, and algorithms are used to define fundamental aspects (such as structures or material usage) that satisfy these constraints.
Form-finding processes are often implemented through propagation-based systems. These processes optimize certain design objectives against a set of design constraints, allowing the final form of the designed object to be "found" based on these constraints. Parametric tools enable reflection of both the associative logic and the geometry of the form generated by the parametric software. The design interface provides a visual screen to support visualization of the algorithmic structure of the parametric schema to support parametric modification. The principle of parametric design can be defined as mathematical design, where the relationships between the design elements are expressed as parameters which can be reformulated to generate complex geometries; these geometries are based on the elements’ parameters, and by changing these parameters, new shapes are created simultaneously. In parametric design software, designers and engineers are free to add and adjust the parameters that affect the design results, for example materials, dimensions, user requirements, and user body data. In the parametric design process, the designer can reveal versions of the project and the final product, without going back to the beginning, by establishing the parameters and the relationships between the variables after creating the first model. Any change of parameters, such as editing or developing the design, will be automatically and immediately updated in the model, which is like a “short cut” to the final model. == Parameter == The word parameter derives from the Greek for para (besides, before or instead of) + metron (measure). If we look at the Greek origin of the word, it becomes clear that the word means a term that stands in for or determines another measure. In parametric CAD software, the term parameter usually signifies a variable term in equations that determine other values. A parameter, as opposed to a constant, is characterized by having a range of possible values. One of the most seductive powers of a parametric system is the ability to explore many design variations by modifying the value of a few controlling parameters. == History (early examples) == === Analogue parametric design === One of the earliest instances of parametric design was the upside-down model of churches by Antoni Gaudí.
In his design for the Church of Colònia Güell, he created a model of strings weighted down with birdshot to create complex vaulted ceilings and arches. By adjusting the position of the weights or the length of the strings, he could alter the shape of each arch and observe the impact on the connected arches. He placed a mirror at the bottom of the model to see how it would appear when built right-side-up. ==== Features of Gaudí's method ==== Gaudí's analog method incorporated the main features of a computational parametric model (input parameters, equation, output): The string length, birdshot weight, and anchor point location function as independent input parameters. The vertex locations of the points on the strings serve as the model's outcomes. The outcomes are derived using explicit functions, in this case, gravity or Newton's law of motion. By modifying individual parameters of these models, Gaudí could generate different versions of his model while ensuring the resulting structure would stand in pure compression. Instead of manually calculating the results of parametric equations, he could automatically derive the shape of the catenary curves through the force of gravity acting on the strings. German architect Frei Otto also experimented with non-digital parametric processes, using soap bubbles to find optimal shapes of tensegrity structures such as in the Munich Olympic Stadium, designed for the 1972 Summer Olympics in Munich. == Architecture == Nature has often served as inspiration for architects and designers. Computer technology has provided designers and architects with the tools to analyze and simulate the complexity observed in nature and apply it to structural building shapes and urban organizational patterns. In the 1980s, architects and designers began using computers running software developed for the aerospace and moving picture industries to "animate form". One of the first architects and theorists to use computers to generate architecture was Greg Lynn. His blob and fold architecture are early examples of computer-generated architecture. The new Terminal 3 of Shenzhen Bao'an International Airport, completed in 2013, was designed by Italian architect Massimiliano Fuksas with parametric design support from engineering firm Knippers Helbig. It serves as an example of the use of parametric design and production technologies in a large-scale building. In the general architectural design, all design aspects and their dimensions can be considered as parameters, such as location, orientation, shape, solar radiation and so on. The iterative process is an approach to continuously improving a concept, design, or product. Creators produce a prototype, test it, tweak it, and repeat the cycle with the goal of getting closer to the solution. In the case of parametric architecture, iteration can, in principle, create variation at every pass through the same set of instructions. Examples may include varying the size and shape of a floor plate as one builds a skyscraper, or changing the angle of a modular cladding system as it is tiled over an undulating surface. In addition to producing variation, iteration can be a powerful tool for both optimization and minimizing the time needed to achieve that optimization. Using a fluid parametric system, which can give immediate feedback, a designer can generate solutions and test them rapidly by iterating through many possibilities, each created with a different set of parameters. 
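To make the input-parameters / explicit-function / outcome pipeline described above concrete, here is a minimal Python sketch inspired by Gaudí's hanging-chain method; it is not taken from any cited tool, and the function name and parameter values are illustrative only. The string parameters act as inputs, the catenary equation produced by gravity is the explicit function, and the inverted arch profile is the outcome; changing a single parameter regenerates the whole shape, in the spirit of the iterative parametric workflow described above.

```python
import math

def catenary_arch(span: float, a: float, samples: int = 21):
    """Return (x, y) points of an arch obtained by inverting a hanging chain.

    span -- horizontal distance between the two supports
    a    -- catenary parameter (larger a -> shallower chain, flatter arch)
    The chain hangs as y = a*cosh(x/a); mirroring it (as Gaudí did with a
    mirror under his string model) turns pure tension into pure compression.
    """
    half = span / 2.0
    xs = [-half + i * span / (samples - 1) for i in range(samples)]
    hang = [a * math.cosh(x / a) for x in xs]           # hanging chain
    rim = max(hang)                                      # height at the supports
    return [(x, rim - y) for x, y in zip(xs, hang)]      # inverted: the arch

# Changing a single input parameter regenerates the whole shape.
for a in (2.0, 4.0, 8.0):
    arch = catenary_arch(span=10.0, a=a)
    crown = max(y for _, y in arch)
    print(f"a = {a:4.1f} -> arch rise = {crown:5.2f}")
```

Running the loop shows how one controlling parameter sweeps out a family of arches, from tall and pointed to shallow and flat, without redrawing anything by hand.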
== Urban design == Parametric urbanism focuses on the study and prediction of settlement patterns. Architect Frei Otto identifies occupying and connecting as the two fundamental processes involved in all urbanization. Parametric processes can help optimize pedestrian or vehicle circulation, block and façade orientations, and instantly compare the different performances of multiple urban design options. Parametric design techniques enable architects and urban designers to better address and respond to diverse urban contexts, environmental challenges, and social issues. By integrating data and analysis into the design process, parametric urbanism allows for more informed and adaptive solutions to urban design challenges, ultimately leading to more resilient and sustainable urban environments. == Industrial design == With the development of technology and the improvement of people's quality of life, more and more factors affect the final result of interior and furniture design. Space, form, color, line, light, pattern, and texture are all influencing elements. The parametric design method brings industrial designers more design possibilities, and it gives furniture designers opportunities to take on more complex furniture structures and create more complex shapes. When dealing with ergonomic problems, parametric design methods can help designers create realistic digital usage scenarios and provide more comfortable design concepts. Using design tables in the furniture industry to implement parametric design is useful when a large order needs to be fulfilled with different sizes of the same model of furniture, as it reduces work time and the possibility of error. == Software == == See also == Design computing – Computing as applied to design Generative design – Iterative design process Parametricism – Modern architectural style Parametrization Responsive computer-aided design – Approach to computer-aided design Typography – Art of arranging type Visual programming language – Programming language written graphically by a user IJP The Book of Surfaces – Book about the geometry and philosophy of architectural surfaces == References ==
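As an illustration of the design-table idea mentioned in the Industrial design section above, here is a minimal Python sketch; the furniture model, dimensions and panel names are all hypothetical, and real CAD design tables would drive 3D geometry rather than a printed cut list.

```python
# A toy "design table" for a parametric shelf unit: each row is an order
# variant of the same model, differing only in its driving dimensions (cm).
design_table = [
    {"variant": "S", "width": 60,  "height": 90,  "depth": 30, "shelves": 2},
    {"variant": "M", "width": 80,  "height": 120, "depth": 35, "shelves": 3},
    {"variant": "L", "width": 100, "height": 180, "depth": 40, "shelves": 5},
]

PANEL_THICKNESS = 1.8  # cm, assumed board thickness

def cut_list(row):
    """Derive every panel of one variant from its driving parameters."""
    inner_width = row["width"] - 2 * PANEL_THICKNESS
    return [
        ("side panel",  row["height"], row["depth"],  2),
        ("top/bottom",  inner_width,   row["depth"],  2),
        ("shelf",       inner_width,   row["depth"],  row["shelves"]),
        ("back panel",  row["width"],  row["height"], 1),
    ]

for row in design_table:
    print(f"Variant {row['variant']}:")
    for name, dim_a, dim_b, count in cut_list(row):
        print(f"  {count} x {name}: {dim_a:.1f} x {dim_b:.1f} cm")
```

Adding a new row to the table is all that is needed to produce another size of the same model, which is where the reduction in work time and error comes from.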
Wikipedia/Parametric_design
Tableless web design (or tableless web layout) is a web design method that avoids the use of HTML tables for page layout control purposes. Instead of HTML tables, style sheet languages such as Cascading Style Sheets (CSS) are used to arrange elements and text on a web page. == History == HTML is a markup language whose visual presentation was initially left up to the user. However, as the Internet expanded from the academic and research world into the mainstream in the mid-1990s, and became more media oriented, graphic designers sought ways to control the visual appearance of their Web pages. As popularised especially by the designer David Siegel in his book Creating Killer Web sites, tables and spacers (usually transparent single pixel GIF images with explicitly specified width, height or margins) were used to create and maintain page layouts. In the late 1990s the first reasonably powerful WYSIWYG editors arrived on the market, which meant Web designers no longer needed a technical understanding of HTML to build web pages. Such editors indirectly encouraged extensive use of nested tables to position design elements. As designers edited their documents in these editors, unnecessary code and empty elements were added to the document. Furthermore, unskilled designers were likely to use tables more than required when using a WYSIWYG editor. This practice frequently led to many tables nested within tables, as well as tables with unnecessary rows and columns. The use of graphic editors with slicing tools that output HTML and images directly also promoted poor code with tables often having many rows of 1 pixel height or width. Sometimes many more lines of code were used to render content than the actual content itself. The reliance on tables for layout purposes caused a number of problems. Many web pages were designed with tables nested within tables, resulting in large HTML documents that use more bandwidth than documents with simpler formatting. Furthermore, when a table-based layout is linearized, for example when being parsed by a screen reader or a search engine, the resulting order of the content can be somewhat jumbled and confusing. Cascading Style Sheets (CSS) were developed to improve the separation between design and content, and move back towards a semantic organization of content on the Web. The term "tableless design” implies the use of CSS rather than layout tables to position HTML elements on the page. HTML tables still have their legitimate place when presenting tabular information within web pages, and are also sometimes still used as layout devices in situations for which CSS support is poor or problematical, like vertically centering an element. Another area where tables are still used is e-mailers, because many popular Email clients have not kept up with modern HTML and CSS rendering. In such a scenario, complex e-mailers lose some of their structural and creative alignment. == Adoption == The CSS1 specification was published in December 1996 by the W3C with the aim of improving web accessibility and emphasising the separation of presentational details in style sheets from semantic content in HTML documents. CSS2 in May 1998 (later revised in CSS 2.1 and CSS 2.2) extended CSS1 with facilities for positioning and table layout. 
The preference for using HTML tables rather than CSS to control the layout of whole web pages was due to several reasons: the desire of content publishers to replicate their existing corporate design elements on their web site; the limitations at the time of CSS support in browsers; the installed base of browsers that did not support CSS; the new web designers' lack of familiarity with the CSS standards; the lack of knowledge of, or concern for the reasons (including HTML semantics and web accessibility) to use CSS instead of what was perceived as an easier way to quickly achieve the intended layouts, and a new breed of WYSIWYG web design tools that encouraged this practice. Landmarks in the adoption of CSS-based layouts include the Web Standards Project's Browser Upgrade campaign of February 2001 and the web design magazine A List Apart's simultaneous redesign, followed by the Wired redesign in 2002. The CSS Zen Garden website, launched in 2003, has been credited with popularising tableless layouts. == Rationale == The intended and semantic purpose of HTML tables lies in presenting tabular data rather than laying out pages. The benefits of using CSS for page layout include improved accessibility of the information to a wider variety of users, using a wide variety of user agents. There are bandwidth savings as large numbers of semantically meaningless <table>, <tr> and <td> tags are removed from dozens of pages leaving fewer, but more meaningful headings, paragraphs and lists. Layout instructions are transferred into site-wide CSS stylesheets, which can be downloaded once and cached for reuse while each visitor navigates the site. Sites may become more maintainable as the whole site can be restyled or re-branded in a single pass merely by altering the mark-up of the specific CSS, affecting every page which relies on that stylesheet. New HTML content can be added in such a way that consistent layout rules are immediately applied to it by the existing CSS without any further effort. == Advantages == === Accessibility === Because of the Internet's rapid growth, expanding disability discrimination legislation, and the increasing use of mobile phones and PDAs, it is necessary for Web content to be made accessible to users operating a wide variety of devices beyond the relatively uniform desktop computer and CRT monitor ecosystem the web first became popular on. Tableless Web design considerably improves Web accessibility in this respect, as tables too wide for a screen need to be scrolled sideways to be read in entirety, whereas text can wrap around. Screen readers and braille devices have fewer problems with tableless designs because they follow a logical structure. The same is true for search engine Web crawlers, the software agents that most web site publishers hope will find their pages, classify them accurately and so enable potential users to find them easily in appropriate searches. As a result of the separation of design (CSS) and structure (HTML), it is also possible to provide different layouts for different devices, e.g. handhelds, mobile phones, etc. It is also possible to specify a different style sheet for print, e.g. to hide or modify the appearance of advertisements or navigation elements that are irrelevant and a nuisance in the printable version of the page. The W3C's Web Content Accessibility Guidelines' guideline no. 3 states "use markup and style sheets and do so properly." 
The guideline's checkpoint 3.3, a priority-2 checkpoint, says "use style sheets to control layout and presentation." === Bandwidth savings === Tableless design produces web pages with fewer HTML tags used purely to position content. This normally means that the pages themselves become smaller to download. The philosophy implies that all the instructions regarding layout and positioning be moved into external style sheets. According to the basic capabilities of HTTP, as these rarely change and they apply in common to many web pages, they will be cached and reused after the first download. This further reduces bandwidth and download times across the site. === Maintainability === Maintaining a website may require frequent changes, both small and large, to the visual style of a website, depending on the purpose of the site. Under table-based layout, the layout is part of the HTML itself. As such, without the aid of template-based visual editors such as HTML editors, changing the positional layout of elements on a whole site may require a great deal of effort, depending on the amount of repetitive changes required. Even employing sed or similar global find-and-replace utilities cannot alleviate the problem entirely. In tableless layout using CSS, the layout information may reside in a CSS document. Because the layout information may be centralized, it is possible that these changes can be made quickly and globally by default. The HTML files themselves may not need to be adjusted when making layout changes. Also, because the layout information may be stored externally to the HTML, it may be quite easy to add new content in a tableless design, whether modifying an existing page or adding a new page. By contrast, without such a design, the layout for each page may require a more time-consuming manual changing of each instance or use of global find-and-replace utilities. However site owners often want particular pages to be different from others on the site either for a short period or long term. This will often necessitate a separate style sheet to be developed for that page. The page (or template) content usually can remain unaltered however, which is not the case in a tables-based design. == See also == Framing (World Wide Web) Responsive web design Web literacy (design and accessibility) Holy Grail (web design) == References == == External links == W3C Tableless layout HOWTO 13 Reasons Why CSS Is Superior to Tables in Website Design Open Designs (A collection of W3C-compliant tableless web templates)
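To make the bandwidth and caching argument from the sections above concrete, here is a rough back-of-the-envelope Python sketch. The two markup snippets and the page count are hypothetical and purely illustrative; they are not measurements from any real site.

```python
# Hypothetical markup for the same two-column page, once with a layout table
# and once with semantic elements positioned by a shared external stylesheet.
table_layout = """<table width="100%" cellpadding="0" cellspacing="0">
  <tr>
    <td width="20%" valign="top">nav links ...</td>
    <td width="80%" valign="top">article text ...</td>
  </tr>
</table>"""

tableless_markup = """<nav>nav links ...</nav>
<main>article text ...</main>"""

# Layout rules live in a site-wide stylesheet, downloaded once and then cached.
shared_css = """nav  { float: left; width: 20%; }
main { margin-left: 20%; }"""

pages = 50  # hypothetical number of pages one visitor views on the site
table_total = pages * len(table_layout.encode())
tableless_total = pages * len(tableless_markup.encode()) + len(shared_css.encode())
print(f"table-based markup sent over {pages} pages:      {table_total} bytes")
print(f"tableless markup + one cached stylesheet: {tableless_total} bytes")
```

The absolute numbers are meaningless, but the structure of the calculation mirrors the rationale above: layout markup repeated on every page is paid for on every request, while a cached stylesheet is paid for roughly once.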
Wikipedia/Tableless_web_design
Icon design is the process of designing graphic symbols to represent physical objects (pictograms) and abstract concepts (ideograms). In the context of software applications, an icon often represents a program, an action, or data on a computer. == Usage and process == Though the design of icons has existed as long as pictograms and ideograms have, modern icon design primarily exists in maps, public infrastructure like wayfinding, and user interfaces for video games, computers, and mobile devices. Physical venues and events make use of either existing symbols from governments (such as the DOT pictograms) or custom icon designs. Custom icons are most visible as application icons, favicons, and user interface toolbar icons on computers and mobile devices. Modern app icons are typically authored at 1024×1024 pixels or larger; however, icon design involves creating artwork at various sizes for legibility. At smaller sizes, designers often eliminate or reduce unnecessary details while exaggerating important details. Especially for lower-density displays, icons are hinted at various sizes similar to digital type design, by aligning shapes to pixel boundaries so as to ensure visual clarity. Icons may also need to be altered for different display modes, such as dark mode. The design of icon sets includes consideration of shared elements, such as a color palette, perspective, and style. The process of icon design includes defining a metaphor, drawing an illustration, creating any necessary alterations for various sizes, and occasionally assembling files into a folder, ICO file, or ICNS file. Vector icons in apps and websites are usually SVG files. Due to their high visibility and relation to logo design and branding, new app icons are frequently criticized. == Notable icon designers == Masaru Katsumi - 1964 Summer Olympics in Tokyo Lance Wyman - 1968 Summer Olympics in Mexico City, National Zoo Rajie Cook & Don Shanosky - USDOT pictograms Otl Aicher - 1972 Summer Olympics in Munich Susan Kare - Classic Mac OS, Facebook gifts Jon Hicks - Firefox, Skype emoticons, Icon Handbook The Iconfactory - Windows XP, Windows Vista, Twitter emoji == See also == Icon (computing) § Icon creation Skeuomorph == References == == External links == iOS Human Interface Guidelines — App Icon macOS Human Interface Guidelines — Designing App Icons Microsoft Design Language — Icons Microsoft Icon guidelines for UWP apps Microsoft guidelines on designing Windows Aero Icons Microsoft guidelines on designing Windows XP icons Android guidelines on icon design
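As a small illustration of the multi-size assembly step described in the Usage and process section above, here is a sketch using the Python imaging library Pillow. The file names are hypothetical, and real icon pipelines usually involve per-size redrawing and pixel hinting rather than the simple resampling shown here.

```python
from PIL import Image

# Hypothetical master artwork, e.g. exported from a vector drawing at 1024x1024.
master = Image.open("app_icon_1024.png").convert("RGBA")

# Sizes commonly needed for legibility at different scales; in practice a
# designer may redraw the smallest sizes by hand instead of just downscaling.
sizes = [(16, 16), (32, 32), (48, 48), (128, 128), (256, 256)]

# Export individual PNGs (useful for manual, per-size touch-up) ...
for w, h in sizes:
    master.resize((w, h), Image.LANCZOS).save(f"app_icon_{w}x{h}.png")

# ... and bundle all of the sizes into a single Windows ICO container.
master.save("app_icon.ico", sizes=sizes)
```

An ICNS bundle for macOS or an Android adaptive icon would follow the same idea with different tooling.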
Wikipedia/Icon_design
Data Design System AS (DDS) supplies the construction industry with software tools for building information modelling (BIM). The company was founded in 1984 in Stavanger, Norway. In 2021, the company merged into Graphisoft, part of the Nemetschek Group. DDS is an active member of buildingSMART. DDS has its headquarters in Stavanger, Norway. Other locations include Oslo and Bergen (both in Norway). DDS has several subsidiaries, among them DDS Building Innovation AS and Data Design System GmbH. The main product line is tools for building services/MEP (mechanical, electrical, plumbing) engineers. The company distributes DDScad MEP, mainly in continental Europe, from its office in Ascheberg, Germany. The company also develops software tools for the design and production of timber-frame buildings, DDScad Architect & Construction, from its office in Stavanger. == See also == Comparison of CAD editors for AEC Comparison of CAD, CAM and CAE file viewers == References == == External links == Official website
Wikipedia/Data_Design_System
Participatory design (originally co-operative design, now often co-design) is an approach to design attempting to actively involve all stakeholders (e.g. employees, partners, customers, citizens, end users) in the design process to help ensure the result meets their needs and is usable. Participatory design is an approach which is focused on processes and procedures of design and is not a design style. The term is used in a variety of fields e.g. software design, urban design, architecture, landscape architecture, product design, sustainability, graphic design, industrial design, planning, and health services development as a way of creating environments that are more responsive and appropriate to their inhabitants' and users' cultural, emotional, spiritual and practical needs. It is also one approach to placemaking. Recent research suggests that designers create more innovative concepts and ideas when working within a co-design environment with others than they do when creating ideas on their own. Companies increasingly rely on their user communities to generate new product ideas, marketing them as "user-designed" products to the wider consumer market; consumers who are not actively participating but observe this user-driven approach show a preference for products from such firms over those driven by designers. This preference is attributed to an enhanced identification with firms adopting a user-driven philosophy, consumers experiencing empowerment by being indirectly involved in the design process, leading to a preference for the firm's products. If consumers feel dissimilar to participating users, especially in demographics or expertise, the effects are weakened. Additionally, if a user-driven firm is only selectively open to user participation, rather than fully inclusive, observing consumers may not feel socially included, attenuating the identified preference. Participatory design has been used in many settings and at various scales. For some, this approach has a political dimension of user empowerment and democratization. This inclusion of external parties in the design process does not excuse designers of their responsibilities. In their article "Participatory Design and Prototyping", Wendy Mackay and Michel Beaudouin-Lafon support this point by stating that "[a] common misconception about participatory design is that designers are expected to abdicate their responsibilities as designers and leave the design to users. This is never the case: designers must always consider what users can and cannot contribute." In several Scandinavian countries, during the 1960s and 1970s, participatory design was rooted in work with trade unions; its ancestry also includes action research and sociotechnical design. == Definition == In participatory design, participants (putative, potential or future) are invited to cooperate with designers, researchers and developers during an innovation process. Co-design requires the end user's participation: not only in decision making but also in idea generation. Potentially, they participate during several stages of an innovation process: they participate during the initial exploration and problem definition both to help define the problem and to focus ideas for solution, and during development, they help evaluate proposed solutions. 
Maarten Pieters and Stefanie Jansen describe co-design as part of a complete co-creation process, which refers to the "transparent process of value creation in ongoing, productive collaboration with, and supported by all relevant parties, with end-users playing a central role" and covers all stages of a development process. === Differing terms === In "Co-designing for Society", Deborah Szebeko and Lauren Tan list various precursors of co-design, starting with the Scandinavian participatory design movement, and then state "Co-design differs from some of these areas as it includes all stakeholders of an issue not just the users, throughout the entire process from research to implementation." In contrast, Elizabeth Sanders and Pieter Stappers state that "the terminology used until the recent obsession with what is now called co-creation/co-design" was "participatory design". They also discuss the differences between co-design and co-creation and how they are "often confused and/or treated synonymously with one another". In their words, "Co-creation is a very broad term with applications ranging from the physical to the metaphysical and from the material to the spiritual", while seeing "co-design [as] a specific instance of co-creation". Pulling from the idea of what co-creation is, the definition of co-design in the context of their paper developed into "the creativity of designers and people not trained in design working together in the design development process". Another term brought up in this article is front end design, which was formerly known as pre-design. "The goal of the explorations in the front end is to determine what is to be designed and sometimes what should not be designed and manufactured"; the front end also provides a space for the initial stages of co-design to take place. An alternate definition of co-design has been brought up by Maria Gabriela Sanchez and Lois Frankel. They proposed that "Co-design may be considered, for the purpose of this study, as an interdisciplinary process that involves designers and non-designers in the development of design solutions" and that "the success of the interdisciplinary process depends on the participation of all the stakeholders in the project". "Co-design is a perfect example of interdisciplinary work, where designer, researcher, and user work collaboratively in order to reach a common goal. The concept of interdisciplinarity, however, becomes broader in this context where it not only results from the union of different academic disciplines, but from the combination of different perspectives on a problem or topic." === Fourth Order Design === Similarly, another perspective comes from Golsby-Smith's "Fourth Order Design" which outlines a design process in which end-user participation is required and favours individual process over outcome. Buchanan's definition of culture as a verb is a key part of Golsby-Smith's argument in favour of fourth order design. In Buchanan's words, "Culture is not a state, expressed in an ideology or a body of doctrines. It is an activity. Culture is the activity of ordering, disordering and reordering in the search for understanding and for values which guide action." Therefore, to design for the fourth order, one must design within the widest scope. The system is discussion, and the focus falls onto process rather than outcome.
The idea that culture and people are an integral part of participatory design is supported by the idea that a "key feature of the field is that it involves people or communities: it is not merely a mental place or a series of processes". "Just as a product is not only a thing, but exists within a series of connected processes, so these processes do not live in a vacuum, but move through a field of less tangible factors such as values, beliefs and the wider context of other contingent processes." === Different dimensions === As described by Sanders and Stappers, one could position co-design as a form of human-centered design across two different dimensions. One dimension is the emphasis on research or design, another dimension is how much people are involved. Therefore, there are many forms of co-design, with different degrees of emphasis on research or design and different degrees of stakeholder involvement. For instance, one of the forms of co-design which involves stakeholders strongly early at the front end design process in the creative activities is generative co-design. Generative co-design is increasingly being used to involve different stakeholders as patient, care professionals and designers actively in the creative making process to develop health services. Another dimension to consider is that of the crossover between design research and education. An example of this is a study that was completed at the Middle East Technical University in Turkey, the purpose of which was to look into the use of “team development [in] enhancing interdisciplinary collaboration between design and engineering students using design thinking”. The students in this study were tasked with completing a group project and reporting on the experience of working together. One of the main takeaways was that "Interdisciplinary collaboration is an effective way to address complex problems with creative solutions. However, a successful collaboration requires teams first to get ready to work in harmony towards a shared goal and to appreciate interdisciplinarity" == History == From the 1960s onward there was a growing demand for greater consideration of community opinions in major decision-making. In Australia many people believed that they were not being planned 'for' but planned 'at'. (Nichols 2009). A lack of consultation made the planning system seem paternalistic and without proper consideration of how changes to the built environment affected its primary users. In Britain "the idea that the public should participate was first raised in 1965." However the level of participation is an important issue. At a minimum public workshops and hearings have now been included in almost every planning endeavour. Yet this level of consultation can simply mean information about change without detailed participation. Involvement that 'recognises an active part in plan making' has not always been straightforward to achieve. Participatory design has attempted to create a platform for active participation in the design process, for end users. === History in Scandinavia === Participatory design was actually born in Scandinavia and called cooperative design. However, when the methods were presented to the US community 'cooperation' was a word that didn't resonate with the strong separation between workers and managers - they weren't supposed to discuss ways of working face-to-face. 
Hence, 'participatory' was instead used as the initial Participatory Design sessions weren't a direct cooperation between workers and managers, sitting in the same room discussing how to improve their work environment and tools, but there were separate sessions for workers and managers. Each group was participating in the process, not directly cooperating. (in historical review of Cooperative Design, at a Scandinavian conference). In Scandinavia, research projects on user participation in systems development date back to the 1970s. The so-called "collective resource approach" developed strategies and techniques for workers to influence the design and use of computer applications at the workplace: The Norwegian Iron and Metal Workers Union (NJMF) project took a first move from traditional research to working with people, directly changing the role of the union clubs in the project. The Scandinavian projects developed an action research approach, emphasizing active co-operation between researchers and workers of the organization to help improve the latter's work situation. While researchers got their results, the people whom they worked with were equally entitled to get something out of the project. The approach built on people's own experiences, providing for them resources to be able to act in their current situation. The view of organizations as fundamentally harmonious—according to which conflicts in an organization are regarded as pseudo-conflicts or "problems" dissolved by good analysis and increased communication—was rejected in favor of a view of organizations recognizing fundamental "un-dissolvable" conflicts in organizations (Ehn & Sandberg, 1979). In the Utopia project (Bødker et al., 1987, Ehn, 1988), the major achievements were the experience-based design methods, developed through the focus on hands-on experiences, emphasizing the need for technical and organizational alternatives (Bødker et al., 1987). The parallel Florence project (Gro Bjerkness & Tone Bratteteig) started a long line of Scandinavian research projects in the health sector. In particular, it worked with nurses and developed approaches for nurses to get a voice in the development of work and IT in hospitals. The Florence project put gender on the agenda with its starting point in a highly gendered work environment. The 1990s led to a number of projects including the AT project (Bødker et al., 1993) and the EureCoop/EuroCode projects (Grønbæk, Kyng & Mogensen, 1995). In recent years, it has been a major challenge to participatory design to embrace the fact that much technology development no longer happens as design of isolated systems in well-defined communities of work (Beck, 2002). At the dawn of the 21st century, we use technology at work, at home, in school, and while on the move. == Co-design == As mentioned above, one definition of co-design states that it is the process of working with one or more non-designers throughout the design process. This method is focused on the insights, experiences and input from end-users on a product or service, with the aim to develop strategies for improvement. It is often used by trained designers who recognize the difficulty in properly understanding the cultural, societal, or usage scenarios encountered by their user. C. K. Prahalad and Venkat Ramaswamy are usually given credit for bringing co-creation/co-design to the minds of those in the business community with the 2004 publication of their book, The Future of Competition: Co-Creating Unique Value with Customers. 
They propose: The meaning of value and the process of value creation are rapidly shifting from a product and firm-centric view to personalized consumer experiences. Informed, networked, empowered and active consumers are increasingly co-creating value with the firm. The phrase co-design is also used in reference to the simultaneous development of interrelated software and hardware systems. The term co-design has become popular in mobile phone development, where the two perspectives of hardware and software design are brought into a co-design process. Results directly related to integrating co-design into existing frameworks is "researchers and practitioners have seen that co-creation practiced at the early front end of the design development process can have an impact with positive, long-range consequences." === New role of the designer under co-design === Co-design is an attempt to define a new evolution of the design process and with that, there is an evolution of the designer. Within the co-design process, the designer is required to shift their role from one of expertise to one of an egalitarian mindset. The designer must believe that all people are capable of creativity and problem solving. The designer no longer exists from the isolated roles of researcher and creator, but now must shift to roles such as philosopher and facilitator. This shift allows for the designer to position themselves and their designs within the context of the world around them creating better awareness. This awareness is important because in the designer's attempt to answer a question, "[they] must address all other related questions about values, perceptions, and worldview". Therefore, by shifting the role of the designer not only do the designs better address their cultural context yet so do the discussions around them. == Discourses == Discourses in the PD literature have been sculpted by three main concerns: (1) the politics of design, (2) the nature of participation, and (3) methods, tools and techniques for carrying out design projects (Finn Kensing & Jeanette Blomberg, 1998, p. 168). === Politics of design === The politics of design have been the concern for many design researchers and practitioners. Kensing and Blomberg illustrate the main concerns which related to the introduction of new frameworks such as system design which related to the introduction of computer-based systems and power dynamics that emerge within the workspace. The automation introduced by system design has created concerns within unions and workers as it threatened their involvement in production and their ownership over their work situation. Asaro (2000) offers a detailed analysis of the politics of design and the inclusion of "users" in the design process. === Nature of participation === Major international organizations such as Project for Public Spaces create opportunities for rigorous participation in the design and creation of place, believing that it is the essential ingredient for successful environments. Rather than simply consulting the public, PPS creates a platform for the community to participate and co-design new areas, which reflect their intimate knowledge. Providing insights, which independent design professionals such as architects or even local government planners may not have. Using a method called Place Performance Evaluation or (Place Game), groups from the community are taken on the site of proposed development, where they use their knowledge to develop design strategies, which would benefit the community. 
"Whether the participants are schoolchildren or professionals, the exercise produces dramatic results because it relies on the expertise of people who use the place every day, or who are the potential users of the place." This successfully engages with the ultimate idea of participatory design, where various stakeholders who will be the users of the end product, are involved in the design process as a collective. Similar projects have had success in Melbourne, Australia particularly in relation to contested sites, where design solutions are often harder to establish. The Talbot Reserve in the suburb of St. Kilda faced numerous problems of use, such as becoming a regular spot for sex workers and drug users to congregate. A Design In, which incorporated a variety of key users in the community about what they wanted for the future of the reserve allowed traditionally marginalised voices to participate in the design process. Participants described it as 'a transforming experience as they saw the world through different eyes.' (Press, 2003, p. 62). This is perhaps the key attribute of participatory design, a process which, allows multiple voices to be heard and involved in the design, resulting in outcomes which suite a wider range of users. It builds empathy within the system and users where it is implemented, which makes solving larger problems more holistically. As planning affects everyone it is believed that "those whose livelihoods, environments and lives are at stake should be involved in the decisions which affect them" (Sarkissian and Perglut, 1986, p. 3). C. West Churchman said systems thinking "begins when first you view the world through the eyes of another". === In the built environment === Participatory design has many applications in development and changes to the built environment. It has particular currency to planners and architects, in relation to placemaking and community regeneration projects. It potentially offers a far more democratic approach to the design process as it involves more than one stakeholder. By incorporating a variety of views there is greater opportunity for successful outcomes. Many universities and major institutions are beginning to recognise its importance. The UN, Global studio involved students from Columbia University, University of Sydney and Sapienza University of Rome to provide design solutions for Vancouver's downtown eastside, which suffered from drug- and alcohol-related problems. The process allowed cross-discipline participation from planners, architects and industrial designers, which focused on collaboration and the sharing of ideas and stories, as opposed to rigid and singular design outcomes. (Kuiper, 2007, p. 52) ==== Public interest design ==== Public interest design is a design movement, extending to architecture, with the main aim of structuring design around the needs of the community. At the core of its application is participatory design. Through allowing individuals to have a say in the process of design of their own surrounding built environment, design can become proactive and tailored towards addressing wider social issues facing that community. Public interest design is meant to reshape conventional modern architectural practice. Instead of having each construction project solely meet the needs of the individual, public interest design addresses wider social issues at their core. This shift in architectural practice is a structural and systemic one, allowing design to serve communities responsibly. 
Social issues can be addressed in a long-term manner through such design, which serves the public and involves it directly in the process through participatory design. The built environment can become the very reason for social and community issues to arise if not executed properly and responsibly. Conventional architectural practice often does cause such problems since only the paying client has a say in the design process. That is why many architects throughout the world are employing participatory design and practicing their profession more responsibly, encouraging a wider shift in architectural practice. Several architects have largely succeeded in disproving theories that deem public interest design and participatory design financially and organizationally not feasible. Their work is setting the stage for the expansion of this movement, providing valuable data on its effectiveness and the ways in which it can be carried out. == Difficulties of adoption and involvement == Participatory Design is a growing practice within the field of design but has not yet been widely implemented. Some barriers to the adoption of participatory design are listed below. === Doubt of universal creativity === A belief that creativity is a restricted skill would invalidate the proposal of participatory design to allow a wider reach of affected people to participate in the creative process of designing. However, this belief is based on a limited view of creativity which does not recognize that creativity can manifest in a wide range of activities and experiences. This doubt can be damaging not only to individuals but also to society as a whole. By assuming that only a select few possess creative talent, we may overlook the unique perspectives, ideas, and solutions that others can offer. === Lack of technology in software based co-op design === Collaborative design software often assumes that all users have an equal command of the technology being used. For example, a collaborative 3D design program may let several people model at the same time, yet offer no support for guided help, such as letting one participant direct another through on-screen markings and annotations rather than verbal explanation. The same gap exists in collaborative programming: tools allow multiple people to edit code simultaneously, but typically lack built-in ways to give guided help, such as inline hints, suggested edits, or the ability to mark the relevant parts of the screen for another user. This is a problem in pair programming, where communication becomes a bottleneck; ideally, one participant should be able to mark, configure and guide a partner who lacks knowledge of the tool, directly within it. === Self-serving hierarchies === In a profit-motivated system, the commercial field of design may feel fearful of relinquishing some control in order to empower those who are typically not involved in the process of design. Commercial organizational structures often prioritize profit, individual gain, or status over the well-being of the community or other externalities. However, participatory practices are not impossible to implement in commercial settings. It may be difficult for those who have acquired success in a hierarchical structure to imagine alternative systems of open collaboration. === Lack of investment === Although participatory design has been of interest in design academia, applied uses require funding and dedication from many individuals. The high time and financial costs make research and development of participatory design less appealing for speculative investors.
It also may be difficult to find or convince enough shareholders or community members to commit their time and effort to a project. However, widespread and involved participation is critical to the process. Successful examples of participatory design are critical because they demonstrate the benefits of this approach and inspire others to adopt it. A lack of funding or interest can cause participatory projects to revert to practices where the designer initiates and dominates rather than facilitating design by the community. === Differing priorities between designers and participants === Participatory design projects which involve a professional designer as a facilitator to a larger group can have difficulty with competing objectives. Designers may prioritize aesthetics while end-users may prioritize functionality and affordability. Addressing these differing priorities may involve finding creative solutions that balance the needs of all stakeholders, such as using low-cost materials that meet functional requirements while also being aesthetically pleasing. Despite any potential predetermined assumptions, "the users’ knowledge has to be considered as important as the knowledge of the other professionals in the team, [as this] can be an obstacle to the co-design practice." "[The future of] co-designing will be a close collaboration between all the stakeholders in the design development process together with a variety of professionals having hybrid design/research skills." === Emotional and ethical dimensions === Recent scholarship has highlighted the complex emotional landscape navigated by researchers engaged in participatory design, especially in contexts involving vulnerable or marginalized communities. Emotional challenges such as guilt and shame often emerge as researchers confront the disparity between their professional objectives and the lived realities of the communities they engage with. These emotions may stem from unmet expectations, perceived exploitation, or limited project impact. For instance, researchers may experience a sense of guilt when project outcomes fail to meet community needs or when research goals appear to benefit academic careers more than the communities themselves. The ethical dilemmas associated with balancing research agendas, funding constraints, and community needs can create a conflict between professional obligations and personal commitments, potentially leading to emotional burnout or moral distress. Consequently, there is a growing call within the field for frameworks that address these emotional aspects, advocate for ethical reflexivity, and promote sustained engagement strategies that align more closely with community well-being and autonomy. This perspective broadens the traditional scope of participatory design by acknowledging the emotional toll on researchers, thereby emphasizing the need for supportive structures that account for these emotional and ethical intricacies. == From Community Consultation to Community Design == Many local governments require community consultation in any major changes to the built environment. Community involvement in the planning process is almost a standard requirement in most strategic changes. Community involvement in local decision making creates a sense of empowerment. The City of Melbourne Swanston Street redevelopment project received over 5000 responses from the public allowing them to participate in the design process by commenting on seven different design options. 
The City of Yarra recently held a "Stories in the Street" consultation to record people's ideas about the future of Smith Street. It offered participants a variety of mediums to explore their opinions such as mapping, photo surveys and storytelling. Although local councils are taking positive steps towards participatory design as opposed to traditional top-down approaches to planning, many communities are moving to take design into their own hands. Portland, Oregon's City Repair Project is a form of participatory design, which involves the community co-designing problem areas together to make positive changes to their environment. It involves collaborative decision-making and design without traditional involvement from local government or professionals but instead runs on volunteers from the community. The process has created successful projects such as intersection repair, which saw a misused intersection develop into a successful community square. In Malawi, a UNICEF WASH programme trialled participatory design development for latrines in order to ensure that users participate in creating and selecting sanitation technologies that are appropriate and affordable for them. The process provided an opportunity for community members to share their traditional knowledge and skills in partnership with designers and researchers. Peer-to-peer urbanism is a form of decentralized, participatory design for urban environments and individual buildings. It borrows organizational ideas from the open-source software movement, so that knowledge about construction methods and urban design schemes is freely exchanged. === In software development === In the English-speaking world, the term has a particular currency in the world of software development, especially in circles connected to Computer Professionals for Social Responsibility (CPSR), who have put on a series of Participatory Design Conferences. It overlaps with the approach extreme programming takes to user involvement in design, but (possibly because of its European trade union origins) the Participatory Design tradition puts more emphasis on the involvement of a broad population of users rather than a small number of user representatives. Participatory design can be seen as a move of end-users into the world of researchers and developers, whereas empathic design can be seen as a move of researchers and developers into the world of end-users. There is a very significant differentiation between user-design and user-centered design in that there is an emancipatory theoretical foundation, and a systems theory bedrock (Ivanov, 1972, 1995), on which user-design is founded. Indeed, user-centered design is a useful and important construct, but one that suggests that users are taken as centers in the design process, consulting with users heavily, but not allowing users to make the decisions, nor empowering users with the tools that the experts use. For example, Wikipedia content is user-designed. Users are given the necessary tools to make their own entries. Wikipedia's underlying wiki software is based on user-centered design: while users are allowed to propose changes or have input on the design, a smaller and more specialized group decide about features and system design. Participatory work in software development has historically tended toward two distinct trajectories, one in Scandinavia and northern Europe, and the other in North America.
The Scandinavian and northern European tradition has remained closer to its roots in the labor movement (e.g., Beck, 2002; Bjerknes, Ehn, and Kyng, 1987). The North American and Pacific rim tradition has tended to be both broader (e.g., including managers and executives as "stakeholders" in design) and more circumscribed (e.g., design of individual features as contrasted with the Scandinavian approach to the design of entire systems and design of the work that the system is supposed to support) (e.g., Beyer and Holtzblatt, 1998; Noro and Imada, 1991). However, some more recent work has tended to combine the two approaches (Bødker et al., 2004; Muller, 2007). == Research methodology == Increasingly researchers are focusing on co-design as a way of doing research, and therefore are developing parts of its research methodology. For instance, in the field of generative co-design Vandekerckhove et al. have proposed a methodology to assemble a group of stakeholders to participate in generative co-design activities in the early innovation process. They propose first to sample a group of potential stakeholders through snowball sampling, afterwards interview these people and assess their knowledge and inference experience, lastly they propose to assemble a diverse group of stakeholders according to their knowledge and inference experience. Though not completely synonymous, research methods of Participatory Design can be defined under Participatory Research (PR): a term for research designs and frameworks using direct collaboration with those affected by the studied issue. More specifically, Participatory Design has evolved from Community-Based Research and Participatory Action Research (PAR). PAR is a qualitative research methodology involving: "three types of change, including critical consciousness development of researchers and participants, improvement of lives of those participating in research, and transformation of societal 'decolonizing' research methods with the power of healing and social justice". Participatory Action Research (PAR) is a subset of Community-Based Research aimed explicitly at including participants and empowering people to create measurable action. PAR practices across various disciplines, with research in Participatory Design being an application of its different qualitative methodologies. Just as PAR is often used in social sciences, for example, to investigate a person's lived experience concerning systemic structures and social power relations, Participatory Design seeks to deeply understand stakeholders' experiences by directly engaging them in the problem-defining and solving processes. Therefore, in Participatory Design, research methods extend beyond simple qualitative and quantitative data collection. Rather than being concentrated within data collection, research methods of Participatory Design are tools and techniques used throughout co-designing research questions, collecting, analyzing, and interpreting data, knowledge dissemination, and enacting change. When facilitating research in Participatory Design, decisions are made in all research phases to assess what will produce genuine stakeholder participation. By doing so, one of Participatory Design's goals is to dismantle the power imbalance existing between 'designers' and 'users.' Applying PR and PAR research methods seeks to engage communities and question power hierarchies, which "makes us aware of the always contingent character of our presumptions and truths... 
truths are logical, contingent and intersubjective... not directed toward some specific and predetermined end goal... committed to denying us the (seeming) firmness of our commonsensical assumptions". Participatory design offers this denial of our "commonsensical assumptions" because it forces designers to consider knowledge beyond their craft and education. Therefore, a designer conducting research for Participatory Design assumes the role of facilitator and co-creator. == See also == Co-creation Computer-supported cooperative work Design thinking Participatory action research Permaculture Public participation Service design User innovation User participation in architecture (N.J. Habraken, Giancarlo De Carlo, and Structuralists such as Aldo van Eyck) == Notes == == References ==
Wikipedia/Participatory_design
.design is a generic top-level domain name in the Domain Name System of the Internet. It was proposed in ICANN's new generic top-level domain (gTLD) program, and became available to the general public on May 12, 2015. Top Level Design was the domain name registry for the string until April 2021, when it was transferred to GoDaddy Registry. == History == In September 2014, Portland, Oregon-based Top Level Design (TLD) won the right to operate the .design top-level domain after beating out six other applicants in a private auction. According to TLD's CEO Ray King, winning the auction was "very important" and one of the company's top priorities, evidenced by its name. He told Domain Name Wire, "Think of all the things that require design. Design permeates all aspects of culture.".design domain registrations became available to the general public on May 12, 2015. According to The Domains, more than 5,200 .design domains were registered on the first day of general availability. CentralNic provides backend services through an exclusive distribution agreement and shares in the global revenues from .design domain names. Ben Crawford, CentralNic's CEO, said of the top-level domain, "It has impressive commercial potential, and it will be adopted more quickly than many other TLDs as it caters, among many other groups, to one of the best-informed professions on new Internet developments – website designers". Ahead of .design's launch, King said of the gTLD: Design has always been fundamental to the way people, companies and products present themselves, and since the elegance of the Apple revolution there has been a conscious embrace of how design is a lot more than just how a product or service looks, but a fundamental carrier of one's message. Beyond that, design is what I call a 'horizontal-vertical', in that it is a broad term, but not generic like .web or .online. It touches on many clear vertical markets such as graphic design, interior design, lighting design, web design, fashion design and many more. These are all distinct markets whose survival depends on them being ahead of the curve and defining future trends — .design does just that. == See also == List of Internet top-level domains .wiki, another top-level domain operated by Top Level Design == References ==
Wikipedia/.design
Industrial computed tomography (CT) scanning is any computer-aided tomographic process, usually X-ray computed tomography, that uses irradiation to produce three-dimensional internal and external representations of a scanned object. Industrial CT scanning has been used in many areas of industry for internal inspection of components. Some of the key uses for industrial CT scanning have been flaw detection, failure analysis, metrology, assembly analysis and reverse engineering applications. Just as in medical imaging, industrial imaging includes both nontomographic radiography (industrial radiography) and computed tomographic radiography (computed tomography). == Types of scanners == Line beam scanning is the traditional process of industrial CT scanning. X-rays are produced and the beam is collimated to create a line. The X-ray line beam is then translated across the part and data is collected by the detector. The data is then reconstructed to create a 3-D volume rendering of the part. In cone beam scanning, the part to be scanned is placed on a rotary table. As the part rotates, the cone of X-rays produce a large number of 2D images that are collected by the detector. The 2D images are then processed to create a 3D volume rendering of the external and internal geometries of the part. == History == Industrial CT scanning technology was introduced in 1972 with the invention of the CT scanner for medical imaging by Godfrey Hounsfield. The invention earned him a Nobel Prize in medicine, which he shared with Allan McLeod Cormack. Many advances in CT scanning have allowed for its use in the industrial field for metrology in addition to the visual inspection primarily used in the medical field (medical CT scan). == Analysis and inspection techniques == Various inspection uses and techniques include part-to-CAD comparisons, part-to-part comparisons, assembly and defect analysis, void analysis, wall thickness analysis, and generation of CAD data. The CAD data can be used for reverse engineering, geometric dimensioning and tolerance analysis, and production part approval. === Assembly === One of the most recognized forms of analysis using CT is for assembly, or visual analysis. CT scanning provides views inside components in their functioning position, without disassembly. Some software programs for industrial CT scanning allow for measurements to be taken from the CT dataset volume rendering. These measurements are useful for determining the clearances between assembled parts or the dimension of an individual feature. === Void, crack and defect detection === Traditionally, determining defects, voids and cracks within an object would require destructive testing. CT scanning can detect internal features and flaws displaying this information in 3D without destroying the part. Industrial CT scanning (3D X-ray) is used to detect flaws inside a part such as porosity, an inclusion, or a crack. It has been also used to detect the origin and propagation of damages in concrete. Metal casting and moulded plastic components are typically prone to porosity because of cooling processes, transitions between thick and thin walls, and material properties. Void analysis can be used to locate, measure, and analyze voids inside plastic or metal components. 
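As a rough illustration of the kind of void (porosity) analysis described above, here is a minimal Python sketch using NumPy and SciPy on a synthetic voxel volume. It is not based on any particular commercial CT software; the volume, threshold and dimensions are arbitrary stand-ins for a real reconstructed data set.

```python
import numpy as np
from scipy import ndimage

# Synthetic reconstructed volume: a solid block (grey value ~1.0) containing
# two spherical voids (grey value ~0.0), standing in for a real CT scan.
shape = (80, 80, 80)
volume = np.ones(shape)
zz, yy, xx = np.indices(shape)
for center, radius in [((30, 40, 40), 6), ((55, 25, 50), 4)]:
    dist = np.sqrt((zz - center[0])**2 + (yy - center[1])**2 + (xx - center[2])**2)
    volume[dist < radius] = 0.0

# Segment voids by thresholding the grey values (arbitrary cut-off of 0.5),
# then label connected regions so each void can be counted and sized.
voids = volume < 0.5
labels, n_voids = ndimage.label(voids)
void_sizes = ndimage.sum(voids, labels, index=range(1, n_voids + 1))

porosity = voids.sum() / voids.size * 100
print(f"detected voids: {n_voids}")
print(f"void volumes (voxels): {void_sizes}")
print(f"porosity: {porosity:.2f} %")
```

Real CT workflows add calibration, noise filtering and surface determination before this step, but the threshold-and-label idea is the core of basic void analysis.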
=== Geometric dimensioning and tolerancing analysis === Traditionally, without destructive testing, full metrology has only been performed on the exterior dimensions of components, such as with a coordinate-measuring machine (CMM) or with a vision system to map exterior surfaces. Internal inspection methods would require using a 2D X-ray of the component or the use of destructive testing. Industrial CT scanning allows for full non-destructive metrology. With unlimited geometrical complexity, 3D printing allows complex internal features to be created with no impact on cost; such features are not accessible using a traditional CMM. The first 3D-printed artefact optimised for the characterisation of form using computed tomography (CT) has been produced for this purpose. === Image-based finite element methods === The image-based finite element method converts 3D image data from X-ray computed tomography directly into meshes for finite element analysis. Benefits of this method include modelling complex geometries (e.g. composite materials) and accurately modelling "as manufactured" components at the micro-scale. == Trends and developments == Market forecasts place the size of the industrial computed tomography market between USD 773.45 million and USD 1,116.5 million by 2029–2030. Regional trends show that strong market growth is expected, particularly in the Asia-Pacific region, but also in North America and Europe, due to strict safety regulations and preventive maintenance of industrial equipment. Growth is being driven primarily by the ongoing development of CT devices and services that enable precise and non-destructive testing of components. Innovations such as the use of artificial intelligence for automated fault analyses and the development of mobile CT systems are expanding the possibilities. == Developments for forensic science == Computed tomography (CT) has become an increasingly valuable tool in forensic science, particularly in conducting virtual autopsies. Unlike traditional autopsies, which require invasive procedures, CT scans allow for non-invasive internal examinations of the body, producing detailed 3D images of bones, organs, and soft tissues. This technology is especially useful for detecting fractures, foreign objects (such as bullets or shrapnel), gas embolisms, and signs of trauma that may not be immediately visible externally. CT scans can preserve forensic evidence more effectively and are particularly beneficial in cases involving mass disasters, decomposition, or cultural and religious objections to dissection. Furthermore, digital imaging from CT can be stored and reviewed multiple times, aiding both legal investigations and educational purposes. Overall, CT has enhanced the accuracy, efficiency, and accessibility of post-mortem examinations in forensic contexts. === List of uses of CT scanning in forensic science ===
Virtual Autopsies (Virtopsies): Non-invasive internal examinations of deceased individuals.
Detection of Fractures: Identification of skull, rib, and other skeletal fractures, especially those not visible externally.
Visualization of Foreign Objects: Location and analysis of bullets, shrapnel, or other embedded materials.
Assessment of Trauma: Differentiation between antemortem (before death) and postmortem (after death) injuries.
Analysis of Gas Embolisms: Identification of air or gas in blood vessels, which may indicate drowning, decompression sickness, or medical malpractice.
Age Estimation: Evaluation of skeletal maturity and dental development in unidentified remains.
Facial Reconstruction Support: High-resolution skull imaging for reconstructing a face digitally.
Identification of Pathologies: Detection of diseases, infections, or chronic conditions that may relate to cause of death.
Documentation and Archiving: Permanent, revisitable digital records of body condition and evidence.
Comparison with Antemortem Data: Matching postmortem CT scans with medical imaging from a person's lifetime for identification.
Explosion and Blast Injury Analysis: Assessment of internal damage patterns caused by high-pressure events.
Burn Victim Analysis: Examination of internal structures in severely burned bodies when traditional autopsy is limited.
Decomposition Studies: Monitoring changes in tissues and gases during the decomposition process.
Cultural/Religious Sensitivity: An alternative to invasive autopsies when cutting open a body is not permitted.
Mass Disaster Victim Identification: Efficient imaging of multiple bodies for quick identification and trauma assessment.
== See also == CT scan Industrial radiography Cone beam computed tomography, applications in quality control and metrology. PCB reverse engineering, an application of industrial CT to image printed circuit boards non-destructively. == References ==
Wikipedia/Industrial_CT_scanning
Model-based definition (MBD), sometimes called digital product definition (DPD), is the practice of using 3D models (such as solid models, 3D PMI and associated metadata) within 3D CAD software to define (provide specifications for) individual components and product assemblies. The types of information included are geometric dimensioning and tolerancing (GD&T), component level materials, assembly level bills of materials, engineering configurations, design intent, etc. By contrast, other methodologies have historically required accompanying use of 2D engineering drawings to provide such details. == Use of the 3D digital data set == Modern 3D CAD applications allow for the insertion of engineering information such as dimensions, GD&T, notes and other product details within the 3D digital data set for components and assemblies. MBD uses such capabilities to establish the 3D digital data set as the source of these specifications and design authority for the product. The 3D digital data set may contain enough information to manufacture and inspect the product without the need for engineering drawings. Engineering drawings have traditionally contained such information. In many instances, use of some information from the 3D digital data set (e.g., the solid model) allows for rapid prototyping of the product via various processes, such as 3D printing. A manufacturer may be able to feed 3D digital data directly to manufacturing devices such as CNC machines to manufacture the final product. == Limited Dimension Drawing == A Limited Dimension Drawing (LDD), sometimes called a Reduced Dimension Drawing, is a 2D drawing that contains only critical information, with a note that all missing information is to be taken from an associated 3D model. For companies in transition to MBD from traditional 2D documentation, a Limited Dimension Drawing allows for referencing 3D geometry while retaining a 2D drawing that can be used in existing corporate procedures. Only limited information is placed on the 2D drawing, and a note is added to notify manufacturers that they must build from the 3D model for any dimensions not found on the 2D drawing. == Standardization == In 2003, ASME published the ASME Y14.41 Digital Product Definition Data Practices, which was revised in 2012 and again in 2019. The standard provides for the use of many MBD aspects, such as GD&T display and other annotation behaviors within a 3D modelling environment. ISO 16792 standardizes MBD within the ISO standards, sharing many similarities with the ASME standard. Other standards, such as ISO 1101 and AS9100, also make use of MBD. In 2013, the United States Department of Defense released MIL-STD-31000 Revision A to codify the use of MBD as a requirement for technical data packages (TDP). == See also == ASME Y14.41 CAD standards == References == == External links == Model-centric Design, Design World, 2008 Patel, Nikunj (February 2, 2016). "The Argument Against Model-Based Definition". Design News. Ruemler, Shawn P; Zimmerman, Kyle E; Hartman, Nathan W; Hedberg, Thomas; Barnard Feeney, Allison (2016). "Promoting Model-Based Definition to Establish a Complete Product Definition". Journal of Manufacturing Science and Engineering. 139 (5): 051008. doi:10.1115/1.4034625. PMC 5215895. PMID 28070155. Quintana, Virgilio; Rivest, Louis; Pellerin, Robert; Venne, Frédérick; Kheddouci, Fawzi (2010). "Will Model-based Definition replace engineering drawings throughout the product lifecycle? A global perspective from aerospace industry". Computers in Industry. 61 (5): 497–508.
doi:10.1016/j.compind.2010.01.005. Miller, Alexander Mcdermott; Alvarez, Ramon; Hartman, Nathan (2018). "Towards an extended model-based definition for the digital twin". Computer-Aided Design and Applications. 15 (6): 880–91. doi:10.1080/16864360.2018.1462569. Zhu, Wenhua; Bricogne, Matthieu; Durupt, Alexandre; Remy, Sébastien; Li, Baorui; Eynard, Benoit (2016). "Implementations of Model Based Definition and Product Lifecycle Management Technologies: A Case Study in Chinese Aeronautical Industry". IFAC-PapersOnLine. 49 (12): 485–90. doi:10.1016/j.ifacol.2016.07.664. Miller, Alexander Mcdermott; Hartman, Nathan W; Hedberg, Thomas; Barnard Feeney, Allison; Zahner, Jesse (2017). "Towards Identifying the Elements of a Minimum Information Model for Use in a Model-Based Definition". Volume 3: Manufacturing Equipment and Systems. V003T04A017. doi:10.1115/MSEC2017-2979. ISBN 978-0-7918-5074-9. Furrer, David U; Dimiduk, Dennis M; Cotton, James D; Ward, Charles H (2017). "Making the Case for a Model-Based Definition of Engineering Materials". Integrating Materials and Manufacturing Innovation. 6 (3): 249–63. doi:10.1007/s40192-017-0102-7. Uski, Pekka; Pulkkinen, Antti; Koskinen, Kari T. (2016). Aaltonen, Jussi; Virkkunen, Riikka; Koskinen, Kari T.; Kuivanen, Risto (eds.). Can a sheet metal product be manufactured without drawings? – Product lifecycle's point of view (PDF). Proceedings of the 1st Annual SMACC Research Seminar 2016. Tampere: Tampere University of Technology. pp. 109–11. ISBN 978-952-15-3832-2. Quintana, Virgilio; Rivest, Louis; Pellerin, Robert; Kheddouci, Fawzi (2012). "Re-engineering the Engineering Change Management process for a drawing-less environment". Computers in Industry. 63 (1): 79–90. doi:10.1016/j.compind.2011.10.003. Hedberg, Thomas D; Hartman, Nathan W; Rosche, Phil; Fischer, Kevin (2016). "Identified research directions for using manufacturing knowledge earlier in the product life cycle". International Journal of Production Research. 55 (3): 819–827. doi:10.1080/00207543.2016.1213453. PMC 5155444. PMID 27990027. Ma, Qin Yi; Song, Li Hua; Xie, Da Peng; Zhou, Mao Jun (2017). "Development of CAD Model Annotation System Based on Design Intent". Applied Mechanics and Materials. 863: 368–72. doi:10.4028/www.scientific.net/AMM.863.368. S2CID 114427127.
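To make the idea of a 3D data set that carries its own specifications concrete, the sketch below shows a hypothetical, vendor-neutral product definition in which GD&T callouts and material data are attached to model features rather than to a 2D drawing. Every class, field, and value here is invented for illustration; real MBD data sets live inside CAD formats and exchange standards such as those covered by ASME Y14.41 and ISO 16792, not in hand-rolled JSON.

```python
from dataclasses import dataclass, field
import json

@dataclass
class Tolerance:
    kind: str                      # e.g. "diameter", "position", "flatness" (illustrative)
    value_mm: float
    datums: list = field(default_factory=list)

@dataclass
class FeatureAnnotation:
    feature_id: str                # identifier of a face or hole in the 3D model (hypothetical)
    nominal_mm: float
    tolerances: list = field(default_factory=list)
    note: str = ""

@dataclass
class PartDefinition:
    part_number: str
    revision: str
    material: str
    annotations: list = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize the product definition for exchange with downstream tools."""
        return json.dumps(self, default=lambda o: o.__dict__, indent=2)

# A bracket whose mounting hole carries its size and position tolerance in the model itself,
# so no separate 2D drawing is needed to convey the requirement.
bracket = PartDefinition(
    part_number="BRK-1001",
    revision="B",
    material="AL 6061-T6",
    annotations=[
        FeatureAnnotation(
            feature_id="hole_1",
            nominal_mm=6.0,
            tolerances=[
                Tolerance(kind="diameter", value_mm=0.05),
                Tolerance(kind="position", value_mm=0.10, datums=["A", "B", "C"]),
            ],
            note="Ream after anodize",
        )
    ],
)
print(bracket.to_json())
```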
Wikipedia/Model-based_definition
In integrated circuit design, physical design is a step in the standard design cycle which follows circuit design. At this step, circuit representations of the components (devices and interconnects) of the design are converted into geometric representations of shapes which, when manufactured in the corresponding layers of materials, will ensure the required functioning of the components. This geometric representation is called integrated circuit layout. This step is usually split into several sub-steps, which include both design and verification and validation of the layout. Modern-day integrated circuit (IC) design is split into front-end design using HDLs and back-end design, or physical design. The inputs to physical design are (i) a netlist, (ii) library information on the basic devices in the design, and (iii) a technology file containing the manufacturing constraints. Physical design is usually concluded by layout post processing, in which amendments and additions to the chip layout are performed. This is followed by the fabrication or manufacturing process, where designs are transferred onto silicon dies, which are then packaged into ICs. Each of the phases mentioned above has design flows associated with it. These design flows lay down the process and guidelines/framework for that phase. The physical design flow uses the technology libraries that are provided by the fabrication houses. These technology files provide information regarding the type of silicon wafer used, the standard cells used, the layout rules (like DRC in VLSI), etc. The physical design engineer (sometimes called physical engineer or physical designer) is responsible for the design and layout (routing), specifically in ASIC/FPGA design. == Divisions == Typically, IC physical design is categorized into full-custom and semi-custom design. Full custom: the designer has full flexibility over the layout design; no predefined cells are used. Semi-custom: pre-designed library cells (preferably tested with DFM) are used; the designer has flexibility in the placement of the cells and in routing. ASICs are typically used for full-custom design and FPGAs for semi-custom design flows, because in an ASIC one has the flexibility to design or modify design blocks from vendor-provided libraries; this flexibility is missing in semi-custom flows using FPGAs (e.g. Altera). == ASIC physical design flow == The main steps in the ASIC physical design flow are: design netlist (after synthesis), floorplanning, partitioning, placement, clock-tree synthesis (CTS), routing, physical verification, and layout post processing with mask data generation. These steps are just the basics. There are detailed PD flows that are used depending on the tools used and the methodology/technology. Some of the tools/software used in the back-end design are: Cadence (Cadence Encounter RTL Compiler, Encounter Digital Implementation, Cadence Voltus IC Power Integrity Solution, Cadence Tempus Timing Signoff Solution); Synopsys (Design Compiler, IC Compiler II, IC Validator, PrimeTime, PrimePower, PrimeRail); Magma (BlastFusion, etc.); and Mentor Graphics (Olympus SoC, IC-Station, Calibre). The ASIC physical design flow uses the technology libraries that are provided by the fabrication houses. Technologies are commonly classified according to minimal feature size. Standard sizes, in the order of miniaturization, are 2μm, 1μm, 0.5μm, 0.35μm, 0.25μm, 180nm, 130nm, 90nm, 65nm, 45nm, 28nm, 22nm, 18nm, 14nm, etc.
They may also be classified according to major manufacturing approaches: n-well process, twin-well process, SOI process, etc. == Design netlist == Physical design is based on a netlist, which is the end result of the synthesis process. Synthesis converts the RTL design, usually coded in VHDL or Verilog HDL, into a gate-level description that the next set of tools can read and understand. This netlist contains information on the cells used, their interconnections, area used, and other details. Typical synthesis tools are Cadence RTL Compiler/Build Gates/Physically Knowledgeable Synthesis (PKS) and Synopsys Design Compiler. During the synthesis process, constraints are applied to ensure that the design meets the required functionality and speed (specifications). Only after the netlist is verified for functionality and timing is it sent to the physical design flow. == Steps == === Partitioning === Partitioning is a process of dividing the chip into small blocks. This is done mainly to separate different functional blocks and also to make placement and routing easier. Partitioning can be done in the RTL design phase, when the design engineer partitions the entire design into sub-blocks and then proceeds to design each module. These modules are linked together in the main module, called the top-level module. This kind of partitioning is commonly referred to as logical partitioning. The goal of partitioning is to split the circuit such that the number of connections between partitions is minimized. === Floorplanning === The second step in the physical design flow is floorplanning. Floorplanning is the process of identifying structures that should be placed close together, and allocating space for them in such a manner as to meet the sometimes conflicting goals of available space (cost of the chip), required performance, and the desire to have everything close to everything else. Based on the area of the design and the hierarchy, a suitable floorplan is decided upon. Floorplanning takes into account the macros used in the design, memory, other IP cores and their placement needs, the routing possibilities, and also the area of the entire design. Floorplanning also determines the IO structure and aspect ratio of the design. A bad floorplan will lead to wastage of die area and routing congestion. In many design methodologies, area and speed are the subjects of trade-offs. This is due to limited routing resources, as the more resources used, the slower the operation. Optimizing for minimum area allows the design both to use fewer resources and to have greater proximity between the sections of the design. This leads to shorter interconnect distances, fewer routing resources used, faster end-to-end signal paths, and even faster and more consistent place and route times. Done correctly, there are no negatives to floorplanning. As a general rule, data-path sections benefit most from floorplanning, whereas random logic, state machines, and other non-structured logic can safely be left to the placer section of the place and route software. Data paths are typically the areas of the design where multiple bits are processed in parallel, with each bit being modified the same way, with perhaps some influence from adjacent bits. Example structures that make up data paths are adders, subtractors, counters, registers, and muxes. === Placement === Before the start of placement optimization, all wire load models (WLM) are removed. Placement uses RC values from the virtual route (VR) to calculate timing. The VR is the shortest Manhattan distance between two pins. VR RCs are more accurate than WLM RCs.
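As an illustration of how placers estimate interconnect using Manhattan distance, the following sketch computes the half-perimeter wirelength (HPWL) metric for a toy placement. The function names, netlist, and pin coordinates are invented for the example and do not correspond to any EDA tool's API; real placers combine such wirelength estimates with timing and congestion costs.

```python
from typing import Dict, List, Tuple

Point = Tuple[float, float]  # (x, y) pin location in micrometres

def manhattan(a: Point, b: Point) -> float:
    """Shortest Manhattan (virtual-route style) distance between two pins."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def hpwl(pins: List[Point]) -> float:
    """Half-perimeter wirelength of a net: half the perimeter of the pins' bounding box.
    For a two-pin net this equals the Manhattan distance between the pins."""
    xs = [p[0] for p in pins]
    ys = [p[1] for p in pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def total_wirelength(netlist: Dict[str, List[Point]]) -> float:
    """Sum of HPWL over all nets; placement optimization tries to reduce this figure."""
    return sum(hpwl(pins) for pins in netlist.values())

# Toy placement with two invented nets.
netlist = {
    "clk_net": [(0.0, 0.0), (10.0, 4.0), (3.0, 8.0)],
    "data_net": [(2.0, 2.0), (6.0, 5.0)],
}
print(total_wirelength(netlist))  # 18.0 + 7.0 = 25.0
```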
Placement is performed in four optimization phases: pre-placement optimization, in-placement optimization, post-placement optimization (PPO) before clock tree synthesis (CTS), and PPO after CTS. Pre-placement optimization optimizes the netlist before placement; high-fanout nets (HFNs) are collapsed, and cells can be downsized. In-placement optimization re-optimizes the logic based on the VR; it can perform cell sizing, cell moving, cell bypassing, net splitting, gate duplication, buffer insertion, and area recovery. Optimization iterates over setup fixing, incremental timing analysis, and congestion-driven placement. Post-placement optimization before CTS performs netlist optimization with ideal clocks; it can fix setup, hold, and maximum transition/capacitance violations, perform placement optimization based on global routing, and redo HFN synthesis. Post-placement optimization after CTS optimizes timing with the propagated clock and tries to preserve clock skew. === Clock tree synthesis === The goal of clock tree synthesis (CTS) is to minimize skew and insertion delay. The clock is not propagated before CTS. After CTS, hold slack should improve. The clock tree begins at the clock source defined in the .sdc file and ends at the stop pins of the flops. There are two types of stop pins, known as ignore pins and sync pins. 'Don't touch' circuits and pins in the front end (logic synthesis) are treated as 'ignore' circuits or pins in the back end (physical synthesis). 'Ignore' pins are ignored for timing analysis. If the clock is divided, then separate skew analysis is necessary. Global skew achieves zero skew between two synchronous pins without considering the logic relationship. Local skew achieves zero skew between two synchronous pins while considering the logic relationship. If the clock is intentionally skewed to improve setup slack, it is known as useful skew. Rigidity is a term coined in Astro to indicate the relaxation of constraints; the higher the rigidity, the tighter the constraints. In clock tree optimization (CTO), the clock can be shielded so that noise is not coupled to other signals, but shielding increases area by 12 to 15%. Since the clock signal is global in nature, the same metal layer used for power routing is also used for the clock. CTO is achieved by buffer sizing, gate sizing, buffer relocation, level adjustment and HFN synthesis. Setup slack is improved in the pre-placement, in-placement, and pre-CTS post-placement optimization stages, while hold slack is neglected; hold slack is then improved in post-placement optimization after CTS. As a result of CTS, many buffers are added; generally, around 650 buffers are added for 100k gates. === Routing === There are two types of routing in the physical design process: global routing and detailed routing. Global routing allocates routing resources that are used for connections and performs track assignment for each net. Detailed routing makes the actual connections. Constraints that must be taken care of during routing include DRC, wire length, and timing.
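To illustrate what a detailed router does at its simplest, the sketch below implements the classic Lee algorithm, which finds a shortest rectilinear connection on a routing grid by breadth-first search around blockages. The grid and pin locations are invented for the example; real routers additionally handle multiple metal layers, design rules, vias, and timing.

```python
from collections import deque

def lee_route(grid, src, dst):
    """Shortest rectilinear path from src to dst on a routing grid.

    grid: 2D list where 0 = free cell and 1 = blocked (obstacle or already-routed wire).
    src, dst: (row, col) pin locations. Returns the path as a list of cells, or None.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {src: None}                      # wavefront expansion records each cell's parent
    frontier = deque([src])
    while frontier:
        cell = frontier.popleft()
        if cell == dst:
            path = []
            while cell is not None:         # backtrace from target to source
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # no route exists; a real router would rip up and re-route other nets

# Toy example: route around a blocked column.
grid = [
    [0, 0, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
]
print(lee_route(grid, (0, 0), (0, 3)))
```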
=== Physical verification === Physical verification checks the correctness of the generated layout design. This includes verifying that the layout complies with all technology requirements (design rule checking, DRC), is consistent with the original netlist (layout versus schematic, LVS), has no antenna effects (antenna rule checking), and complies with all electrical requirements (electrical rule checking, ERC). It also includes density verification at the full-chip level; cleaning density is a very critical step at lower technology nodes. === Layout post processing === Layout post processing, also known as mask data preparation, often concludes physical design and verification. It converts the physical layout (polygons) into mask data (instructions for the photomask writer). It includes chip finishing, such as inserting company/chip labels and final structures (e.g., the seal ring and filler structures); generating a reticle layout with test patterns and alignment marks; and layout-to-mask preparation, which extends the layout data with graphics operations (e.g., resolution enhancement technologies, RET) and adjusts the data to the mask production devices (photomask writer). == See also == FEOL BEOL == References ==
Wikipedia/Physical_design_(electronics)
A design choice describes the planned way of satisfying an engineering development requirement when that requirement could be satisfied in more than one way. Because there are often multiple ways to satisfy a requirement, choices must be made among the possible design options. Selection is frequently based on financial considerations, often resulting in the least expensive option. In civil engineering, design choices typically derive from basic principles of materials science and structural design. A suspension bridge, for example, uses the fact that steel is extremely efficient in tension, while a prestressed concrete bridge takes advantage of concrete's relatively low cost by weight and its ability to sustain high compressive loading (see compression). == References ==
Wikipedia/Design_choice
In chemical biology and biomolecular engineering, rational design (RD) is an umbrella term for the strategy of creating new molecules with a certain functionality, based upon the ability to predict, through physical models, how the molecule's structure (specifically derived from its motifs) will affect its behavior. This can be done either from scratch or by making calculated variations on a known structure, and usually complements directed evolution. == Applications == As an example, rational design is used to decipher collagen stability, map ligand-receptor interactions, unveil protein folding and dynamics, and create extra-biological structures using fluorinated amino acids. To treat cancer, rational design is used for targeted therapies in which proteins are engineered to modify the communication of cells with their environment. There is also the rational design of alpha-alkyl auxin molecules, which are auxin analogs capable of binding and blocking the formation of the hormone receptor complex. Other applications of rational design include: Protein design Nucleic acid design Drug design Pathway design == References ==
Wikipedia/Rational_design
Floral design or flower arrangement is the art of using plant material and flowers to create an eye-catching and balanced composition or display. Evidence of refined floral design is found as far back as the culture of ancient Egypt. Floral designs, called arrangements, incorporate the five elements and seven principles of floral design. Floral design is considered a section of floristry, but it pertains only to the design and creation of arrangements; it does not include the marketing, merchandising, care, growing, or delivery of flowers. Common flower arrangements in floral design include vase arrangements, wreaths, nosegays, garlands, festoons, boutonnieres, corsages, and bouquets. == History == The Eastern, Western, and European styles have all influenced the commercial floral design industry as it is today. Western design historically is characterized by symmetrical, asymmetrical, horizontal, and vertical styles of arrangement. The history of flower arrangement first dates back to Ancient Egypt, and has gradually evolved over time. === Ancient civilizations === Egyptians were among the first to place lotus flowers and buds in vases nearly 4,000 years ago. Egyptians also created bouquets, wreaths, garlands, headwear, and collars. These arrangements often used lotus and papyrus, as they were seen as sacred plants to the goddess Isis. Ancient Greeks and Romans also created garlands and wreaths to wear. Greeks and Romans also created cornucopias full of fruits and vegetables as religious offerings. === Asia === Chinese and Korean arrangements were, and still are, traditionally based upon the Confucian idea of reflection, the Buddhist principle of preservation, and Taoist symbolism. The arrangements of the Chinese and Koreans often use containers of varying height and shape, and use natural elements, such as rocks. Ikebana is the Japanese style of floral design, and incorporates the three main line placements that correspond with heaven, humans, and the earth. === Europe === During the Renaissance, pieces often had a degree of symbolism and used bright, vivid, and contrasting triadic colors. Designs were symmetrical and combined fresh and dried material, as well as fruits and vegetables. These arrangements were often triangular, arching, or ellipse-shaped. In French design, arrangements often used soft pastel colors. Arrangements were often light and airy, and stressed the individual beauty of each flower itself, rather than the entire arrangement. Pieces were semi-ovoid, soft and airy, had a feminine design, were symmetrical, and had no focal point. They accentuated rhythm with curves, lines, and flourishes of plant material. English design drew from the vast variety of plant materials that were available in estates and the countryside. Most arrangements during the various periods were formal pieces, generally triangular in shape, and symmetrical. === The Americas === In the Americas, during the Colonial Period (1607–1699), arrangements were made using gathered wildflowers, grasses, and seed pods. These arrangements reflected a simple lifestyle with few luxuries, mirroring the lives of the first colonists to arrive there. American arrangements then evolved from numerous influences, primarily European. As such, American pieces began to reflect the sophistication, symmetry, and shapes of European design ideals of the time. === Modern day === In the mid-20th century, flower arranging and floral design came to be seen as an art form.
While modern floral designers and arrangers are still inspired by naturalistic 19th-century designs, they tend to want to break free from the rigid patterns and restrictions of past period designs. This led to the creation of abstract designs in modern floral arrangement. Other modern designers, however, did not feel inspired or drawn to abstract designs. As such, these designers began to create new design styles. Today's floral arrangements are born out of these two factors. Modern arrangements range from zero abstraction, in which pieces and components are untreated and organized naturally, to total abstraction, which totally disregards patterns and rules. Today, there are many styles of floral design including the Botanical Style, the Garden Style (Hand Tied, Compote or Armature), the Crescent Corsage, the Nosegay Corsage, Pot au Fleur, the Inverted "T", Parallel Systems, Western Line, the Hedgerow Design, Mille de Fleur, and Formal Linear. == Design == === Principles === When creating flower arrangements, there are generally seven principles that floral designers must incorporate into their arrangement to create a flattering and appealing piece. These seven principles include: Proportion: the relationship between the sizes of elements used to create the design (e.g., flowers, foliage, vase, accessories). Scale: the relationship between the overall size of the design and the setting it is placed in. Balance: comprises physical balance and visual balance. Physical balance is the distribution of materials and weight across the arrangement; the arrangement should be stable and not at risk of falling over. Visual balance is the poise an arrangement displays at first glance. There are three types of visual balance: symmetrical, asymmetrical, and open. Focal point: the main feature of the design and/or the first thing that attracts the viewer's eye. Rhythm: the visual flow of the arrangement. This element should encourage the viewer's gaze to move inward, outward, up, and down while looking at the arrangement. Achieved through colors, shapes, lines, textures, and spaces. Harmony: the pleasing combination of colors, material, and texture used in the arrangement. Unity: everything is placed with purpose; achieved when the other six principles are in order. It is important to keep in mind that not every arrangement will use all seven principles of design. For example, French Baroque and Rococo style arrangements do not include a focal point. Rococo designs also disregarded proportion; they were to be much taller than they were wide. Some traditional designs disregarded space (and therefore a part of rhythm). Modern abstract designers may disregard the seven principles entirely. === Elements === In addition to the seven principles, there are also five elements of design a designer must keep in mind when arranging flowers. These five elements include: Line: provides the shape and structure for the design. Line also creates paths for the viewer's eye to follow when viewing the arrangement. Lines can be defined (clearly visible) or implied (suggested by changes in color, tone, and texture). Line helps build the dimensions and overall shape of the design. Color: the color of the arrangement. There are numerous color schemes, such as monochromatic, triadic, analogous, or complementary. Different color schemes provide different effects on the feel of the arrangement. Form: the height, width, and depth of the arrangement.
Form also helps build the dimensions and overall shape of the design, much like line. Space: the spacing of flowers, foliage, and other materials. Space ensures every flower is visible, and that the design is not too clumpy, constricted, spaced out, or empty. Texture: the different textures used in an arrangement. Texture gives the arrangement diversity and interest. Texture is one way a floral designer can achieve rhythm. Textures can be smooth, wrinkled, rough, glossy, etc. == Media == === Fresh === The vast majority of the media used in floral design is fresh, or living, media. Fresh media includes flowers and foliage. ==== Flowers ==== Flowers used in floral design are often broken into four categories: line flowers, form flowers, mass flowers, and filler flowers. Each category serves its own purpose in achieving an element or principle of design. The four categories are listed as follows: Line flowers are tall spikes of flowers that bloom along the stem of the plant. They create the outline for an arrangement and determine the height and width of the design. They can be straight, or naturally curving. Most line flowers have larger flowers at the bottom of the stem that gradually become smaller toward the end of the spike. This creates rhythm in the design, as the eye naturally follows the progression. Examples of line flowers include snapdragons, delphiniums, liatris, gladiolus, stock, cattails, and pussywillows. Form flowers are flowers that have interesting colors, textures, and/or patterns that draw attention and stand out among the other pieces in the arrangement. They are most often used as the focal point of the arrangement. Form flowers include irises, calla lilies, anthurium, and orchids. Mass flowers consist of a single stem with one solid, rounded head at the top of the stem. They add mass and visual weight to an arrangement. Mass flowers are often inserted near the rim of the container to draw attention to the focal point, or to serve as the focal point themselves. Mass flowers are often considered the "star of the show" in an arrangement. Often, more than one type of mass flower is used to create variety and avoid monotony. Mass flowers include carnations, chrysanthemums, daisies, anemone, dahlias, hydrangeas, and roses. Filler flowers are composed of small "sprays" of flowers. Filler flowers are used, as the name suggests, to fill in empty spaces among mass flowers and the framework of the design. Filler flowers also add further dimension to the arrangement. Examples of filler flowers are baby's breath and statice. A flower defined in one category is not excluded from other categories. For example, chrysanthemums can be considered either a mass flower or a filler flower, depending on the size and variety of the bloom. Anthuriums and orchids can be considered form flowers, as well as mass flowers. Other flowers commonly used by floral designers include Peruvian lilies, cosmos, freesias, gardenia, hyacinth, kalanchoe, larkspur, lavender, lilac, lilies, limonium, lupine, peonies, phlox, protea, ranunculus, sedum, solidago, sunflowers, tulips, and zinnias. ==== Foliage ==== Much like flowers, foliage can also be divided into the same four categories. Usually, foliage is meant to accent what is being done by its flower counterparts. Line foliages are effective for repeating and complementing lines established by the line flowers.
This creates repetition and unity within the arrangement. Much like line flowers, they can also be straight or curved. Examples of line foliage include bear grass, flax, ivy, and flat ferns, such as sword fern. Form foliages also have unique textures, patterns, or colors that allow them to shine through and stand out in an arrangement. Form foliage is often used to achieve the element of space. It is also used to draw the viewer's eye to the focal point. Form foliages include seeded eucalyptus, calathea, equisetum, dieffenbachia, and galax. Mass foliages have the same purpose as mass flowers: to add mass and visual weight to the arrangement. However, they are also effective in filling empty space not occupied by flowers and hiding the mechanics of the design (e.g., floral foam, pot tape, etc.). Mass foliages include leatherleaf fern and salal. Filler foliages are used as accents to create harmony and unity. Depending on the texture of the filler foliage, there can be different effects on the feel of the arrangements. Fillers like plumosa asparagus and sprengeri fern lighten and soften, whereas coarse textures of plants like huckleberry and boxwood create contrast. As with flowers, a certain type of foliage may be included in more than one category. Leatherleaf fern can be considered a mass foliage or a line foliage, and ruscus can be considered form foliage or a line foliage. Other foliage used by floral designers today includes Italian ruscus, Israeli ruscus, dusty miller, monstera deliciosa, eucalyptus (including silver dollar, gunnii, and baby blue), various types of ferns (such as tree fern), camellia, olive branches, hypericum berries, and pittosporum. === Preserved === Dried materials such as bark, wood, dried flowers, dried (and often aromatic) inflorescences, leaves, leaf skeletons, and other preserved materials are common extensions of the art and media of floral design. They are of practical importance in that they last indefinitely and are independent of the seasons. Their materials offer effects and associations complementary to, and contrasting with, fresh flowers and foliage. == Tools == To create an arrangement, a floral designer has to use a multitude of tools. In general, the most common tools are floral tape, pot tape, glue, flower frogs, cutting tools, floral foam, containers, and wire. Vases and other containers are used to hold the arrangement. They often lend to the final look of a piece, and come in a variety of shapes and sizes to suit numerous types of projects. Floral foam is a piece of dense foam that holds moisture and keeps flowers in place. Most floral foam has a matching container that holds the foam securely simply by placing it inside. However, floral foam can be cut into any shape, and therefore placed in any container. In recent years, there has been controversy over the environmental impact of floral foam, as well as the potential negative health effects from inhaling the powder created from unsoaked foam. Nevertheless, floral foam is still an essential tool in floral design. Cutting tools, such as floral knives, floral shears, pruners, and ribbon scissors, can be used to cut a variety of materials in floral design. Knives can be used to cut flowers or floral foam. Shears and pruners can also be used to trim and cut foliage and flowers. Ribbon scissors are used to cut ribbon and twine.
Adhesive tools include floral tape, pot tape, floral adhesive (also known as cold glue), and hot glue. Floral tape is most often used to secure flowers together or to cover the mechanics of an arrangement, especially when creating a boutonniere or corsage. Pot tape is used to create a grid pattern in vases, which helps keep flowers and foliage in place. Pot tape can also be used to secure floral foam to a container. Cold glue is used to secure fresh, living flowers together or in place for an arrangement. Hot glue is used to glue non-living media in place or together. Wire is used in floral design for a variety of purposes. It can be used to secure ribbons in place, fix broken stems, or provide strength to weak or flimsy material. Wire comes in different gauges, or sizes, which are used for different applications. Flower frogs are devices that keep flowers upright. They usually have holes to place the flowers into, or spikes to "spear" the cut end of the flower into. == Education == With the ever-growing interest in the natural world and flowers, the floral industry continues to grow. Educational institutes providing training in floral design have expanded to include many state universities, certified design schools, and even high schools worldwide. Schools that teach floral design courses teach techniques to arrange flowers, plant identification, foliage and flower care for both fresh and preserved media, retail floral shop practices, and how to place and receive flower orders. Most of these programs reward students with certificates or degrees in floral design, shop management, or artisanship. Floral design courses are typically cheaper than most higher education programs, and can cost anywhere from US$125 to over US$25,000. Most courses take around six to eighteen months to complete. The following list is composed of schools and organizations that offer floral design courses: American Institute of Floral Designers, Anne Arundel Community College, British Florist Association, California Polytechnic State University, Hong Kong Academy of Flower Arrangement, Golden West College, Jackson High School (Stark County, Ohio), Judith Blacklock Flower School, Mississippi State University, New York Institute of Art & Design, Nobleman School of Floral Design, Rittners School of Floral Design, Texas State Florists Association, Texas Tech University School of Floral Design, and The London Flower School. == Community == === Floral shops === Floral shops are business establishments that create and sell floral designs. Floral shops often have a vast variety of flowers and foliage to use in creating arrangements, which can be custom ordered or pre-designed. Floral shops usually receive a majority of their business on the following holidays and events: Christmas, Valentine's Day, Administrative Professionals' Day, Mothers' Day, All Souls Day, Advent, Easter, weddings and funerals. Floral shops also include the other aspects of floristry, including marketing, buying and selling of flowers, production, etc. === Street vendors === Street vendors that sell flowers and arrangements are called flower sellers. Flower sellers are popular in countries such as Mexico, India, and Vietnam, and in the Southwestern United States. === Associations === Prominent industry associations that promote floral design worldwide include the American Institute of Floral Designers (AIFD), the Society of American Florists (SAF), and the National Association of Flower Arranging Societies (NAFAS).
In the United States, there are also numerous floriculture and floral design organizations for nearly all of the 50 states in the country. These associations promote floral design through workshops, conferences, flower shows, design competition opportunities and seminars. === Designers === Notable floral designers include Daniel Ost, Junichi Kakizaki, Paula Pryke, Phil Rulloda, Catherine Conlin, Constance Spry, Jennifer McGarigle, Judith Blacklock, Stanlee Gatti, Irene Hayes, Julia Clements, Azuma Makoto, and the White House Chief Floral Designer. == See also == Floristry Floral shop History of flower arrangement Flower seller Ikebana Floral Jamming The Big Flower Fight Interior design Fashion design == References ==
Wikipedia/Floral_design
Transgenerational design is the practice of making products and environments compatible with those physical and sensory impairments associated with human aging and which limit major activities of daily living. The term transgenerational design was coined in 1986 by Syracuse University industrial design professor James J. Pirkl to describe and identify products and environments that accommodate, and appeal to, the widest spectrum of those who would use them—the young, the old, the able, the disabled—without penalty to any group. The transgenerational design concept emerged from his federally funded design-for-aging research project, Industrial Design Accommodations: A Transgenerational Perspective. The project's two seminal 1988 publications provided detailed information about the aging process; informed and sensitized industrial design professionals and design students about the realities of human aging; and offered a useful set of guidelines and strategies for designing products that accommodate the changing needs of people of all ages and abilities. == Overview == The transgenerational design concept establishes a common ground for those who are committed to integrating age and ability within the consumer population. Its underlying principle is that people, including those who are aged or impaired, have an equal right to live in a unified society. Transgenerational design practice recognizes that human aging is a continuous, dynamic process that starts at birth and ends with death, and that throughout the aging process, people normally experience occurrences of illness, accidents and declines in physical and sensory abilities that impair one's independence and lifestyle. But most injuries, impairments and disabilities typically occur more frequently as one grows older and experiences the effects of senescence (biological aging). Four facts clarify the interrelationship of age with physical and sensory vulnerability: young people become old; young people can become disabled; old people can become disabled; and disabled people become old. Within each situation, consumers expect products and services to fulfill and enhance their lifestyle, both physically and symbolically. Transgenerational design focuses on serving their needs through what Cagan and Vogel call "a value oriented product development process". They note that a product is "deemed of value to a customer if it offers a strong effect on lifestyle, enabling features, and meaningful ergonomics", resulting in products that are "useful, usable, and desirable" during both short and long term use by people of all ages and abilities. Transgenerational design is "framed as a market-aware response to population aging that fulfills the need for products and environments that can be used by both young and old people living and working in the same environment". == Benefits == Transgenerational design benefits all ages and abilities by creating a harmonious bond between products and the people that use them. It satisfies the psychological, physiological, and sociological factors desired—and anticipated—by users of all ages and abilities: safety, comfort, convenience, usability, ergonomics, and accommodation. Transgenerational design addresses each element and accommodates the user—regardless of age or ability—by providing a sympathetic fit and unencumbered ease of use.
Such designs provide greater accessibility by offering wider options and more choices, thereby preserving and extending one's independence, and enhancing the quality of life for all ages and abilities—at no group's expense. Transgenerational designs accommodate rather than discriminate and sympathize rather than stigmatize. They do this by bridging the transitions across life's stages; responding to the widest range of individual differences; helping people remain active and independent; adapting to changing sensory and physical needs; maintaining one's dignity and self-respect; and enabling one to choose the appropriate means to accomplish activities of daily living. == History == Transgenerational design emerged during the mid-1980s coincident with the conception of universal design, an outgrowth of the disability rights movement and earlier barrier-free concepts. In contrast, transgenerational design grew out of the Age Discrimination Act of 1975, which prohibited "discrimination on the basis of age in programs and activities receiving Federal financial assistance", or excluding, denying or providing different or lesser services on the basis of age. The ensuing political interest and debate over the Act's 1978 amendments, which abolished mandatory retirement at age 65, made the issues of aging a major public policy concern by injecting them into the mainstream of societal awareness. === Background === At the start of the 1980s, the oldest members of the population, having matured during the Great Depression, were being replaced by a generation of Baby Boomers, steadily reaching middle age and approaching the threshold of retirement. Their swelling numbers signaled profound demographic changes ahead that would steadily expand the aging population throughout the world. Advancements in medical research were also changing the image of old age—from a social problem of the sick, poor, and senile, whose solutions depend on public policy—to the emerging reality of an active aging population having vigor, resources, and time to apply both. Responding to the public's growing awareness, the media, public policy, and some institutions began to recognize the impending implications. Time and Newsweek devoted cover stories to the "Greying of America". Local radio stations began replacing their rock-and-roll formats with music targeted to more mature tastes. The Collegiate Forum (Dow Jones & Co., Inc.) devoted its Fall 1982 issue entirely to articles on the aging work force. A National Research Conference on Technology and Aging, and the Office of Technology Assessment of the House of Representatives, initiated a major examination of the impact of science and technology on older Americans. In 1985, the National Endowment for the Arts, the Administration on Aging, the Farmer's Home Administration, and the Department of Housing and Urban Development signed an agreement to improve building, landscape, product and graphic design for older Americans, which included new research applications for old age that recognized the potential for making products easier to use by the elderly, and therefore more appealing and profitable. === Development === In 1987, recognizing the implications of population aging, Syracuse University's Department of Design, All-University Gerontology Center, and Center for Instructional Development initiated and collaborated on an interdisciplinary project, Industrial Design Accommodations: A Transgenerational Perspective.
The year-long project, supported by a Federal grant, joined the knowledge base of gerontology with the professional practice of industrial design. The project defined "the three aspects of aging as physiological, sociological, and psychological; and divided the designer's responsibility into aesthetic, technological, and humanistic concerns". The strong interrelationship between the physiological aspects of aging and industrial design's humanistic aspects established the project's instructional focus and categorized the physiological aspects of aging as the sensory and physical factors of vision, hearing, touch, and movement. This interrelationship was translated into a series of reference tables, which related specific physical and sensory factors of aging, and were included in the resulting set of design guidelines to: sensitize designers and design students to the aging process; provide them with appropriate knowledge about this process; and accommodate the changing needs of our transgenerational population. The project produced and published two instructional manuals—one for instructors and one for design professionals—each containing a detailed set of "design guidelines and strategies for designing transgenerational products". Under terms of the grant, instructional manuals were distributed to all academic programs of industrial design recognized by the National Association of Schools of Art and Design (NASAD). === Chronology === 1988: The term 'transgenerational design' first appears to have been publicly recognized and acknowledged by the Bristol-Myers Company in its annual report, which stated, "The trend towards transgenerational design seems to be catching on in some fields", noting that "transgenerational design has the added advantage of circumventing the stigmatizing label of being 'old'". 1989: The results of the 1987 Federal grant project were first presented at the national conference, Exploration: Technological Innovations for an Aging Population, supported in part by the American Association of Retired Persons (AARP) and the National Institute on Aging. The proceedings focused "on current efforts to address the impact of technology and an aging population, identification of high impact issues and problems, innovative ideas, and potential solutions". Also in 1989, Design News, the Japanese design magazine, introduced "the new concept of transgenerational design (for) coping with the needs of an aging population and its strategy", stating that "the impact will soon be felt by all global institutions" and "alter the present course of industrial design practice and education". 1990: The OXO company introduced the first group of 15 Good Grips kitchen tools to the U.S. market. "These ergonomically-designed, transgenerational tools set a new standard for the industry and raised the bar to consumer expectation for comfort and performance". Sam Farber, OXO's founder, stated that "population trends demand transgenerational products, products that will be useful to you throughout the course of your life" because "it extends the life of a product and its materials by anticipating the whole experience of the user". 1991: The Fall issue of the Design Management Journal addressed the issue of "Responsible Design" and introduced the transgenerational design concept in the article, "Transgenerational Design: A Strategy Whose Time Has Arrived".
The article presented a description, the rationale, and examples of early transgenerational products, and offered "insights on the rationale and benefits of such a transgenerational approach". 1993: The September–October issue of AARP The Magazine exposed the transgenerational design concept to readers in a featured article, "This Bold House", describing the concept, details, and benefits of a transgenerational house. The article noted that "easy-grip handles, flat thresholds, and adjustable-height vanities are just the beginning in the world's most accessible house," providing families of all ages and abilities with "what they will want and need their whole lives". In November, the transgenerational design concept was introduced in presentations to the European design community at the international symposiums, "Designing for Our Future Selves", held at the Royal College of Art in London and the Netherlands Design Institute in Rotterdam. 1994: The book, Transgenerational Design: Products for an Aging Population (Pirkl 1994), may be regarded as the prime mover of the widespread acceptance and practice of the transgenerational design concept. It presented the first specialized content and photographic examples of transgenerational products and environments, offering "practical strategies in response to population aging, along with case study examples based on applying a better understanding of age-related capabilities". It introduced the transgenerational design concept to the international design and gerontology communities, broadening the conventional idea of "environmental support" to include the product environment, sparking scholarly discussions and comparisons with other emerging concepts (universal design, design for all, inclusive design, and gerontechnology). 1995: The transgenerational design concept was presented at the first of the International Guest Lecture Series by World Experts, sponsored by the European Design for Aging Network (DAN), held consecutively at five international symposiums, "Designing for Our Future Selves": Royal College of Art, London, November 15; Eindhoven University of Technology, Eindhoven, November 16–19; The Netherlands Design Institute, Amsterdam, November 21; University of Art and Design, Helsinki, November 23–25; and National College of Art and Design, Dublin, November 26–29. 2000: "The Transgenerational House: A Case Study in Accessible Design and Construction" was presented in June at Designing for the 21st Century: An International Conference on Universal Design, held at the Rhode Island College of Art and Design, Providence, RI. 2007: Architectural Graphic Standards, published by the American Institute of Architects and commonly referred to as the "architect's bible", presented a "Transgenerational House" case study in its "Inclusive Design" section. Described as an "intricate exploration in how the execution of detailed thought can create a living environment that serves the young and old alike, across generations", the study includes plans for the room layout, kitchen, laundry, master bath, adjustable-height vanity, and roll-in shower. 2012: The proliferation of transgenerational design has diminished the tendency to associate age and disability with deficit, decline and incompetence by providing a market-aware response to population aging and the need for living and work environments used by young and old people living and working in the same environment.
Continuing to emerge as a growing strategy for developing products, services and environments that accommodate people of all ages and abilities, "transgenerational design has been adopted by major corporations, like Intel, Microsoft and Kodak" who are "looking at product development the same way as designing products for people with visual, hearing and physical impairments," so that people of any age can use them. Discussions between designers and marketers indicate that successful transgenerational design "requires the right balance of upfront research work, solid human factors analysis, extensive design exploration, testing and a lot of thought to get it right", and that "transgenerational design is applicable to any consumer products company—from appliance manufacturers to electronics companies, furniture makers, kitchen and bath and mainstream consumer products companies". == See also == Ageless computing Curb cut effect Development plan Disability rights movement Inclusion (disability rights) Inclusive design Sensory friendly Urban planning == References ==
Wikipedia/Transgenerational_design
Geometric modeling is a branch of applied mathematics and computational geometry that studies methods and algorithms for the mathematical description of shapes. The shapes studied in geometric modeling are mostly two- or three-dimensional (solid figures), although many of its tools and principles can be applied to sets of any finite dimension. Today most geometric modeling is done with computers and for computer-based applications. Two-dimensional models are important in computer typography and technical drawing. Three-dimensional models are central to computer-aided design and manufacturing (CAD/CAM), and widely used in many applied technical fields such as civil and mechanical engineering, architecture, geology and medical image processing. Geometric models are usually distinguished from procedural and object-oriented models, which define the shape implicitly by an opaque algorithm that generates its appearance. They are also contrasted with digital images and volumetric models which represent the shape as a subset of a fine regular partition of space; and with fractal models that give an infinitely recursive definition of the shape. However, these distinctions are often blurred: for instance, a digital image can be interpreted as a collection of colored squares; and geometric shapes such as circles are defined by implicit mathematical equations. Also, a fractal model yields a parametric or implicit model when its recursive definition is truncated to a finite depth. Notable awards of the area are the John A. Gregory Memorial Award and the Bézier award. == See also == 2D geometric modeling Architectural geometry Computational conformal geometry Computational topology Computer-aided engineering Computer-aided manufacturing Digital geometry Geometric modeling kernel List of interactive geometry software Parametric equation Parametric surface Solid modeling Space partitioning == References == == Further reading == General textbooks: Jean Gallier (1999). Curves and Surfaces in Geometric Modeling: Theory and Algorithms. Morgan Kaufmann. This book is out of print and freely available from the author. Gerald E. Farin (2002). Curves and Surfaces for CAGD: A Practical Guide (5th ed.). Morgan Kaufmann. ISBN 978-1-55860-737-8. Michael E. Mortenson (2006). Geometric Modeling (3rd ed.). Industrial Press. ISBN 978-0-8311-3298-9. Ronald Goldman (2009). An Integrated Introduction to Computer Graphics and Geometric Modeling (1st ed.). CRC Press. ISBN 978-1-4398-0334-9. Nikolay N. Golovanov (2014). Geometric Modeling: The mathematics of shapes. CreateSpace Independent Publishing Platform. ISBN 978-1497473195. For multi-resolution (multiple level of detail) geometric modeling : Armin Iske; Ewald Quak; Michael S. Floater (2002). Tutorials on Multiresolution in Geometric Modelling: Summer School Lecture Notes. Springer Science & Business Media. ISBN 978-3-540-43639-3. Neil Dodgson; Michael S. Floater; Malcolm Sabin (2006). Advances in Multiresolution for Geometric Modelling. Springer Science & Business Media. ISBN 978-3-540-26808-6. Subdivision methods (such as subdivision surfaces): Joseph D. Warren; Henrik Weimer (2002). Subdivision Methods for Geometric Design: A Constructive Approach. Morgan Kaufmann. ISBN 978-1-55860-446-9. Jörg Peters; Ulrich Reif (2008). Subdivision Surfaces. Springer Science & Business Media. ISBN 978-3-540-76405-2. Lars-Erik Andersson; Neil Frederick Stewart (2010). Introduction to the Mathematics of Subdivision Surfaces. SIAM. ISBN 978-0-89871-761-7. 
== External links == Geometry and Algorithms for CAD (Lecture Note, TU Darmstadt)
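As a brief illustration of the distinction drawn above between parametric descriptions and implicit equations, the sketch below contrasts the two for a circle. It is a minimal, hedged example, not taken from any particular geometric modeling library; the function names are illustrative.

```typescript
// A circle of radius r centred at (cx, cy), expressed two ways.

// Parametric form: a map from a parameter t in [0, 1) to a point on the curve.
function circlePoint(cx: number, cy: number, r: number, t: number): [number, number] {
  const angle = 2 * Math.PI * t;
  return [cx + r * Math.cos(angle), cy + r * Math.sin(angle)];
}

// Implicit form: the sign of f(x, y) = (x - cx)^2 + (y - cy)^2 - r^2
// classifies a point as inside (< 0), on (= 0), or outside (> 0) the circle.
function circleImplicit(cx: number, cy: number, r: number, x: number, y: number): number {
  return (x - cx) ** 2 + (y - cy) ** 2 - r * r;
}

// Sampling the parametric form gives a polygonal approximation of the shape.
const outline = Array.from({ length: 16 }, (_, i) => circlePoint(0, 0, 1, i / 16));
console.log(outline.length, circleImplicit(0, 0, 1, 0.5, 0.5) < 0); // 16, true
```

Sampling the parametric form yields points on the curve directly, whereas the implicit form can only answer whether a given point lies inside, on, or outside the shape; many geometric modeling tasks amount to converting between such representations.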
Wikipedia/Geometric_model
A graphics tablet (also known as a digitizer, digital graphic tablet, pen tablet, drawing tablet, external drawing pad or digital art board) is a computer input device that enables a user to hand draw or paint images, animations and graphics, with a special pen-like stylus, similar to the way a person draws pictures with a pencil and paper by hand. Graphics tablets may also be used to capture data or handwritten signatures. They can also be used to trace an image from a piece of paper that is taped or otherwise secured to the tablet surface. Capturing data in this way, by tracing or entering the corners of linear polylines or shapes, is called digitizing. The device consists of a rough surface upon which the user may "draw" or trace an image using the attached stylus, a pen-like drawing apparatus. The image is shown on the computer monitor, though some graphic tablets now also incorporate an LCD screen for a more realistic or natural experience and usability. Some tablets are intended as a replacement for the computer mouse as the primary pointing and navigation device for desktop computers. == History == The first electronic handwriting device was the Telautograph, patented by Elisha Gray in 1888. The first graphic tablet resembling contemporary tablets and used for handwriting recognition by a computer was the Stylator in 1957. Better known (and often misstated as the first digitizer tablet) is the RAND Tablet, also known as the Grafacon (for Graphic Converter), introduced in 1964. The RAND Tablet employed a grid of wires under the surface of the pad that encoded horizontal and vertical coordinates in a small electrostatic signal. The stylus received the signal by capacitive coupling, which could then be decoded back as coordinate information. The acoustic tablet, or spark tablet, used a stylus that generated clicks with a spark plug. The clicks were then triangulated by a series of microphones to locate the pen in space. The system was fairly complex and expensive, and the sensors were susceptible to interference by external noise. Digitizers were popularized in the mid-1970s and early 1980s by the commercial success of the ID (Intelligent Digitizer) and BitPad manufactured by the Summagraphics Corp. The Summagraphics digitizers were sold under the company's name but were also private-labeled for HP, Tektronix, Apple, Evans and Sutherland and several other graphic system manufacturers. The ID model was the first graphics tablet to make use of what was, at the time, the new Intel microprocessor technology. This embedded processing power allowed the ID models to have twice the accuracy of previous models while still making use of the same foundation technology. Key to this accuracy improvement were two US Patents issued to Stephen Domyan, Robert Davis, and Edward Snyder. The BitPad model was the first attempt at a low-cost graphics tablet, with an initial selling price of $555 when other graphics tablets were selling in the $2,000 to $3,000 price range. This lower cost opened up opportunities for would-be entrepreneurs to write graphics software for a multitude of new applications. These digitizers were used as the input device for many high-end CAD (Computer Aided Design) systems as well as bundled with PCs and PC-based CAD software like AutoCAD. These tablets used a magnetostriction technology which used wires made of a special alloy stretched over a solid substrate to accurately locate the tip of a stylus or the center of a digitizer cursor on the surface of the tablet. 
This technology also allowed proximity or "Z" axis measurement. In 1981, musician Todd Rundgren created the first color graphic tablet software for personal computers, which was licensed to Apple as the Utopia Graphic Tablet System. In 1981, the Quantel Paintbox color graphic workstation was released; this model was equipped with the first pressure-sensitive tablet. The first home computer graphic tablet was the KoalaPad, released in 1983. Though originally designed for the Apple II, the KoalaPad eventually broadened its applicability to other home computers including the TRS-80 Color Computer, Commodore 64, and Atari 8-bit computers. In the 1980s, several vendors of graphic tablets began to include additional functions, such as handwriting recognition and on-tablet menus. == Characteristics == Typically, tablets are characterized by the size of the device and of its drawing area (the "active area"), the resolution (measured in lines per inch, lpi), the pressure sensitivity (the number of levels available for varying stroke size with pressure), the number of buttons, and the type and number of interfaces (Bluetooth, USB, etc.). The actual drawing accuracy is limited by the pen's nib size. == Types == There have been many attempts to categorize the technologies that have been used for graphic tablets: Passive tablets Passive tablets make use of electromagnetic induction technology, where the horizontal and vertical wires of the tablet operate as both transmitting and receiving coils (as opposed to the wires of the RAND Tablet, which only transmit). The tablet generates an electromagnetic signal, which is received by the LC circuit in the stylus. The wires in the tablet then change to a receiving mode and read the signal generated by the stylus. Modern arrangements also provide pressure sensitivity and one or more buttons, with the electronics for this information present in the stylus. On older tablets, changing the pressure on the stylus nib or pressing a button changed the properties of the LC circuit, affecting the signal generated by the pen; modern tablets often encode this information into the signal as a digital data stream. By using electromagnetic signals, the tablet is able to sense the stylus position without the stylus even having to touch the surface, and powering the pen with this signal means that devices used with the tablet never need batteries. Activslate 50, the model used with Promethean whiteboards, also uses a hybrid of this technology. Active tablets Active tablets differ in that the stylus used contains self-powered electronics that generate and transmit a signal to the tablet. These styluses rely on an internal battery rather than the tablet for their power, resulting in a bulkier stylus. Eliminating the need to power the pen means that such tablets may listen for pen signals constantly, as they do not have to alternate between transmit and receive modes, which can result in less jitter. Optical tablets Optical tablets operate by means of a very small digital camera in the stylus, then doing pattern matching on an image of the paper. The most successful example is the technology developed by Anoto. Acoustic tablets Early models were described as spark tablets—a small sound generator was mounted in the stylus, and the acoustic signal picked up by two microphones placed near the writing surface. Some modern designs are able to read positions in three dimensions. Capacitive tablets These tablets have also been designed to use an electrostatic or capacitive signal. 
Scriptel's designs are one example of a high-performance tablet detecting an electrostatic signal. Unlike the type of capacitive design used for touchscreens, the Scriptel design is able to detect the position of the pen while it is in proximity to or hovering above the tablet. Many multi-touch tablets use capacitive sensing. For all these technologies, the tablet can use the received signal to also determine the distance of the stylus from the surface of the tablet, the tilt (angle from vertical) of the stylus, and other information in addition to the horizontal and vertical positions, such as clicking buttons of the stylus or the rotation of the stylus. Compared to touchscreens, a graphic tablet generally offers much higher precision and the ability to track an object which is not touching the tablet, and can gather much more information about the stylus, but it is typically more expensive and can only be used with the special stylus or other accessories. Some tablets, especially inexpensive ones aimed at young children, come with a corded stylus, using technology similar to older RAND tablets. == Pucks == After styluses, pucks are the most commonly used tablet accessory. A puck is a mouse-like device that can detect its absolute position and rotation. This is opposed to a mouse, which can only sense its relative velocity on a surface (most tablet drivers are capable of allowing a puck to emulate a mouse in operation, and many pucks are marketed as a "mouse"). Pucks range in size and shape; some are externally indistinguishable from a mouse, while others are a fairly large device with dozens of buttons and controls. Professional pucks often have a reticle or loupe which allows the user to see the exact point on the tablet's surface targeted by the puck, for detailed tracing and computer aided design (CAD) work. Pucks are used on the Microsoft Surface range and were recently used on the Dell Canvas. However, they have been largely discontinued by most manufacturers in favour of physical hotkeys and dials. == Embedded LCD tablets == Some graphics tablets incorporate an LCD into the tablet itself, allowing the user to draw or paint directly on the screen. Graphics tablet/screen hybrids offer advantages over both standard PC touchscreens and ordinary graphics tablets. Unlike touchscreens, they offer pressure sensitivity, and their input resolution is generally higher. While their pressure sensitivity and resolution are typically no better than those of ordinary tablets, they offer the additional advantage of directly seeing the location of the physical pen device relative to the image on the screen. This often allows for increased accuracy and a more tactile, "real" feeling to the use of the device. The graphics tablet manufacturer Wacom holds many patents on key technologies for graphics tablets, which forces competitors to use other technologies or license Wacom's patents. The displays are often sold for thousands of dollars. For instance, the Wacom Cintiq series ranges from just below US$1,000 to over US$2,000. Some commercially available graphics tablet/screen hybrids include: Monoprice 19-Inch Interactive Display Cintiq from Wacom Kamvas (e.g. Kamvas Studio 22) from Huion XP-PEN GAOMON Parblo ugee Xencelabs VEIKK There have also been do-it-yourself projects where conventional used LCD monitors and graphics tablets have been converted to a graphics tablet-screen hybrid. 
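To make the kind of information described above concrete, the sketch below shows a hypothetical stylus report and how an absolute tablet position could be mapped onto a screen. The field names and units are illustrative assumptions and do not correspond to any specific driver or API.

```typescript
// Hypothetical fields of a single stylus report; real drivers and APIs differ.
interface StylusReport {
  x: number;         // horizontal position, in tablet counts
  y: number;         // vertical position, in tablet counts
  pressure: number;  // 0 (hovering) .. 1 (fully pressed), normalised
  tiltX: number;     // tilt from vertical along the x axis, in degrees
  tiltY: number;     // tilt from vertical along the y axis, in degrees
  hovering: boolean; // true when the nib is in proximity but not touching
  buttons: number;   // bitmask of barrel-button states
}

// Absolute mapping: a given tablet position always lands on the same screen
// pixel, unlike a mouse, which only reports relative motion.
function tabletToScreen(
  r: StylusReport,
  tabletWidth: number, tabletHeight: number, // active area, in tablet counts
  screenWidth: number, screenHeight: number  // display size, in pixels
): { sx: number; sy: number } {
  return {
    sx: (r.x / tabletWidth) * screenWidth,
    sy: (r.y / tabletHeight) * screenHeight,
  };
}
```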
== Uses == Graphic tablets, because of their stylus-based interface and ability to detect some or all of pressure, tilt, and other attributes of the stylus and its interaction with the tablet, are widely considered to offer a very natural way to create computer graphics, especially two-dimensional computer graphics. Indeed, many graphic packages can make use of the pressure (and, sometimes, stylus tilt or rotation) information generated by a tablet, by modifying the brush size, shape, opacity, color, or other attributes based on data received from the graphic tablet. In East Asia, graphic tablets, known as "pen tablets", are widely used in conjunction with input-method editor software (IMEs) to write Chinese, Japanese, and Korean characters (CJK). The technology is popular and inexpensive and offers a method for interacting with the computer in a more natural way than typing on the keyboard, with the pen tablet supplanting the role of the computer mouse. Uptake of handwriting recognition among users who use alphabetic scripts has been slower. Graphic tablets are commonly used in the artistic world. Using a pen-like stylus on a graphic tablet combined with a graphics-editing program, such as Illustrator or Photoshop by Adobe Systems, Corel Painter, or Krita, gives artists a lot of precision when creating digital drawings or artwork. Photographers may also find that working with a graphic tablet during post-processing speeds up tasks such as creating a detailed layer mask or dodging and burning. Educators make use of tablets in classrooms to project handwritten notes or lessons and to allow students to do the same, as well as providing feedback on student work submitted electronically. Online teachers may also use a tablet for marking student work, or for live tutorials or lessons, especially where complex visual information or mathematical equations are required. Students are also increasingly using them as note-taking devices, especially during university lectures while following along with the lecturer. They facilitate a smooth online teaching process and are often used along with a face-cam to mimic the classroom experience. Tablets are also popular for technical drawings and CAD, as one can typically put a piece of paper on them without interfering with their function. Finally, tablets are gaining popularity as a replacement for the computer mouse as a pointing device. They can feel more intuitive to some users than a mouse, as the position of a pen on a tablet typically corresponds to the location of the pointer on the GUI shown on the computer screen. Those artists using a pen for graphic work may, as a matter of convenience, use a tablet and pen for standard computer operations rather than put down the pen and find a mouse. The popular rhythm game osu! allows a tablet to be used for play. Graphic tablets are available in various sizes and price ranges; A6-sized tablets are relatively inexpensive, while A3-sized tablets are far more expensive. Modern tablets usually connect to the computer via a USB or HDMI interface. == Similar devices == Interactive whiteboards offer high-resolution, wall-size graphic tablets up to 95" (241.3 cm), along with options for pressure sensitivity and multiple inputs. These are becoming commonplace in schools and meeting rooms around the world. Earlier resistive touch screen devices (like PDAs, early smartphones, tablet PCs, and the Nintendo DS) were typically equipped with styluses, but the accuracy of stylus input was very limited. 
The more modern capacitive touch screens, such as those found on some table computers, tablet computers and laptops, operate in similar ways, but they usually use either optical grids or a pressure-sensitive film instead, so they do not need a special pointing device. Some of the latest models with capacitive input can be equipped with specialized styluses, and these input devices can then be used similarly to a full-function graphics tablet. A graphic tablet is also used for audio-haptic products, where blind or visually impaired people touch swelled graphics on a graphic tablet and receive audio feedback. One product using this technology is the Tactile Talking Tablet, or T3. == See also == Handwriting movement analysis Digital art education Digital image Light pen Pantograph Pen computing == References ==
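As a small illustration of the pressure-driven brush behaviour described in the Uses section above, the sketch below maps normalised pen pressure to a brush radius and opacity. The response curve and limits are illustrative assumptions, not those of any particular graphics package.

```typescript
// Map normalised pen pressure (0..1) to brush radius and opacity.
// The curve and limits are illustrative; real applications expose them
// as user-configurable settings.
function brushFromPressure(
  pressure: number,
  maxRadius = 20
): { radius: number; opacity: number } {
  const p = Math.min(Math.max(pressure, 0), 1);          // clamp to 0..1
  const radius = 1 + (maxRadius - 1) * Math.pow(p, 1.5); // light strokes stay thin
  const opacity = 0.2 + 0.8 * p;                         // faint when light, opaque when pressed
  return { radius, opacity };
}

console.log(brushFromPressure(0.1)); // a thin, faint stroke
console.log(brushFromPressure(1.0)); // a full-width, opaque stroke
```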
Wikipedia/Graphics_tablet
Activity-centered design (ACD) is an extension of the human-centered design paradigm in interaction design. ACD features heavier emphasis on the activities that a user would perform with a given piece of technology. ACD has its theoretical underpinnings in activity theory, from which activities can be defined as actions taken by a user to achieve a goal. When working with activity-centered design, designers use research to gain insight into the users. Observations and interviews are typical approaches for learning more about the users' behavior. By mapping users' activities and tasks, the designer may notice tasks that are missing if the activity is to become easier to perform, and thus design solutions to accomplish those tasks. == References == Saffer, Dan. 2010. Designing for Interaction. Gay, Geri and Helene Hembrooke. 2004. Activity-Centered Design: An Ecological Approach to Designing Smart Tools and Usable Systems. Norman, Don. 2015. The Design of Everyday Things: Revised and Expanded Edition. Niaz Mahmud. "Activity Center Design". == Notes ==
Wikipedia/Activity-centered_design
Framework Oriented Design (FOD) is a programming paradigm that uses existing frameworks as the basis for an application design. The framework can be thought of as a fully functioning template application. Application development consists of modifying callback-procedure behaviour and modifying object behaviour using inheritance. This paradigm provides the patterns for understanding development with Rapid Application Development (RAD) systems such as Delphi, where the Integrated Development Environment (IDE) provides the template application and the programmer fills in the appropriate event handlers. The developer has the option of modifying existing objects via inheritance. == References == C++ Hierarchy Design Idioms by Stephen C. Dewhurst of www.semantics.org.
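A minimal sketch of the idea described above, written in TypeScript rather than Delphi's Object Pascal: the framework supplies a template application that owns the control flow, and the developer modifies behaviour by overriding callback (event handler) methods through inheritance. The class and method names are illustrative and do not belong to any real framework.

```typescript
// The framework supplies a "template application": it owns the control flow
// and calls back into hook methods that do nothing by default.
class FrameworkApplication {
  run(): void {
    this.onStartup();
    this.onFormCreate();
    // ...the framework's event loop would run here...
    this.onShutdown();
  }
  protected onStartup(): void {}    // default behaviour: nothing
  protected onFormCreate(): void {}
  protected onShutdown(): void {}
}

// The developer's work consists of subclassing and filling in the event
// handlers, modifying behaviour through inheritance rather than writing
// the application skeleton from scratch.
class InvoiceApplication extends FrameworkApplication {
  protected override onFormCreate(): void {
    console.log("build the invoice form and wire up its widgets");
  }
  protected override onShutdown(): void {
    console.log("flush unsaved invoices to disk");
  }
}

new InvoiceApplication().run();
```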
Wikipedia/Framework-oriented_design
Motorcycle design can be described as activities that define the appearance, function and engineering of motorcycles. Professionally, it is a branch of industrial design, similar to automotive design in using identical techniques and methodology, but confined by a set of conventions about what is acceptable to the buying public. These conventions have been shaped by the industry's and the media's broad acceptance of the assumption that the public will only purchase machines that bear more than a passing resemblance to competition machines of whatever kind. In some large OEM motorcycle manufacturers, the term designer can also be applied to the project leader or chief engineer charged with laying down the principal architecture of the vehicle. In recent years, it has also become associated with custom or "chopper" builder culture. == Professional design == Professional motorcycle designers almost always hold degrees in industrial design, industrial design engineering or similar, and have training in styling and modeling, as well as knowledge of aspects of technology associated with single-track vehicles. Although no degree as a specialisation exists per se, the majority of candidates graduate through colleges and universities with established transportation design courses, and are trained as automotive designers. Most OEM motorcycle manufacturers, such as Honda, Suzuki, Kawasaki, BMW, Ducati, Piaggio and others, have in-house design studios dedicated to this purpose, while others such as Yamaha and KTM depend on specialised independent design studios. == Methodology == === Design and engineering relationships === Due to the high importance of mechanical components or even exposed engines to motorcycle styling, designers will almost always have a greater sensitivity to and awareness of engineering than will typical car designers. In OEM situations, large teams of professional engineers and specialists will collaborate on each project development, allowing the designer to focus on the more intangible or subjective aspects of design, such as styling, human-machine interface psychology, and market and cultural relationships. In other matters such as pure mechanical ergonomics (such as seat height, handlebar placement, etc.), or basic layout (the location of major components, storage, etc.) there is usually considerable overlap between the designer and engineer. The designer will nominally approach each problem from a human interface, or "feel" or "irrational" point of view (example: "Does this material feel cold or warm, and is this feeling appropriate to this vehicle's target consumer?"), while the engineer will attack each problem with the "rational" or clinical approach of empirically weighing the cause and effect of each design decision against the project's technical and economic design targets (example: "Can this material be moulded into the designer's desired shape? Will that be too expensive to produce?"). === Research and Concept Design === In OEM motorcycle design, the normal procedure of developing a new motorcycle involves the same steps as in other professional design disciplines: identifying a target consumer, researching them to identify benchmarks and project targets, then proposing concept directions in a written form known as a Design Brief or QFD. From this point, artwork is developed to visually communicate the designer's ideas. These are presented in 2D drawing or illustrated form, from which a winning direction is down-selected for further development. 
Once a satisfactory design is established on paper (the term paper is a generalization that can include traditional hand renderings, digital artwork or CAD drawings), then full-scale modeling begins to realise the design in tangible 3D form. === Styling === Often used as an interchangeable term with "design", styling is in fact just one component of the design process. Typically, styling is developed through sketches, renderings and illustrations, then realised in 3D form using automotive styling clay, specialised industrial modeling foams such as Sibatool, Renshape or Epiwood, or, in increasingly limited cases, plaster or body filler. As the most subjective part of the design process, the various members of the development team must depend heavily on the judgment, skill and experience of the appointed designer to create an appropriate look. The most misunderstood element, and the most dangerous to the success of a product, is the idea that team members should evaluate the design based on personal tastes or preferences. Industrial design is not an art form, but a focused creative expression using the scientific data and analysis in the Design Brief and QFD as ultimate guidelines. The target user, their needs and tastes should be reflected in the final design, not necessarily exclusively those of the design team. Of course, many complex variables such as the OEM brand identity, past successes and failures, and whimsical trends often skew or distort styling decisions. In instances where the factors are overwhelming, OEMs may err on the side of cautious, conservative design. === Parallel development === Because of the need to reduce development time and costs, the "styling" design model is usually developed in parallel with the engineering 3D design. While there is an increasing amount of digital design input in the modern OEM design process, nearly all major motorcycle manufacturers still rely on full-scale clay models to render the master style model, then scan and import the styling surfaces into suitable 3D software packages (Alias, CATIA, ICEM Surf) for integration into the 3D engineering CAD platform (CATIA, ProEngineer, etc.). Once combined, the design team can virtually refine the motorcycle by optimising component assembly, checking for any undesirable interferences between parts, and predicting and eliminating possible engineering problems. Typically, designers and engineers will have the greatest number of conflicts during this phase of development, as designers will fight to carry the original styling and design of the clay model and artwork through into the production vehicle, while the engineer will eliminate all problems in the most efficient manner possible. The success of the final product depends heavily on the level of cooperation between these often conflicting needs. == Amateur and specialists == === Custom builders === In recent years, largely due to the popularity of television programs like Orange County Choppers and Biker Build-Off, the building of one-of-a-kind "chopper" or "cruiser" type motorcycles has become more mainstream, leading to a flourishing builder industry. As a whole, these vehicles are not designed in the professional sense, but rather crafted by hand by metalworkers and artisans using traditional skills. The resulting vehicles tend to be very elaborate, expensive and difficult or impossible to reproduce in mass production, but are highly valued for the same reasons. 
Among custom motorcycle culture, certain names have become famous for their creations and have led to mainstream acceptance of previously unacceptable design solutions such as extreme ergonomics, totally rigid rear wheels without the benefit of suspension, minimal lighting and limited ground clearance for cornering. These design characteristics are purely emotional in nature, being led by styling and image rather than technical or performance considerations. === "Specials" === Custom and specials motorcycles are similar to the above but tend to be super sport type motorcycles, or at least high-performance based, using many special add-on parts, one-of-a-kind or limited series frames, racing wheels and parts or hand-made components to maximise performance. While modifying motorcycles is an activity as old as the motorcycle itself, the "special" culture or "streetfighter" began to flourish in the mid-1970s as a response to the myriad high performance Japanese motorcycles then available, but whose power far exceeded their handling. Individuals would choose premanufactured parts from catalogs or from other bikes and redesign their particular machine to suit their desires. In general this activity is limited to one-of-a-kind vehicles and, as with custom motorcycles, uses very little genuine engineering or design methodology, although some small-scale manufacturers exist who make limited runs of a given model. In some cases, these tiny specialists were successful enough to grow into full-scale OEM companies such as the Buell Motorcycle Company and Bimota of Italy. == References == Cocco, G.; Motorcycle Design and Technology, 1999, Italy, ISBN 88-7911-189-2 Heskett, J.; J. Heskett, 1980, Great Britain, ISBN 0-500-20181-1 Royal College of Art, Moving Objects, Great Britain, ISBN 0-9536281-0-8 == External links == Motorcycle Design Association
Wikipedia/Motorcycle_design
Geometrical design (GD) is a branch of computational geometry. It deals with the construction and representation of free-form curves, surfaces, or volumes and is closely related to geometric modeling. Core problems are curve and surface modelling and representation. GD studies, in particular, the construction and manipulation of curves and surfaces given by a set of points using polynomial, rational, piecewise polynomial, or piecewise rational methods. The most important instruments here are parametric curves and parametric surfaces, such as Bézier curves, spline curves and surfaces. An important non-parametric approach is the level-set method. Application areas include the shipbuilding, aircraft, and automotive industries, as well as architectural design. The modern ubiquity and power of computers mean that even perfume bottles and shampoo dispensers are designed using techniques unheard of by shipbuilders of the 1960s. Geometric models can be built for objects of any dimension in any geometric space. Both 2D and 3D geometric models are extensively used in computer graphics. 2D models are important in computer typography and technical drawing. 3D models are central to computer-aided design and manufacturing, and many applied technical fields such as geology and medical image processing. Geometric models are usually distinguished from procedural and object-oriented models, which define the shape implicitly by an algorithm. They are also contrasted with digital images and volumetric models; and with mathematical models such as the zero set of an arbitrary polynomial. However, the distinction is often blurred: for instance, geometric shapes can be represented by objects; a digital image can be interpreted as a collection of colored squares; and geometric shapes such as circles are defined by implicit mathematical equations. Also, the modeling of fractal objects often requires a combination of geometric and procedural techniques. Geometric problems originating in architecture can lead to interesting research and results in geometry processing, computer-aided geometric design, and discrete differential geometry. In architecture, geometric design is associated with the pioneering explorations of Chuck Hoberman into transformational geometry as a design idiom, and applications of this design idiom within the domain of architectural geometry. == See also == Architectural geometry Computational topology CAD/CAM/CAE Digital geometry Geometric design of roads List of interactive geometry software Parametric curves Parametric surfaces Solid modeling Space partitioning Wikiversity:Topic:Computational geometry Progressive-iterative approximation method == References == == External links == Evolute Research and Consulting Computer Aided Geometric Design
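As a brief illustration of the parametric-curve methods mentioned above, the sketch below evaluates a Bézier curve by repeated linear interpolation of its control points (de Casteljau's algorithm). It is a minimal, hedged example, not production geometry code, and the variable names are illustrative.

```typescript
type Point = [number, number];

// Evaluate a Bézier curve at parameter t in [0, 1] by repeatedly taking
// linear interpolations of its control points (de Casteljau's algorithm).
function deCasteljau(controls: Point[], t: number): Point {
  let pts: Point[] = controls.map((p): Point => [p[0], p[1]]);
  while (pts.length > 1) {
    const next: Point[] = [];
    for (let i = 0; i + 1 < pts.length; i++) {
      next.push([
        (1 - t) * pts[i][0] + t * pts[i + 1][0],
        (1 - t) * pts[i][1] + t * pts[i + 1][1],
      ]);
    }
    pts = next;
  }
  return pts[0];
}

// A cubic curve is defined by four control points; sampling t gives a
// polyline suitable for display or further processing.
const cubic: Point[] = [[0, 0], [1, 2], [3, 2], [4, 0]];
const polyline = Array.from({ length: 11 }, (_, i) => deCasteljau(cubic, i / 10));
console.log(polyline[0], polyline[5], polyline[10]);
```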
Wikipedia/Geometric_design
Web design encompasses many different skills and disciplines in the production and maintenance of websites. The different areas of web design include web graphic design; user interface design (UI design); authoring, including standardised code and proprietary software; user experience design (UX design); and search engine optimization. Often many individuals will work in teams covering different aspects of the design process, although some designers will cover them all. The term "web design" is normally used to describe the design process relating to the front-end (client side) design of a website, including writing markup. Web design partially overlaps web engineering in the broader scope of web development. Web designers are expected to have an awareness of usability and be up to date with web accessibility guidelines. == History == === 1988–2001 === Although web design has a fairly recent history, it can be linked to other areas such as graphic design, user experience, and multimedia arts, but is more aptly seen from a technological standpoint. It has become a large part of people's everyday lives. It is hard to imagine the Internet without animated graphics, different styles of typography, backgrounds, videos and music. The web was announced on August 6, 1991; CERN hosted the first website to go live on the World Wide Web. During this period, websites were typically structured using the <table> tag, which let designers arrange content into rows and columns. Eventually, web designers were able to find their way around it to create more structures and formats. Early on, the structure of websites was fragile and hard to control, which made them difficult to use. In November 1993, ALIWEB (Archie-Like Indexing for the WEB) became the first search engine created for the web. ==== The start of the web and web design ==== In 1989, whilst working at CERN in Switzerland, British scientist Tim Berners-Lee proposed to create a global hypertext project, which later became known as the World Wide Web. From 1991 to 1993 the World Wide Web was born. Text-only HTML pages could be viewed using a simple line-mode web browser. In 1993, Marc Andreessen and Eric Bina created the Mosaic browser. At the time there were multiple browsers; however, the majority of them were Unix-based and naturally text-heavy. There had been no integrated approach to graphic design elements such as images or sounds. The Mosaic browser broke this mould. The W3C was created in October 1994 to "lead the World Wide Web to its full potential by developing common protocols that promote its evolution and ensure its interoperability." This discouraged any one company from monopolizing a proprietary browser and programming language, which could have altered the effect of the World Wide Web as a whole. The W3C continues to set standards, which can today be seen with JavaScript and other languages. In 1994, Andreessen formed Mosaic Communications Corp., which later became known as Netscape Communications and released the Netscape 0.9 browser. Netscape created its own HTML tags without regard to the traditional standards process. For example, Netscape 1.1 included tags for changing background colours and formatting text with tables on web pages. From 1996 to 1999 the browser wars began, as Microsoft and Netscape fought for ultimate browser dominance. During this time there were many new technologies in the field, notably Cascading Style Sheets, JavaScript, and Dynamic HTML. 
On the whole, the browser competition did lead to many positive creations and helped web design evolve at a rapid pace. ==== Evolution of web design ==== In 1996, Microsoft released its first competitive browser, which was complete with its features and HTML tags. It was also the first browser to support style sheets, which at the time was seen as an obscure authoring technique and is today an important aspect of web design. The HTML markup for tables was originally intended for displaying tabular data. However, designers quickly realized the potential of using HTML tables for creating complex, multi-column layouts that were otherwise not possible. At this time, as design and good aesthetics seemed to take precedence over good markup structure, little attention was paid to semantics and web accessibility. HTML sites were limited in their design options, even more so with earlier versions of HTML. To create complex designs, many web designers had to use complicated table structures or even use blank spacer .GIF images to stop empty table cells from collapsing. CSS was introduced in December 1996 by the W3C to support presentation and layout. This allowed HTML code to be semantic rather than both semantic and presentational and improved web accessibility, see tableless web design. In 1996, Flash (originally known as FutureSplash) was developed. At the time, the Flash content development tool was relatively simple compared to now, using basic layout and drawing tools, a limited precursor to ActionScript, and a timeline, but it enabled web designers to go beyond the point of HTML, animated GIFs and JavaScript. However, because Flash required a plug-in, many web developers avoided using it for fear of limiting their market share due to lack of compatibility. Instead, designers reverted to GIF animations (if they did not forego using motion graphics altogether) and JavaScript for widgets. But the benefits of Flash made it popular enough among specific target markets to eventually work its way to the vast majority of browsers, and powerful enough to be used to develop entire sites. ==== End of the first browser wars ==== In 1998, Netscape released Netscape Communicator code under an open-source licence, enabling thousands of developers to participate in improving the software. However, these developers decided to start a standard for the web from scratch, which guided the development of the open-source browser and soon expanded to a complete application platform. The Web Standards Project was formed and promoted browser compliance with HTML and CSS standards. Programs like Acid1, Acid2, and Acid3 were created in order to test browsers for compliance with web standards. In 2000, Internet Explorer was released for Mac, which was the first browser that fully supported HTML 4.01 and CSS 1. It was also the first browser to fully support the PNG image format. By 2001, after a campaign by Microsoft to popularize Internet Explorer, Internet Explorer had reached 96% of web browser usage share, which signified the end of the first browser wars as Internet Explorer had no real competition. === 2001–2012 === Since the start of the 21st century, the web has become more and more integrated into people's lives. As this has happened, the technology of the web has also continued to evolve. There have also been significant changes in the way people use and access the web, and this has changed how sites are designed. Since the end of the browsers wars new browsers have been released. 
Many of these are open source, meaning that they tend to have faster development and are more supportive of new standards. The new options are considered by many to be better than Microsoft's Internet Explorer. The W3C has released new standards for HTML (HTML5) and CSS (CSS3), as well as new JavaScript APIs, each as a new but individual standard. While the term HTML5 is only used to refer to the new version of HTML and some of the JavaScript APIs, it has become common to use it to refer to the entire suite of new standards (HTML5, CSS3 and JavaScript). === 2012 and later === With the advancements in 3G and LTE internet coverage, a significant portion of website traffic shifted to mobile devices. This shift influenced the web design industry, steering it towards a minimalist, lighter, and simpler style. The "mobile first" approach emerged as a result, emphasizing the creation of website designs that prioritize mobile-oriented layouts first, before adapting them to larger screen dimensions. == Tools and technologies == Web designers use a variety of different tools depending on what part of the production process they are involved in. These tools are updated over time by newer standards and software but the principles behind them remain the same. Web designers use both vector and raster graphics editors to create web-formatted imagery or design prototypes. A website can be created using WYSIWYG website builder software or a content management system, or the individual web pages can be hand-coded in just the same manner as the first web pages were created. Other tools web designers might use include markup validators and other testing tools for usability and accessibility to ensure their websites meet web accessibility guidelines. === UX Design === One popular tool in web design is UX Design. A popular modality of modern web design art, it features a user-friendly interface and appropriate presentation. == Skills and techniques == === Marketing and communication design === Marketing and communication design on a website may identify what works for its target market. This can be an age group or particular strand of culture; thus the designer may understand the trends of its audience. Designers may also understand the type of website they are designing, meaning, for example, that business-to-business (B2B) website design considerations might differ greatly from a consumer-targeted website such as a retail or entertainment website. Careful consideration might be made to ensure that the aesthetics or overall design of a site do not clash with the clarity and accuracy of the content or the ease of web navigation, especially on a B2B website. Designers may also consider the reputation of the owner or business the site is representing to make sure they are portrayed favorably. Web designers normally oversee the development of sites with respect to their functioning, often initiating changes as business needs require. They may change elements including text, photos, graphics, and layout. Before beginning work on a website, web designers normally set an appointment with their clients to discuss layout, colour, graphics, and design. Web designers spend the majority of their time designing sites and ensuring their satisfactory performance. They typically engage in testing and communication with other designers about marketing issues and the layout and composition of websites. 
=== User experience design and interactive design === User understanding of the content of a website often depends on user understanding of how the website works. This is part of the user experience design. User experience is related to layout, clear instructions, and labeling on a website. How well a user understands how they can interact on a site may also depend on the interactive design of the site. If a user perceives the usefulness of the website, they are more likely to continue using it. Users who are skilled and well versed in website use may find a more distinctive, yet less intuitive or less user-friendly website interface useful nonetheless. However, users with less experience are less likely to see the advantages or usefulness of a less intuitive website interface. This drives the trend for a more universal user experience and ease of access to accommodate as many users as possible regardless of user skill. Much of the user experience design and interactive design are considered in the user interface design. Advanced interactive functions may require plug-ins if not advanced coding language skills. Choosing whether or not to use interactivity that requires plug-ins is a critical decision in user experience design. If the plug-in doesn't come pre-installed with most browsers, there's a risk that the user will have neither the know-how nor the patience to install a plug-in just to access the content. If the function requires advanced coding language skills, it may be too costly in either time or money to code compared to the amount of enhancement the function will add to the user experience. There's also a risk that advanced interactivity may be incompatible with older browsers or hardware configurations. Publishing a function that doesn't work reliably is potentially worse for the user experience than making no attempt. It depends on the target audience if it's likely to be needed or worth any risks. === Progressive enhancement === Progressive enhancement is a strategy in web design that puts emphasis on web content first, allowing everyone to access the basic content and functionality of a web page, whilst users with additional browser features or faster Internet access receive the enhanced version instead. In practice, this means serving content through HTML and applying styling and animation through CSS to the technically possible extent, then applying further enhancements through JavaScript. Pages' text is loaded immediately through the HTML source code rather than having to wait for JavaScript to initiate and load the content subsequently, which allows content to be readable with minimum loading time and bandwidth, and through text-based browsers, and maximizes backwards compatibility. As an example, MediaWiki-based sites including Wikipedia use progressive enhancement, as they remain usable while JavaScript and even CSS is deactivated, as pages' content is included in the page's HTML source code, whereas counter-example Everipedia relies on JavaScript to load pages' content subsequently; a blank page appears with JavaScript deactivated. === Page layout === Part of the user interface design is affected by the quality of the page layout. For example, a designer may consider whether the site's page layout should remain consistent on different pages when designing the layout. Page pixel width may also be considered vital for aligning objects in the layout design. 
The most popular fixed-width websites generally have the same set width to match the current most popular browser window, at the current most popular screen resolution, on the current most popular monitor size. Most pages are also center-aligned for concerns of aesthetics on larger screens. Fluid layouts increased in popularity around 2000 to allow the browser to make user-specific layout adjustments to fluid layouts based on the details of the reader's screen (window size, font size relative to window, etc.). They grew as an alternative to HTML-table-based layouts and grid-based design in both page layout design principles and in coding technique but were very slow to be adopted. This was due to considerations of screen reading devices and varying window sizes which designers have no control over. Accordingly, a design may be broken down into units (sidebars, content blocks, embedded advertising areas, navigation areas) that are sent to the browser and which will be fitted into the display window by the browser, as best it can. Although such a display may often change the relative position of major content units, sidebars may be displaced below body text rather than to the side of it. This is a more flexible display than a hard-coded grid-based layout that doesn't fit the device window. In particular, the relative position of content blocks may change while leaving the content within the block unaffected. This also minimizes the user's need to horizontally scroll the page. Responsive web design is a newer approach, based on CSS3, and a deeper level of per-device specification within the page's style sheet through an enhanced use of the CSS @media rule. In March 2018 Google announced they would be rolling out mobile-first indexing. Sites using responsive design are well placed to ensure they meet this new approach. === Typography === Web designers may choose to limit the variety of website typefaces to only a few which are of a similar style, instead of using a wide range of typefaces or type styles. Most browsers recognize a specific number of safe fonts, which designers mainly use in order to avoid complications. Font downloading was later included in the CSS3 fonts module and has since been implemented in Safari 3.1, Opera 10, and Mozilla Firefox 3.5. This has subsequently increased interest in web typography, as well as the usage of font downloading. Most site layouts incorporate negative space to break the text up into paragraphs and also avoid center-aligned text. === Motion graphics === The page layout and user interface may also be affected by the use of motion graphics. The choice of whether or not to use motion graphics may depend on the target market for the website. Motion graphics may be expected or at least better received with an entertainment-oriented website. However, a website target audience with a more serious or formal interest (such as business, community, or government) might find animations unnecessary and distracting if only for entertainment or decoration purposes. This doesn't mean that more serious content couldn't be enhanced with animated or video presentations that is relevant to the content. In either case, motion graphic design may make the difference between more effective visuals or distracting visuals. Motion graphics that are not initiated by the site visitor can produce accessibility issues. The World Wide Web consortium accessibility standards require that site visitors be able to disable the animations. 
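A minimal sketch of the progressive-enhancement strategy and responsive techniques discussed above: the page content is assumed to be present in the HTML already, behaviour is layered on only where the browser can support it, and a script-side media query mirrors a CSS @media rule. The element id and class names are illustrative assumptions, not part of any standard.

```typescript
// The page content is already in the HTML; this script only layers behaviour
// on top, so the page stays readable even if it never runs.
const nav = document.getElementById("site-nav"); // element id is illustrative

if (nav) {
  // Enhancement: turn the always-visible list of links into a collapsible menu.
  nav.classList.add("collapsible-nav");
  const toggle = document.createElement("button");
  toggle.textContent = "Menu";
  toggle.addEventListener("click", () => nav.classList.toggle("nav-open"));
  nav.prepend(toggle);
}

// A script-side counterpart to a CSS @media rule, for a responsive adjustment.
const narrow = window.matchMedia("(max-width: 600px)");
function applyLayout(mq: MediaQueryList | MediaQueryListEvent): void {
  document.body.classList.toggle("single-column", mq.matches);
}
applyLayout(narrow);
narrow.addEventListener("change", applyLayout);
```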
=== Quality of code === Website designers may consider it to be good practice to conform to standards. This is usually done via a description specifying what the element is doing. Failure to conform to standards may not make a website unusable or error-prone, but standards can relate to the correct layout of pages for readability as well as making sure coded elements are closed appropriately. This includes errors in code, a more organized layout for code, and making sure IDs and classes are identified properly. Poorly coded pages are sometimes colloquially called tag soup. Validating via W3C can only be done when a correct DOCTYPE declaration is made, which is used to highlight errors in code. The system identifies the errors and areas that do not conform to web design standards. This information can then be corrected by the user. === Generated content === There are two ways websites are generated: statically or dynamically. ==== Static websites ==== A static website stores a unique file for every one of its pages. Each time a page is requested, the same content is returned. This content is created once, during the design of the website. It is usually manually authored, although some sites use an automated creation process, similar to a dynamic website, whose results are stored long-term as completed pages. These automatically created static sites became more popular around 2015, with generators such as Jekyll and Adobe Muse. The benefit of a static website was that it was simpler to host, as its server only needed to serve static content, not execute server-side scripts. This required less server administration and had less chance of exposing security holes. They could also serve pages more quickly, on low-cost server hardware. This advantage became less important as cheap web hosting expanded to also offer dynamic features, and virtual servers offered high performance for short intervals at low cost. Almost all websites have some static content, as supporting assets such as images and style sheets are usually static, even on a website with highly dynamic pages. ==== Dynamic websites ==== Dynamic websites are generated on the fly and use server-side technology to generate web pages. They typically extract their content from one or more back-end databases: some run queries against a relational database to look up a catalog or to summarise numeric information, and others may use a document database such as MongoDB or another NoSQL store to hold larger units of content, such as blog posts or wiki articles. In the design process, dynamic pages are often mocked-up or wireframed using static pages. The skillset needed to develop dynamic web pages is much broader than for a static page, involving server-side and database coding as well as client-side interface design. Even medium-sized dynamic projects are thus almost always a team effort. When dynamic web pages first developed, they were typically coded directly in languages such as Perl, PHP or ASP. Some of these, notably PHP and ASP, used a 'template' approach where a server-side page resembled the structure of the completed client-side page, and data was inserted into places defined by 'tags'. This was a quicker means of development than coding in a purely procedural coding language such as Perl. Both of these approaches have now been supplanted for many websites by higher-level application-focused tools such as content management systems. 
These build on top of general-purpose coding platforms and assume that a website exists to offer content according to one of several well-recognised models, such as a time-sequenced blog, a thematic magazine or news site, a wiki, or a user forum. These tools make the implementation of such a site very easy, and a purely organizational and design-based task, without requiring any coding. Editing the content itself (as well as the template page) can be done both by means of the site itself and with the use of third-party software. The ability to edit all pages is provided only to a specific category of users (for example, administrators, or registered users). In some cases, anonymous users are allowed to edit certain web content, though this is less frequent (for example, adding messages on forums). An example of a site that allows anonymous edits is Wikipedia. == Homepage design == Usability experts, including Jakob Nielsen and Kyle Soucy, have often emphasised homepage design for website success and asserted that the homepage is the most important page on a website (Nielsen, Jakob; Tahir, Marie (October 2001), Homepage Usability: 50 Websites Deconstructed, New Riders Publishing, ISBN 978-0-7357-1102-0). However, practitioners into the 2000s were starting to find that a growing amount of website traffic was bypassing the homepage, going directly to internal content pages through search engines, e-newsletters and RSS feeds. This led many practitioners to argue that homepages are less important than most people think. Jared Spool argued in 2007 that a site's homepage was actually the least important page on a website. In 2012 and 2013, carousels (also called 'sliders' and 'rotating banners') became an extremely popular design element on homepages, often used to showcase featured or recent content in a confined space. Many practitioners argue that carousels are an ineffective design element and hurt a website's search engine optimisation and usability. == Occupations == There are two primary jobs involved in creating a website: the web designer and web developer, who often work closely together on a website. Web designers are responsible for the visual aspect, which includes the layout, colouring, and typography of a web page. Web designers will also have a working knowledge of markup languages such as HTML and CSS, although the extent of their knowledge will differ from one web designer to another. Particularly in smaller organizations, one person will need the necessary skills for designing and programming the full web page, while larger organizations may have a web designer responsible for the visual aspect alone. Further jobs which may become involved in the creation of a website include: Graphic designers to create visuals for the site such as logos, layouts, and buttons Internet marketing specialists to help maintain web presence through strategic solutions on targeting viewers to the site, by using marketing and promotional techniques on the internet SEO writers to research and recommend the correct words to be incorporated into a particular website and make the website more accessible and found on numerous search engines Internet copywriters to create the written content of the page to appeal to the targeted viewers of the site User experience (UX) designers to incorporate aspects of user-focused design considerations, which include information architecture, user-centred design, user testing, interaction design, and occasionally visual design. 
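Returning to the 'template' approach mentioned under Dynamic websites above, the sketch below fills placeholder tags in a page skeleton with data at request time. It is a minimal illustration; the double-brace placeholder syntax is an assumption, not that of any particular server-side framework.

```typescript
// A server-side template resembles the finished page, with data slots marked
// by placeholder tags; rendering replaces each slot with a value.
function renderTemplate(template: string, data: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_match, key: string) => data[key] ?? "");
}

const page = "<article><h1>{{title}}</h1><p>{{body}}</p></article>";
console.log(renderTemplate(page, { title: "Hello", body: "Generated on request." }));
// -> <article><h1>Hello</h1><p>Generated on request.</p></article>
```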
== Artificial intelligence and web design == ChatGPT and other AI models are being used to write and code websites, making their creation faster and easier. There are still discussions about the ethical implications of using artificial intelligence for design as the world becomes more familiar with using AI for the time-consuming tasks involved in design processes. == See also == === Related disciplines === == Notes == == References == == External links == W3C consortium for web standards
Wikipedia/Web_design
The table below provides an overview of notable computer-aided design (CAD) software. It does not judge power, ease of use, or other user-experience aspects. The table does not include software that is still in development (beta software). For all-purpose 3D programs, see Comparison of 3D computer graphics software. CAD refers to a specific type of drawing and modelling software application that is used for creating designs and technical drawings. These can be 3D drawings or 2D drawings (like floor plans). == See also == 3D scanning CAD/CAM in the footwear industry Comparison of 3D computer graphics software Comparison of CAD, CAM, and CAE file viewers Comparison of EDA software Comparison of free software for audio List of 3D computer graphics software List of CAx companies List of computer-aided engineering software List of free and open-source software packages List of video editing software == References ==
Wikipedia/Comparison_of_computer-aided_design_software
Diffuse design refers to the designing capability of individuals who are not formally trained as designers. Drawing on the natural human ability to adopt a design approach, nonexpert designers bring diffuse design into the world via a combination of critical sense, creativity, and practical sense. Diffuse design was coined by Italian design scholar Ezio Manzini and was a central theme of his 2015 book Design, When Everybody Designs. Manzini asserts that everybody is endowed with the ability to design, though not everyone is a competent designer and fewer still become professional designers. He also suggests it is the role of expert designers in social innovation contexts to improve the conditions by which different social actors can take part in co-design processes in a more expert fashion. == References ==
Wikipedia/Diffuse_design
User experience design (UX design, UXD, UED, or XD) defines the experience a user would go through when interacting with a company, its services, and its products. User experience design is a user-centered design approach because it considers the user's experience when using a product or platform. Research, data analysis, and test results drive design decisions in UX design rather than aesthetic preferences and opinions, a practice known as UX design research. Unlike user interface design, which focuses solely on the design of a computer interface, UX design encompasses all aspects of a user's perceived experience with a product or website, such as its usability, usefulness, desirability, brand perception, and overall performance. UX design is also an element of the customer experience (CX), and encompasses the design aspects and design stages that surround a customer's experience. == History == User experience design is a conceptual design discipline rooted in human factors and ergonomics. This field, since the late 1940s, has focused on the interaction between human users, machines, and contextual environments to design systems that address the user's experience. User experience became an explicit concern for designers in the early 1990s with the proliferation of workplace computers. Don Norman, a professor and researcher in design, usability, and cognitive science, coined the term "user experience" and brought it to a wider audience. I invented the term because I thought human interface and usability were too narrow. I wanted to cover all aspects of the person's experience with the system including industrial design graphics, the interface, the physical interaction and the manual. Since then the term has spread widely, so much so that it is starting to lose its meaning. == Elements == === Research === User experience design draws from design approaches like human-computer interaction and user-centered design, and includes elements from similar disciplines like interaction design, visual design, information architecture, user research, and others. Another portion of the research is understanding the end-user and the purpose of the application. Though this might seem clear to the designer, stepping back and empathizing with the user will yield the best results. It helps to identify and prove or disprove assumptions, find commonalities across target audience members, and recognize their needs, goals, and mental models. === Visual design === Visual design, also commonly known as graphic design, user interface design, communication design, and visual communication, represents the aesthetics or look-and-feel of the front end of any user interface. Graphic treatment of interface elements is often perceived as the visual design. The purpose of visual design is to use visual elements like colors, images, and symbols to convey a message to its audience. Fundamentals of Gestalt psychology and visual perception give a cognitive perspective on how to create effective visual communication. === Information architecture === Information architecture is the art and science of structuring and organizing the information in products and services to support usability and findability. In the context of information architecture, information is separate from both knowledge and data, and lies nebulously between them. It is information about objects.
The objects can range from websites to software applications to images, among others. It is also concerned with metadata: terms used to describe and represent content objects such as documents, people, processes, and organizations. Information architecture also encompasses how the pages and navigation are structured. === Interaction design === Interaction design is widely recognized as an essential part of user experience (UX) design, centering on the interaction between users and products. The goal of interaction design is to create a product that produces an efficient and delightful end-user experience by enabling users to achieve their objectives in the best way possible. The growing emphasis on user-centered design and the strong focus on enhancing user experience have made interaction designers essential in shaping products that align with user expectations and adhere to the latest UI patterns and components. In recent years, the role of the interaction designer has shifted from simply specifying UI components and communicating them to engineers towards designing contextual interfaces that help meet the user's needs. Therefore, user experience design has evolved into a multidisciplinary design branch that involves multiple technical aspects from motion graphics design and animation to programming. === Usability === Usability is the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use. Usability applies to all tools used by humans, both digital and non-digital. It overlaps with user experience but is not wholly contained within it. The section of usability that intersects with user experience design is related to humans' ability to use a system or application. Good usability is essential to a positive user experience but does not alone guarantee it. === Accessibility === Accessibility of a system describes its ease of reach, use, and understanding. In terms of user experience design, it can also be related to the overall comprehensibility of the information and features. It helps shorten the learning curve associated with the system. Accessibility in many contexts can be related to the ease of use for people with disabilities and comes under usability. In addition, accessible design is the practice of designing services, products, or facilities so that they accommodate the needs of people with disabilities. The Web Content Accessibility Guidelines (WCAG) state that all content must adhere to the four main principles of POUR: Perceivable, Operable, Understandable, and Robust. ==== WCAG compliance ==== Web Content Accessibility Guidelines (WCAG) 2.0 covers a wide range of recommendations for making Web content more accessible. This makes web content more usable to users in general. Making content more usable and readily accessible to all types of users enhances a user's overall user experience. === Human–computer interaction === Human–computer interaction is concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them. ==== Getting ready to design ==== After research, the designer models the users and their environments. User models, or personas, are composite archetypes based on behavior patterns uncovered during research.
Personas provide designers with a precise way of thinking and communicating about how groups of users behave, how they think, what they want to accomplish and why. Once created, personas help the designer to understand the users' goals in specific contexts, which is particularly useful during ideation and for validating design concepts. Other types of models include workflow models, artifact models, and physical models. ==== Design ==== When the designer has a solid understanding of the user's needs and goals, they begin to sketch out the interaction framework (also known as wireframes). This stage defines the high-level structure of screen layouts, as well as the product's flow, behavior, and organization. There are many kinds of materials that can be involved during this iterative phase, from whiteboards to paper prototypes. As the interaction framework establishes an overall structure for product behavior, a parallel process focuses on the visual and industrial design. The visual design framework defines the experience attributes, visual language, and the visual style. Once a solid and stable framework is established, wireframes are translated from sketched storyboards to full-resolution screens that depict the user interface at the pixel level. At this point, it is critical for the programming team to collaborate closely with the designer. Their input is necessary to create a finished design that can and will be built while remaining true to the concept. ==== Test and iterate ==== Usability testing is carried out by giving users various tasks to perform on the prototypes. Any issues or problems faced by the users are collected as field notes, and these notes are used to make changes in the design and reiterate the testing phase. Aside from monitoring issues, questions asked by users are also noted in order to identify potential points of confusion. Usability testing is, at its core, a means to "evaluate, not create". == UX deliverables == UX designers perform a number of different tasks and, therefore, use a range of deliverables to communicate their design ideas and research findings to stakeholders. The UX specification documents that are required depend on the client or the organization involved in designing a product. The four major deliverables are: a title page, an introduction to the feature, wireframes, and a version history. Depending on the type of project, the specification documents can also include flow models, cultural models, personas, user stories, scenarios, and any prior user research. The deliverables that UX designers will produce as part of their job include wireframes, prototypes, user flow diagrams, specification and tech docs, websites and applications, mockups, presentations, personas, user profiles, videos, and, to a lesser degree, reports. Documenting design decisions, in the form of annotated wireframes, gives the developer the necessary information they may need to successfully code the project. === After launching a project === Requires: User testing/usability testing A/B testing (see the sketch below) Information architecture Sitemaps and user flows Additional wireframing as a result of test results and fine-tuning == UX stakeholders == A user experience designer is considered a UX practitioner, along with the following job titles: user experience researcher, information architect, interaction designer, human factors engineer, business analyst, consultant, creative director, interaction architect, and usability specialist.
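A/B testing, listed above among post-launch activities and discussed further in the testing section below, is typically analysed by comparing conversion rates between two design variants. The following is a minimal sketch of such an analysis using a two-proportion z-test; the sample numbers are invented for illustration, and any real study would need proper experimental design and sample-size planning.

```python
import math

def two_proportion_z(successes_a: int, n_a: int, successes_b: int, n_b: int) -> float:
    """Z-statistic comparing the conversion rates of design variants A and B."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)       # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: 120/2400 conversions for design A, 156/2400 for design B.
z = two_proportion_z(120, 2400, 156, 2400)
print(f"z = {z:.2f}")   # |z| > 1.96 suggests a significant difference at the 5% level
```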
=== Interaction designers === Interaction designers (IxD) are responsible for understanding and specifying how the product should behave. This work overlaps with the work of both visual and industrial designers in a couple of important ways. When designing physical products, interaction designers must work with industrial designers early on to specify the requirements for physical inputs and to understand the behavioral impacts of the mechanisms behind them. Interaction designers cross paths with visual designers throughout the project. Visual designers guide the discussions of the brand and emotive aspects of the experience, while interaction designers communicate the priority of information, flow, and functionality in the interface. === Technical communicators === Historically, technical and professional communication (TPC) has been an industry that practices writing and communication. However, recently UX design has become more prominent in TPC as companies look to develop content for a wide range of audiences and experiences. It is now an expectation that technical and professional skills should be coupled with UX design. According to Verhulsdonck, Howard, and Tham, "...it is not enough to write good content. According to industry expectations, next to writing good content, it is now also crucial to design good experiences around that content." Technical communicators must now consider different platforms such as social media and apps, as well as different channels like web and mobile. ==== UX writers ==== In a similar manner, coupling TPC with UX design allows technical communicators to garner evidence on target audiences. UX writers, a branch of technical communicators, specialize in crafting content for mobile platforms while executing a user-centered approach. UX writers focus on developing content to guide users through interfaces, applications, and websites. Their responsibilities include maintaining UI text, conducting user research for usability testing, and developing the tone for a product's communication. UX writers maintain the practices of technical communicators by developing documentation that establishes consistency in terminology and tone, promoting a cohesive user experience. Beyond the writing, UX writers maintain UI text by ensuring that microcopy, such as button labels, error messages, and tooltips, remains user-friendly. In doing this, the writers are also tasked with ensuring accessibility, considering issues like screen reader compatibility or providing alternatives for non-text elements such as icons. UX writers conduct extensive research to understand the behaviors and preferences of the target audience through user testing and feedback analysis. These methods of research can include user persona creation and user surveys. Lastly, when setting the tone in a product's communication, UX writers highlight factors that affect user engagement and perception. In short, the writers consider the product's emotional impact on the users and align the tone with the brand's personality. Within the field of UX design, UX writers bridge the gaps between various fields to create a cohesive and user-centric experience. Their expertise in language and communication works to unify design, development, and user research teams by ensuring that the user interface's content aligns with the broader objectives of the product or service.
By focusing on clarity, consistency, and empathy, UX writers contribute to the integration of design elements, technical functionality, and user preferences, while following a design process that ensures products behave intuitively, accessibly, and responsively to user needs. === User interface designers === User interface (UI) design is the process of making interfaces in software or computerized devices with a focus on looks or style. Designers aim to create designs users will find easy to use and pleasurable. UI design typically refers to graphical user interfaces but also includes others, such as voice-controlled ones. === Visual designers === The visual designer ensures that the visual representation of the design effectively communicates the data and hints at the expected behavior of the product. At the same time, the visual designer is responsible for conveying the brand ideals in the product and for creating a positive first impression; this responsibility is shared with the industrial designer if the product involves hardware. In essence, a visual designer must aim for maximum usability combined with maximum desirability. A visual designer need not have strong artistic skills but must deliver the theme in a desirable manner. == UX design testing == Usability testing is the most common method designers use to test their designs. The basic idea behind conducting a usability test is to check whether the design of a product or brand works well with the target users. Usability testing is about testing whether the product's design is successful and, if not, how it can be improved. While designers conduct tests, they are not testing for the user but for the design. Further, every design is evolving, with both UX design and design thinking moving in the direction of Agile software development. Designers carry out usability testing as early and as often as possible, ensuring that every aspect of the final product has been tested. Usability tests play an important role in the delivery of a cohesive final product; however, a variety of factors influence the testing process. Evaluating qualitative and quantitative methods provides an adequate picture of UX designs, and one of these quantitative methods is A/B testing (see Usability testing). Another key concept in the efficacy of UX design testing is the idea of a persona, or the representation of the most common user of a certain website or program, and how these personas would interact with the design in question. At the core of UX design usability testing is the user; however, steps toward automating design testing have been made, with Micron developing the Advanced Test Environment (ATE), which automates UX tests on Android-powered smartphones. While quantitative software tools that collect actionable data, such as loggers and mobile agents, provide insight into a user's experience, the qualitative responses that arise from live, user-based UX design testing are lost. The ATE serves to simulate a device's movement, which affects orientation and sensor operation, in order to estimate the actual experience of the user based on previously collected user testing data. == See also == Action research Activity-centered design Agile software development Attentive user interface Customer experience Design thinking Paper prototyping Participatory design Process-centered design User advocacy User experience User experience evaluation == References == == Further reading == Buxton, Bill (2010).
Sketching User Experiences: Getting the Design Right and the Right Design. Elsevier Science. pp. 436. ISBN 978-0-12-374037-3. Cooper, Alan (1999). The Inmates Are Running the Asylum: Why High-Tech Products Drive Us Crazy and How to Restore the Sanity. p. 261. ISBN 978-0-672-31649-4. Cooper, Alan; Reimann, Robert; Cronin, David; Noessel, Christopher (2014). About Face: The Essentials of Interaction Design (4th ed.). John Wiley & Sons. ISBN 978-1-118-76657-6. Curedale, Robert (2018). Mapping Methods 2: Step-by-step guide Experience Maps Journey Maps Service Blueprints Affinity Diagrams Empathy Maps Business Model Canvas (2nd ed.). Design Community College Incorporated. ISBN 978-1-940805-37-5. Moggridge, Bill (2006). Designing Interactions. MIT Press. pp. 766. ISBN 978-0-262-13474-3. Moser, Christian (2008). User Experience Design: Mit erlebniszentrierter Softwareentwicklung zu Produkten, die begeistern. Springer. p. 252. ISBN 978-3-642-13362-6. Norman, Donald (2013). The Design of Everyday Things. p. 351. ISBN 978-0-465-06710-7. Tidwell, Jenifer (2005). Designing Interfaces. p. 332. ISBN 978-1-4493-7970-4.
Wikipedia/Experience_design
The Design Quality Indicator (DQI) is a toolkit to measure, evaluate and improve the design quality of buildings. Development of DQI was started in the United Kingdom by the Construction Industry Council (CIC) in 1999. It was initiated in response to the success of Key Performance Indicators devised by the construction industry's Movement for Innovation (M4I) for assessing construction process issues such as timely completion, financial control and safety on site. The aim of the DQI system was to ensure that the M4I's indicators of construction process were balanced by an assessment of the building as a product. The Science Policy Research Unit at the University of Sussex was commissioned to develop the indicator tool, which was launched as an online resource on 1 October 2003. In 2004 the DQI received recognition from the British Institute of Facilities Management for its role in involving users in the design process. The DQI tool was made available to users in the United States in 2006, and an online American version was launched on 20 October 2008. Unlike its forerunner, the Housing Quality Indicator (HQI) system devised for the UK's Department of the Environment, Transport and the Regions (DETR) by the consultancy DEGW and published on open access in February 1999, the DQI system could be used only by approved facilitators. The criteria and the method of assessment, which, though unacknowledged, is a simple form of multi-attribute utility analysis, remained inaccessible to design teams and their clients unless they employed a facilitator licensed to use it. Guidance on using the HQI system can be found on the government website. The DQI version for hospitals is also on open access on the national archive. == Conceptual framework == DQI applies a structured approach to assess design quality based on the model by the architect Vitruvius, the Roman author of the earliest surviving theoretical treatise on building in Western culture, who described design in terms of utilitas, firmitas and venustas, often translated as commodity, firmness and delight. DQI uses a modern-day interpretation of these terms as: Functionality (utilitas) – the arrangement, quality and interrelationship of spaces and how the building is designed to be useful to all. Build Quality (firmitas) – the engineering performance of the building, which includes structural stability and the integration, safety and robustness of the systems, finishes and fittings. Impact (venustas) – the building's ability to create a sense of place and have a positive effect on the local community and environment. == Methodology == DQI is completed by a range of stakeholders in the briefing and design stages of a building project, or on a completed building. Stakeholders who participate include: Architects Building users (or potential users) Building clients Facilities managers (or future facilities managers) Project managers Quantity surveyors (cost engineers) Structural and building services engineers DQI is applied in a facilitated workshop that is led by a certified DQI facilitator. == Models and related approaches == There are three models of design quality indicator: DQI which is applicable to all building types DQI for schools which is applicable to school buildings. This model of DQI is being used on all current school projects in the UK and forms part of the Department for Children, Schools and Families 'Minimum Design Standard' for new school buildings.
DQI for health buildings which was released in beta format in June 2012 on the DQI website. == References == == Other references == Whyte, J and Gann, D (2003), Design Quality Indicators: work in progress: Building Research and Information, London: Spon Press. doi:10.1080/0961321032000107537 Markus, T. (2003), Lessons from the Design Quality Indicator: Building Research and Information, London: Spon Press. doi:10.1080/0961321032000088016 Thomson et al. (2003), Managing value and quality in design: Building Research and Information, London: Spon Press. doi:10.1080/0961321032000087981 Prasad, S. (2004) 'Inclusive maps', in Designing Better Buildings: quality and value in the built environment edited by Macmillan, S. London: Spon Press ISBN 0-415-31525-5 Dickson, M. (2004) 'Achieving quality in building design by intention', in Designing Better Buildings: quality and value in the built environment edited by Macmillan, S. London: Spon Press ISBN 0-415-31525-5 Whyte, J Gann, D and Salter, A (2004) 'Building indicators of design quality', in Designing Better Buildings: quality and value in the built environment edited by Macmillan, S. London: Spon Press ISBN 0-415-31525-5 Prasad, S. (2004), Clarifying intentions: the design quality indicator: Building Research and Information, London: Spon Press. doi:10.1080/0961321042000312376 Cole, R. (2005), Building environmental assessment methods: redefining intentions and roles: Building Research and Information, London: Spon Press. doi:10.1080/09613210500219063 Kaatz, E., Root, D. and Bowen, P (2005), Broadening project participation through a modified building sustainability assessment: Building Research & Information, London: Spon Press. doi:10.1080/09613210500219113 Commission for Architecture and the Built Environment (2009), Case study: International Digital Laboratory, University of Warwick, Coventry Commission for Architecture and the Built Environment (2009), Case study: Maples Respite Centre, Harlow, Essex Commission for Architecture and the Built Environment (2009), Case study: St Nicholas Church of England Primary School, Essex Commission for Architecture and the Built Environment (2009), Case study: British Library Centre for Conservation, London Commission for Architecture and the Built Environment (2009), Case study: Frederick Bremer School, Waltham Forest London
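As a rough illustration of the multi-attribute scoring implied by the DQI's three Vitruvian criteria above, the sketch below combines weighted stakeholder ratings for Functionality, Build Quality and Impact into a single score. The weights, the rating scale and the aggregation rule are invented for the example and are not the licensed DQI method, which remains proprietary.

```python
# Hedged sketch of a simple multi-attribute utility score over the three
# DQI criteria. Weights, scale and aggregation are illustrative assumptions,
# not the proprietary DQI scoring method.

CRITERIA_WEIGHTS = {"functionality": 0.4, "build_quality": 0.3, "impact": 0.3}

def dqi_style_score(ratings: dict) -> float:
    """Weighted average of mean stakeholder ratings (1 = poor, 6 = excellent)."""
    score = 0.0
    for criterion, weight in CRITERIA_WEIGHTS.items():
        values = ratings[criterion]
        score += weight * (sum(values) / len(values))
    return score

# Hypothetical workshop ratings from three stakeholders.
example = {
    "functionality": [5, 4, 5],
    "build_quality": [4, 4, 3],
    "impact": [5, 5, 4],
}
print(f"overall score: {dqi_style_score(example):.2f} out of 6")
```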
Wikipedia/Design_quality_indicator
A design review is a milestone within a product development process whereby a design is evaluated against its requirements in order to verify the outcomes of previous activities and identify issues before committing to further work, re-prioritising that work if need be. The ultimate design review, if successful, therefore triggers the product launch or product release. Design reviews are compulsory as part of design controls when developing products in certain regulated contexts, such as medical devices. By definition, a review must include persons who are external to the design team. == Contents of a design review == In order to evaluate a design against its requirements, a number of means may be considered, such as: Physical tests. Engineering simulations. Examinations (Walk-through). == Timing of design reviews == Most formalised systems engineering processes recognise that the cost of correcting a fault increases as it progresses through the development process. Additional effort spent in the early stages of development to discover and correct errors is therefore likely to be worthwhile. Design reviews are an example of such an effort. Therefore, a number of design reviews may be carried out, for example to evaluate the design against different sets of criteria (consistency, usability, ease of localisation, environmental impact) or during various stages of the design process. == See also == Design review (U.S. government) Hazard and operability study == References ==
Wikipedia/Design_review
Textile design, also known as textile geometry, is the creative and technical process by which thread or yarn fibers are interlaced to form a piece of cloth or fabric, which is subsequently printed upon or otherwise adorned. Textile design is further broken down into three major disciplines: printed textile design, woven textile design, and mixed media textile design. Each uses different methods to produce a fabric for variable uses and markets. Textile design as an industry is involved in other disciplines such as fashion, interior design, and fine arts. == Overview == Articles produced using textile design include clothing, carpets, drapes, and towels. Textile design requires an understanding of the technical aspects of the production process, as well as the properties of numerous fibers, yarns, and dyes. == Textile design disciplines == === Printed textile design === Printed textile designs are created by using various printing techniques on fabric, cloth, and other materials. Printed textile designers are mainly involved in designing patterns for home interior products like carpets, wallpapers, and ceramics. They also work in the fashion and clothing industries, the paper industry, and in designing stationery and gift wrap. There are numerous established printed styles and designs that can be broken down into four major categories: floral, geometric, world cultures, and conversational. Floral designs include flowers, plants, or other botanical elements. Geometric designs feature elements, both inorganic and abstract, such as tessellations. World culture designs may be traced to a specific geographic, ethnic, or anthropological source. Finally, conversational designs are designs that fit less easily into the other categories; they may be described as presenting "imagery that references popular icons of a particular period or season, or which is unique and challenges our perceptions in some way." Each category contains subcategories, which include more specific individual styles and designs. Moreover, different fabrics require different types of dye: protein-based fabrics like silk and wool require acid dyes, whereas synthetic fabrics require specialized disperse dyes. The advent of computer-aided design software, such as Adobe Photoshop and Illustrator, has allowed each discipline of textile design to evolve and innovate new practices and processes but has most influenced the production of printed textile designs. Digital tools have influenced the process of creating repeating patterns or motifs, known as repeats. Repeats are used to create patterns both visible and invisible to the eye: geometric patterns are intended to depict clear, intentional patterns, whereas floral or organic designs are intended to create unbroken repeats that are ideally undetectable. Digital tools have also aided in making patterns by decreasing the amount of an effect known as "tracking", in which the eye is inadvertently drawn to parts of textiles that expose the discontinuity of the textile and reveal its pattern. These tools, alongside the innovation of digital inkjet printing, have allowed the textile printing process to become faster, more scalable, and more sustainable. === Woven textile design === Woven textile design originates from the practice of weaving, which produces fabric by interlacing a vertical yarn (warp) and a horizontal yarn (weft), most often at right angles.
Woven textile designs are created by various types of looms and are now predominantly produced using a mechanized or computerized jacquard loom. Designs within the context of weaving are created using various types of yarns, using variance in texture, size, and color to construct a stylized patterned or monochromatic fabric. There is a large range of yarn types available to the designer, including but not limited to cotton, wool, linen, and synthetic fibers. To produce the woven fabric, the designer first delineates and visualizes the sequence of threading, which is traditionally drawn out on graph paper known as point paper. The designer will also choose a weave structure that governs the aesthetic design that will be produced. The most common process is a plain weave, in which the yarns interlace in an alternating, tight formation, producing a strong and flexible multi-use fabric. Twill weaves, which are also common, alternatively use diagonal lines created by floating the warp or the weft to the left or the right. This process creates a softer fabric favored by designers in the fashion and clothing design industries. Common, recognizable twill styles include patterns like Houndstooth or Herringbone. Beyond weave structure, color is another dominant aspect in woven textile design. Typically, designers choose two or more contrasting colors that will be woven into patterns based on a chosen threading sequence. Color is also dependent on the size of the yarn: fine yarns will produce a fabric that may change colors when it receives light from different angles, whereas larger yarns will generally produce a more monochromatic surface. === Mixed media textile design === Mixed media textile designs are produced using embroidery or other various fabric manipulation processes such as pleating, appliqué, quilting, and laser cutting. Embroidery is traditionally performed by hand, applying myriad stitches of thread to construct designs and patterns on the textile surface. Similar to printed textile design, embroidery affords the designer artistic and aesthetic control. Typical stitches include but are not limited to the cross stitch, the chain stitch, and couching. Although industrial and mechanized embroidery has become the standard, hand stitching still remains a fixture for fine arts textiles. Quilting is traditionally used to enhance the insulation and warmth of a textile. It also provides the designer with the opportunity to apply aesthetic properties. Most commonly, quilts feature geometric and collage designs formed from various textiles of different textures and colors. Quilting also frequently employs the use of recycled scrap or heirloom fabrics. Quilts are also often used as a medium for an artist to depict a personal or communal narrative: for example, the Hmong people have a tradition of creating story quilts or cloths illustrating their experiences with immigration to the United States from Eastern and South-eastern Asia. == Environmental impact == The practice and industry of textile design present environmental concerns. From the production of cloth from raw material to dyeing and finishing, and the ultimate disposal of products, each step of the process produces environmental impacts. These impacts have been further exacerbated by the emergence of fast fashion and other modern industrial practices. Predominantly, these environmental impacts stem from the heavy use of hazardous chemicals involved in the textile creation process, which must be properly disposed of.
Other considerations involve the amount of waste created by the disposal of textile design products and the reclamation and reuse of recyclable textiles. The Environmental Protection Agency reported that over 15 million tons of textile waste is created annually, constituting some 5% of all municipal waste generated. Only 15% of that waste is recovered and reused. The existence and awareness of the negative environmental impacts of textile production have resulted in the emergence of new technologies and practices. Textile designs involving the use of synthetic dyes and materials can result in harmful effects on the environment. This has caused a shift towards using natural dyes or materials and research towards other mediums that result in less harm to the environment. This research includes testing new ways to collect natural resources and how these natural resources work with other materials. Electronic textiles involve items of clothing with electronic devices or technology woven into the fabric, such as heaters, lights, or sensors. These textiles can potentially have additional harmful environmental effects, such as producing electronic waste. Because of this, these textiles are often made by manufacturers with sustainability in mind. These new approaches to textile design attempt to lessen the negative environmental impact of these textiles. These concerns have led to the birth of sustainable textile design movements and the practice of ecological design within the field. For instance, London's Royal Society of the Arts hosts design competitions that compel all entrants to center their design and manufacturing methods around sustainable practices and materials. == Textile design in different cultures == Textile patterns, designs, weaving methods, and cultural significance vary across the world. African countries use textiles as a form of cultural expression and way of life. They use textiles to liven up the interior of a space or accentuate and decorate the body of an individual. The textile designs of African cultures involve the process of strip-woven fibers that can repeat a pattern or vary from strip to strip. == History == The history of textile design dates back thousands of years. Due to the decomposition of textile fibers, early examples of textile design are rare. However, some of the oldest known and preserved examples of textiles were discovered in the form of nets and basketry, dating from Neolithic cultures in 5000 BCE. When trade networks formed in European countries, textiles like silk, wool, cotton, and flax fibers became valuable commodities. Many early cultures including Egyptian, Chinese, African, and Peruvian practiced early weaving techniques. One of the oldest examples of textile design was found in an ancient Siberian tomb in 1947. The tomb was said to be that of a prince and to date back to around 464 BC, making the tomb and all of its contents roughly 2,500 years old. The rug, known as the Pazyryk rug, was preserved in ice and is detailed with elaborate designs of deer and men riding on horseback. The designs are similar to those of present-day Anatolian and Persian rugs, which apply the symmetrical Ghiordes knot in their weaving. The Pazyryk rug is currently displayed at the Hermitage Museum located in St. Petersburg, Russia. == See also == Clothing technology Fashion design Textile manufacturing == References == == Further reading == Jackson, Lesley. Twentieth-Century Pattern Design, Princeton Architectural Press, New York, 2002. ISBN 1-56898-333-6 Jackson, Lesley.
Shirley Craven and Hull Traders: Revolutionary Fabrics and Furniture 1957-1980, ACC Editions, 2009, ISBN 1-85149-608-4 Jenkins, David, ed. The Cambridge History of Western Textiles, Cambridge, UK: Cambridge University Press, 2003, ISBN 0-521-34107-8 Kadolph, Sara J., ed. Textiles, 10th edition, Pearson/Prentice-Hall, 2007, ISBN 0-13-118769-4 Miraftab, M., and A. R. Horrocks. Ecotextiles: The Way Forward for Sustainable Development in Textiles. Burlington: Elsevier Science, 2007. Print. Schevill, Margot. Evolution in Textile Design from the Highlands of Guatemala: Seventeen Male Tzutes, or Headdresses, from Chichicastenango in the Collections of the Lowie Museum of Anthropology, University of California, Berkeley. Berkeley, Calif: Lowie Museum of Anthropology, University of California, Berkeley, 1985. Print. Robinson, Stuart. A History of Printed Textiles: Block, Roller, Screen, Design, Dyes, Fibers, Discharge, Resist, Further Sources for Research. London: Studio Vista, 1969. Print. Speelberg, Femke. "Fashion & Virtue: Textile Patterns and the Print Revolution, 1520–1620". Metropolitan Museum of Art Bulletin. New York: Metropolitan Museum of Art, 2015. Print. Perivoliotis, Margaret C. "The Role of Textile History in Design Innovation: A Case Study Using Hellenic Textile History". Textile History 36.1 (2005): 1–19. Web. Grömer, Karina. The Art of Prehistoric Textile Making. Naturhistorisches Museum Wien, 2016. Web. Hopkins, H., Kania, K., & European Textile Forum, eds. (2019). Ancient Textiles, Modern Science II. Siennicka, M., Rahmstorf, L., & Ulanowska, A., eds. (2018). First Textiles: The Beginnings of Textile Manufacture in Europe and the Mediterranean: Proceedings of the EAA Session held in Istanbul (2014) and the 'First Textiles' Conference in Copenhagen (2015). Whewell, Charles S. and Abrahart, Edward Noah. "Textile". Encyclopædia Britannica, 4 Jun. 2020, https://www.britannica.com/topic/textile. Accessed 7 March 2021. Gesimondo, Nancy and Postell, Jim. "Materiality and Interior Construction". John Wiley & Sons, 2011, ISBN 978-0-470-44544-0
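The weave structures described above (plain weave's strict alternation and twill's diagonally offset floats) can be expressed as a simple interlacement grid, which is roughly the kind of information point paper records. The sketch below is an illustrative simplification rather than a loom-draft format used by any particular software: it marks with 'X' the cells where the warp passes over the weft.

```python
# Illustrative sketch of two basic weave structures as interlacement grids.
# 'X' marks warp over weft, '.' marks weft over warp. A real loom draft
# (threading, tie-up, treadling) carries more information than this.

def plain_weave(rows: int, cols: int) -> list:
    """Plain weave: strict alternation in both directions."""
    return ["".join("X" if (r + c) % 2 == 0 else "." for c in range(cols))
            for r in range(rows)]

def twill_weave(rows: int, cols: int, float_len: int = 2) -> list:
    """Simple twill: floats of `float_len` warp ends, offset by one each row,
    producing the diagonal line characteristic of twill."""
    period = float_len + 1
    return ["".join("X" if (c - r) % period < float_len else "." for c in range(cols))
            for r in range(rows)]

if __name__ == "__main__":
    print("plain weave:")
    print("\n".join(plain_weave(4, 8)))
    print("2/1 twill:")
    print("\n".join(twill_weave(4, 8)))
```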
Wikipedia/Textile_design
Integrated topside design is a design approach used by military ship and ship equipment designers to overcome the challenges of effectively operating shipboard antenna systems and equipment susceptible to electromagnetic fields in the high electromagnetic environment of a warship's topside. The approach primarily uses the well-understood physics of electromagnetism to simulate the topside environment before the equipment is physically tested. Advances in ship design that accommodate ever more high-power antenna systems and growing numbers of parasitic re-radiating metallic structures (such as cranes and masts), together with a requirement for more sensitive sensors for littoral operations, have led to a need for greater consideration of how equipment will operate before a ship is built or equipment is deployed. Whilst this can be done post-deployment using measurement teams, using a modelling and simulation approach early in the ship design is more cost-effective than making corrections after the ship is built, and so is the preferred option for several of the world's advanced navies. == References == Lubben, J, "MilCis Integrated Top Side Design". October 2007 Stanley, J, "Warship Electromagnetic Modelling". June 2010
Wikipedia/Integrated_topside_design
The slow movement is a cultural initiative that advocates for a reduction in the pace of modern life, encouraging individuals to embrace a more thoughtful and deliberate approach to their daily activities. It was an offshoot of the slow food movement, which began as a protest led by Carlo Petrini in 1986 against the opening of a McDonald's restaurant in Rome's Piazza di Spagna. The key ideas of the slow movement include prioritizing quality over quantity, savoring the present moment, and fostering connections with people and the environment. It encourages a more intentional approach to daily activities, promoting sustainable practices and mindfulness. The movement spans various domains such as food, cities, education, fashion, and more, advocating for a balanced and holistic lifestyle that resists the fast-paced demands of modern society. Initiatives linked to this movement include the Cittaslow organization, which promotes slowness in cities, most notably Naples, Paris, and Rome; other examples include car-free days and bans on Vespas to reduce urban noise. == Origins == The slow movement is a cultural movement which advocates slowing down the pace of human life. It emerged from the slow food movement and Carlo Petrini's 1986 protest against the opening of a McDonald's restaurant in the Piazza di Spagna, Rome. Geir Berthelsen's The World Institute of Slowness presented a vision in 1999 for a "slow planet". In Carl Honoré's 2004 book, In Praise of Slow, he describes the slow movement as: "a cultural revolution against the notion that faster is always better. The Slow philosophy is not about doing everything at a snail's pace. It's about seeking to do everything at the right speed. Savoring the hours and minutes rather than just counting them. Doing everything as well as possible, instead of as fast as possible. It's about quality over quantity in everything from work to food to parenting." Norwegian professor Guttorm Fløistad summarises the philosophy, stating: "The only thing for certain is that everything changes. The rate of change increases. If you want to hang on, you better speed up. That is the message of today. It could however be useful to remind everyone that our basic needs never change. The need to be seen and appreciated! It is the need to belong. The need for nearness and care, and for a little love! This is given only through slowness in human relations. In order to master changes, we have to recover slowness, reflection and togetherness. There we will find real renewal." == Beliefs == === Art === Slow Art Day was founded by Phil Terry and officially launched in 2009. During one day in April each year, museums and art galleries around the world host events focused on intentionally experiencing art through "slow looking". The movement aims to help people discover the joy of looking at art, typically through observing a painting or sculpture for 10–15 minutes, often followed by discussion. The Slow Art Day team publishes an Annual Report each year on its website, which features a range of events hosted by art institutions. === Ageing === Slow ageing (or slow aging) is a distinct approach to successful ageing, advocating a personal and holistic positive approach to the process of ageing. Established as part of the broader slow movement in the 1980s, and in contrast to the interventionist, commercially backed medical anti-aging system, it involves personal ownership and non-medical intervention options in pursuing potential natural life extension.
=== Cinema === Slow cinema is a cinematography style which derives from the art film genre. It aims to convey a sense of calculated slowness to the viewer. Slow films often feature a resistance to movement and emotion, a lack of causality, and a devotion to realism. This is usually achieved through the use of long takes, minimalist acting, slow or nonexistent camera movements, unconventional use of music, and sparse editing. Slow cinema directors include Béla Tarr, Lav Diaz, Nuri Bilge Ceylan, Abbas Kiarostami, Tsai Ming-liang, Andrei Tarkovsky and Theo Angelopoulos. === Cittaslow === Cittaslow International states its mission as "to enlarge the philosophy of Slow Food to local communities and to government of towns, applying the concepts of ecogastronomy at practice of everyday life". It seeks to improve the quality and enjoyment of living by encouraging happiness and self-determination. Cittaslow cities engage with globalization on their own terms, seeking to prevent it from eroding their local character. Lisa Servon and Sarah Pink observe that, "The case of the Spanish Cittaslow towns offers a particular example of how towns can actively exploit the interpenetration of the global and the local. In these towns, a local–global relationship has emerged in ways that enable controlled development and the maintenance of local uniqueness." === Consumption === Tim Cooper, author of Longer Lasting Products, is a strong advocate of "slow consumption", and is quoted as saying, "The issue to address is what kind of economy is going to be sustainable in its wider sense, economically, environmentally and socially." Saul Griffith introduced "heirloom design" at the Greener Gadgets conference in February 2009. He cites lasting design, repairability, and the option of modernization as features that support slow consumption. Legislation, alternative options, and consumer pressure can encourage manufacturers to design items in a more heirloom fashion. === Counseling === According to some, recent technological advances have resulted in a fast-paced style of living. Slow counselors understand that many clients are seeking ways to reduce stress and cultivate a more balanced approach to life. Developed by Dr. Randy Astramovich and Dr. Wendy Hoskins and rooted in the slow movement, slow counseling offers counselors a wellness-focused foundation for addressing the time urgency and stress often reported by clients. === Conversation === According to Fast Company: "An unhurried conversation uses a simple process to allow people to take turns to speak without being interrupted. Everyone agrees at the start that only the person holding a chosen object (often a sugar bowl) is allowed to talk. Once the speaker has finished, they put the object down, signalling that they have said what they want to say. Someone else then picks up the object and takes their turn. Each speaker can respond to some or all of what the previous speaker said, or they can take the conversation in an entirely new direction." Unhurried Conversations is a term used by Johnnie Moore, author of Unhurried at Work, to describe how people can work together at a speed that makes the most of their human qualities. === Democracy === Slow democracy describes local governance models that are inclusive, empowered, and centered on deliberative democracy.
Described by Susan Clark and Woden Teachout in their book Slow Democracy, the concept parallels the Slow Food movement’s call for authenticity in food production, and highlights decision-making models based on authentic community involvement. Clark and Teachout note: “Slow democracy is not a call for longer meetings or more time between decisions. Instead, it is a reminder of the care needed for full-blooded, empowered community decision making.” Examples of slow democracy include: Participatory Budgeting; the Swiss and New England (U.S.) town meeting; Dialogue to Change and Study Circles processes when connected with democratic action, such as the Portsmouth, New Hampshire “Portsmouth Listens” model; and many other participatory democracy models. The National Coalition for Dialogue and Deliberation serves as a network for many scholars and practitioners of slow democracy. Slow democracy inspired the Living Room Conversations organization co-founded by Joan Blades, because slowing down to consider how we characterize “the other” is crucial to democratic engagement and to peacebuilding. Harvard Law School professor Lawrence Lessig writes that, like slow food, slow democracy is: “a strategy for resisting what we know would be most tempting but what we have learned is both empty and harmful. … [T]he slow democracy movement says that we should do politics in particular contexts, not because those contexts can’t be hacked or will never be poisonous, but because it’s just harder to hack them or make them poisonous.” Scholars of dialogue and deliberation have expressed concern that increased online and face-to-face communication can lead to information overload, but incorporating slow democracy processes featuring listening and reflection can improve the experience. Proponents of community-led housing cite slow democracy as integral to their place-specific development efforts. === Education === As an alternative approach to modern faster styles of reading, such as speed reading, the concept of slow reading has been reintroduced as an educational branch of the slow movement. For instance, the method of slow reading known as Lectio, now known as Lectio Divina, has become a way of reading that encourages more in-depth analysis and a greater understanding of the text being read. Though the method is originally of Christian monastic origin, and has been used primarily as a tool to better understand the Bible, its technique can be applied in other areas of education besides the study of theology. === Fashion === The term slow fashion was coined by Kate Fletcher in 2007 (Centre for Sustainable Fashion, UK). "Slow fashion is not a seasonal trend that comes and goes like animal print, but a sustainable fashion movement that is gaining momentum." The slow fashion style is based on the same principles as the slow food movement, as the alternative to mass-produced clothing (also known as fast fashion). Initially, the slow clothing movement was intended to reject all mass-produced clothing, referring only to clothing made by hand, but it has broadened to include many interpretations and is practiced in various ways. Functional and fashion novelty drives consumers to replace their items faster, increasing imports of fashion goods into the United States. The Economic Policy Institute reported that in 2007 the U.S. imported six billion dollars' worth of fashion articles.
Some examples of slow fashion practices include: Opposing and boycotting mass-produced "fast fashion" or "McFashion" Choosing artisan products to support smaller businesses, fair trade and locally-made clothes Buying secondhand or vintage clothing, and donating unwanted garments Choosing clothing made with sustainable, ethically made or recycled fabrics Choosing quality garments that will last longer, transcend trends (a "classic" style), and be repairable Doing it yourself: making, mending, customising, altering, and up-cycling one's own clothing Slowing the rate of fashion consumption: buying fewer clothes less often The slow fashion ethos is related to the "sustainable", "eco", "green", and "ethical" fashion movements. It encourages education about the garment industry's connection with and impact on the environment, including the depletion of resources; a slowing of the supply chain to reduce the number of trends and seasons and to encourage quality production; and a return of greater value to garments, countering the image of fashion as disposable. Hazel Clark states there are "three lines of reflection: the valuing of local resources and distributed economies; transparent production systems with less intermediation between producer and consumer, and sustainable and sensorial products ..." === Food === As opposed to the culture of fast food, the sub-movement known as slow food seeks to encourage the enjoyment of regional produce and traditional foods, which are often grown organically, and to enjoy these foods in the company of others. It aims to defend agricultural biodiversity. The movement claims 83,000 members in 50 countries, which are organised into 800 Convivia or local chapters. Sometimes operating under a logo of a snail, the collective philosophy is to preserve and support traditional ways of life. Today, 42 states in the United States have their own convivium. The movement, while widely celebrated for its emphasis on local, sustainable, and traditional food practices, has faced various criticisms. One significant critique is the potential elitism inherent in its approach. Slow Food's advocacy for artisanal and small-scale production often results in higher prices for its endorsed food products, which may mean the movement is accessible mainly to wealthier individuals. Founder Carlo Petrini himself has noted this issue, reflecting on his visit to a California farmers market where the clientele appeared predominantly wealthy. Furthermore, Slow Food has been criticized for prioritizing hedonism over substantive political action. The movement's focus on pleasure, taste, and consumption patterns has led to accusations of it being more concerned with gastronomic enjoyment than with addressing broader political and economic injustices. === Gaming === Slow gaming is an approach to video games that is meant to be more slow-paced and more focused on challenging the assumptions and feelings of the player than on their skills and reflexes. A "Slow Games Movement Manifesto" was written by Scottish game designer Mitch Alexander in September 2018, and a "Slow Gaming Manifesto" was independently published on Gamasutra by Polish game designer Artur Ganszyniec in June 2019. Some games that can be considered examples of "slow gaming" include: Firewatch (2016), Heaven's Vault (2019), Journey (2012), Wanderlust Travel Stories (2019), and The Longing (2020). === Gardening === Slow gardening is an approach that helps gardeners savor what they grow using all their senses through all the seasons.
=== Goods === Slow goods takes its core direction from various elements of the overall slow movement, applying it to the conception, design and manufacturing of physical objects. Its key tenets are: low production runs, the use of craftspeople within the process, on-shore manufacturing, and smaller, local supply and service partners. The rationale for this local engagement is that it facilitates the assurance of quality, revitalizes local manufacturing industries, and greatly reduces the footprint related to the shipment of goods across regions of land and/or water. Physical goods affected by the slow movement are diverse and include slow architecture and building design. The slow movement is affecting the concept and planning stages of commercial buildings, chiefly LEED certified projects. This movement seeks to break current conventions that perpetuate the disposable nature of mass production. By using higher-quality materials and craftsmanship, items attain a longer lifespan, similar to manufacturing eras in the past. === Living === Authors Beth Meredith and Eric Storm summarize slow living as follows: Slow Living means structuring your life around meaning and fulfillment. Similar to "voluntary simplicity" and "downshifting", it emphasizes a less-is-more approach, focusing on the quality of your life. ... Slow Living addresses the desire to lead a more balanced life and to pursue a more holistic sense of well-being in the fullest sense of the word. === Marketing === Slow marketing is a reaction to the perceived "always-on" nature of digital marketing. It emphasizes a customer-centric outlook, sustainability, and ethics. It builds relationships with customers instead of encouraging immediate results, such as a limited time offer. === Media === Slow media and Slow television are movements aiming at sustainable and focused media production as well as media consumption. They formed in the context of a massive acceleration of news distribution culminating in almost real-time digital media such as Twitter. Beginning in 2010, many local Slow Media initiatives formed in the USA and Europe (Germany, France, Italy), attracting considerable attention in the mass media. Others experiment with a reduction of their daily media intake and log their efforts online ("slow media diet"). === Medicine === Slow medicine fosters taking time in developing a relationship between the practitioner and the patient, and in applying medical knowledge, technology and treatment to the specific and unique character of the patient in his or her overall situation. === Money === Slow Money is a non-profit organization founded to organize investors and donors to steer new sources of capital to small food enterprises, organic farms, and local food systems. Slow Money takes its name from the Slow Food movement. Slow Money aims to develop the relationship between capital markets and place, including social and soil fertility. It supports grass-roots mobilization through network building, convening, publishing, and incubating intermediary strategies and structures of funding. === Parenting === Slow parenting encourages parents to plan less for their children, allowing them to explore the world at their own pace. It is a response to hyper-parenting and helicopter parenting: the widespread trend for parents to schedule activities and classes after school every day and every weekend, to solve problems on behalf of the children, and to buy commercial services and products.
It was described by Carl Honoré in Under Pressure: Rescuing Our Children from the Culture Of Hyper-Parenting. === Photography === The Slow photography movement prioritizes the process and experience of taking photos over mere documentation. It often involves film photography but can be applied using any camera. The movement emerged as a response to the ubiquity of digital photography and snapshot culture, emphasizing manual techniques and a deeper engagement with the physical materials of images. David Campany defined the concept in his 2003 essay "Safety in Numbness: Some remarks on the problems of ‘Late Photography.’" He used Joel Meyerowitz's post-9/11 photography, later published in Aftermath, to highlight the role of photography in public memory. Norwegian photographer Johanne Seines Svendsen, known for using long exposure times and the wetplate collodion process, exemplifies this technique. Her series "The Slow Photography" was showcased at the 67th North Norwegian Art Exhibition in 2013, featuring ambrotypes and alumitypes. === Religion === Slow church is a movement in Christian praxis which integrates slow-movement principles into the structure and character of the local church. The phrase was introduced in 2008 by Christian bloggers working independently who imagined what such a "slow church" might look like. Over the next several years, the concept continued to be discussed online and in print by various writers and ministers. In July 2012, a three-day conference titled Slow Church: Abiding Together in the Patient Work of God was held on the campus of DePaul University in Chicago on the topic of slow church and featured Christian ethicist Stanley Hauerwas and Kyle Childress, among others. An online blog called "Slow Church" written by C. Christopher Smith and John Pattison is hosted by Patheos, and Smith and Pattison have written a book by the same name, published in June 2014. Ethics, ecology, and economy are cited as areas of central concern to slow church. Smith describes slow church as a "conversation", not a movement, and has cited New Monasticism as an influence. In its emphases on non-traditional ways for churches to operate and on "conversation" over dogma and hierarchy, slow church is also related to the broader Christian "emerging church" movement. === Scholarship === Slow scholarship is a response to hasty scholarship and the demands of corporatized neoliberal academic culture, which may compromise the quality and integrity of research, education, and well-being. This movement attempts to counter the erosion of humanistic education, analyze the consequences of the culture of speed, and "explores alternatives to the fast-paced, metric-oriented neoliberal university through a slow-moving conversation on ways to slow down and claim time for slow scholarship and collective action." === Science === The slow science movement's objective is to enable scientists to take the time to think and read. The prevalent culture of science is publish or perish, where scientists are judged to be better if they publish more papers in less time, and only those who do so are able to maintain their careers. Those who practice and promote slow science suggest that "society should give scientists the time they need". === Technology === The slow technology approach aims to emphasize that technology can support reflection rather than efficiency. This approach has been discussed through various examples, for example those in interaction design or virtual environments. 
It is related to other parallel efforts such as those towards reflective design, critical design, and critical technical practice. === Thought (philosophy) === Slow thought calls for a slow philosophy to ease thinking into a more playful and porous dialogue about what it means to live. Vincenzo Di Nicola's "Slow Thought Manifesto" elucidates and illuminates Slow thought through seven proclamations, published and cited in English, Indonesian, Italian, and Portuguese, and frequently cited in French: Slow thought is marked by peripatetic Socratic walks, the face-to-face encounter of Emmanuel Levinas, and Mikhail Bakhtin's dialogic conversations Slow thought creates its own time and place Slow thought has no other object than itself Slow thought is porous Slow thought is playful Slow thought is a counter-method, rather than a method, for thinking as it relaxes, releases, and liberates thought from its constraints and the trauma of tradition Slow thought is deliberate Notable slow thinkers include Mahatma Gandhi who affirmed that, "There is more to life than simply increasing its speed", Giorgio Agamben (on the philosophy of childhood), Walter Benjamin (on the porosity of Naples), and Johan Huizinga (on play as an interlude in our daily lives). Di Nicola's Slow Thought Manifesto is featured in Julian Hanna's The Manifesto Handbook as a reaction against acceleration, "elucidating seven principles, including the practice of being 'asynchronous' or resisting the speed of modern times in favor of the 'slow logic of thought' and working toward greater focus". The Slow Thought Manifesto is being cited in philosophy, information science, and peacebuilding politics. "Take your time", the slogan of Slow Thought, cited by Di Nicola, is taken from philosopher Ludwig Wittgenstein, himself a slow thinker: "In a wonderful philosophical lesson that is structured like a joke, Wittgenstein admonished philosophers about rushing their thinking: Question: 'How does one philosopher address another?' Answer: 'Take your time.'" === Time poverty === The principal perspective of the slow movement is to experience life in a fundamentally different way. Adherents believe that the experience of being present leads to what Abraham Maslow refers to as peak experience. The International Institute of Not Doing Much is a humorous approach to the serious topic of "time poverty", incivility, and workaholism. The Institute's fictional presence promotes counter-urgency. First created in 2005, SlowDownNow.org is a continually evolving work of art and humor which reports it has over 6,000 members. === Travel === Slow travel is an evolving movement that has taken its inspiration from nineteenth-century European travel writers, such as Théophile Gautier, who reacted against the cult of speed, prompting some modern analysts to ask, "If we have slow food and slow cities, then why not slow travel?". Other literary and exploration traditions, from early Arab travelers to late nineteenth-century Yiddish writers, have also identified with slow travel, usually marking its connection with community as its most distinctive feature. Espousing modes of travel that were the norm in some less developed societies became, for some writers and travelers from western Europe such as Isabelle Eberhardt, a way of engaging more seriously with those societies. Slow travel is not only about traveling from one place to another, it is also about immersing oneself in a destination. 
It consists of staying in the same place for a while to develop a deep connection with it. Frequenting local places, spending time with locals and discovering their habits and customs can turn a regular trip into a slow travel experience. The key is to take one's time and to let oneself be carried along. Advocates of slow travel argue that all too often the potential pleasure of the journey is lost through too-eager anticipation of arrival. Slow travel, it is asserted, is a state of mind which allows travelers to engage more fully with communities along their route, often favoring visits to spots enjoyed by local residents rather than merely following guidebooks. As such, slow travel shares some common values with ecotourism. Its advocates and devotees generally look for low-impact travel styles, even to the extent of eschewing flying. Proponents see the future of slow travel in reduced car and air travel, arguing that the current rate of plane and car use is not environmentally sustainable, and believe that the combination of environmental awareness and cost-efficient traveling will move people towards slow travel. Aspects of slow travel, including some of the principles detailed in the "Manifesto for Slow Travel", are now increasingly featured in travel writing. The magazine Hidden Europe, which first published the "Manifesto for Slow Travel", has particularly showcased slow travel, featuring articles that focus on unhurried, low-impact journeys, and advocating a stronger engagement with visited communities. A new book series launched in May 2010 by Bradt Travel Guides explicitly espouses slow travel ideas with volumes that focus very much on local communities within a tightly defined area, often advocating the use of public transport along the way. Titles include Bus-pass Britain, Slow Norfolk and Suffolk, Slow Devon and Exmoor, Slow Cotswolds, Slow North Yorkshire, and Slow Sussex and South Downs National Park. == Criticism == Despite its positive intentions, the slow movement faces criticism for its potential elitism and inaccessibility. Critics argue that the movement's emphasis on artisanal and small-scale production can result in higher costs, making it difficult for individuals with lower incomes to participate. Additionally, some view the movement as overly idealistic and impractical in the context of the fast-paced realities of modern life. There are concerns that it may prioritize personal enjoyment and aesthetic values over addressing broader social and economic issues. == See also == African time Critique of work Degrowth Downshifting (lifestyle) Durable good Patience Product tracing systems, which allow people to see the source factory of a product Simple living Sleep tourism Slow architecture Slow living Universal basic income Work-life balance == References ==
Wikipedia/Slow_design
Retail design is a creative and commercial discipline that combines several different areas of expertise together in the design and construction of retail space. Retail design is primarily a specialized practice of architecture and interior design; however, it also incorporates elements of industrial design, graphic design, ergonomics, and advertising. Retail design is a very specialized discipline due to the heavy demands placed on retail space. Because the primary purpose of retail space is to stock and sell product to consumers, the spaces must be designed in a way that promotes an enjoyable and hassle-free shopping experience for the consumer. For example, research shows that male and female shoppers who were accidentally touched from behind by other shoppers left a store earlier than people who had not been touched and evaluated brands more negatively. The space must be specially-tailored to the kind of product being sold in that space; for example, a bookstore requires many large shelving units to accommodate small products that can be arranged categorically while a clothing store requires more open space to fully display product. Retail spaces, especially when they form part of a retail chain, must also be designed to draw people into the space to shop. The storefront must act as a billboard for the store, often employing large display windows that allow shoppers to see into the space and the product inside. In the case of a retail chain, the individual spaces must be unified in their design. == History == Retail design first began to grow in the middle of the 19th century, with stores such as Bon Marche and Printemps in Paris, "followed by Marshall Fields in Chicago, Selfridges in London and Macy's in New York." These early retail design stores were swiftly continued with an innovation called the chain store. The first known chain department stores were established in Belgium in 1868, when Isidore, Benjamin and Modeste Dewachter incorporated Dewachter frères (Dewachter Brothers) selling ready-to-wear clothing for men and children and specialty clothing such as riding apparel and beachwear. The firm opened with four locations and, by 1904, Maison Dewachter (House of Dewachter) had stores in 20 cities and towns in Belgium and France, with multiple stores in some cities. Isidore's eldest son, Louis Dewachter, managed the chain at its peak and also became an internationally known landscape artist, painting under the pseudonym Louis Dewis. The first retail chain store in the United States was opened in the early 20th century by Frank Winfield Woolworth, which quickly became a franchise across the US. Other chain stores began growing in places like the UK a decade or so later, with stores like Boots. After World War II, a new type of retail design building known as the shopping centre came into being. This type of building took two different paths in comparison between the US and Europe. Shopping centres began being built out of town within the United States to benefit the suburban family, while Europe began putting shopping centres in the middle of town. The first shopping centre in the Netherlands was built in the 1950s, as retail design ideas began spreading east. The next evolution of retail design was the creation of the boutique in the 1960s, which emphasized retail design run by individuals. Some of the earliest examples of boutiques are the Biba boutique created by Barbara Hulanicki and the Habitat line of stores made by Terence Conran. 
The rise of the boutique was followed, in the next two decades, with an overall increase in consumer spending across the developed world. This rise made retail design shift to compensate for increased customers and alternative focuses. Many retail design stores redesigned themselves over the period to keep up with changing consumer tastes. These changes resulted on one side with the creation of multiple "expensive, one-off designer shops" catering to specific fashion designers and retailers. The rise of the internet and internet retailing in the latter part of the 20th century and into the 21st century saw another change in retail design to compensate. Many different sectors not related to the internet reached out to retail design and its practices to lure online shoppers back to physical shops, where retail design can be properly utilized. == Usage == === Role === A retail designer must create a thematic experience for the consumer, by using spatial cues to entertain as well as entice the consumer to purchase goods and interact with the space. The success of their designs are not measured by design critics but rather the records of the store which compare amount of foot traffic against the overall productivity. Retail designers have an acute awareness that the store and their designs are the background to the merchandise and are only there to represent and create the best possible environment in which to reflect the merchandise to the target consumer group. === Design elements === Since the evolution of retail design and its impact on productivity have become clear, a series of standardizations in the techniques and design qualities has been determined. These standardizations range from alterations to the perspective of the structure of the space, entrances, circulation systems, atmospheric qualities (light and sound) and materiality. By exploring these standardizations in retail design the consumer will be given a thematic experience that entices them to purchase the merchandise. It is also important to acknowledge that a retail space must combine both permanent and non permanent features that allow it to change as the needs of the consumer and merchandise change (e.g. per season). The structure of retail space creates the constraints of the overall design; often the spaces already exist, and have had many prior uses. It is at this stage that logistics must be determined; structural features like columns, stairways, ceiling height, windows and emergency exits all must be factored into the final design. In retail one hundred percent of the space must be utilised and have a purpose. The floor plan creates the circulation which then directly controls the direction of the traffic flow based on the studied psychology of consumer movement pattern within a retail space. Circulation is important because it ensures that the consumer moves through the store from front to back, guiding them to important displays and, in the end, to the cashier. There are six basic store layouts and circulation plans that all provide a different experience: Straight plan: this plan divides transitional areas from one part of the store to the other by using walls to display merchandise. It also leads the consumer to the back of the store. This design can be used for a variety of stores ranging from pharmacies to apparel. Pathway plan: is most suitable for large stores that are single level. In this plan there is a path unobstructed by shop fixtures that smoothly guides the consumer through to the back of the store. 
This is well suited for apparel department stores, as the clothes will be easily accessible. Diagonal plan: uses perimeter design which cause angular traffic flow. The cashier is in a central location and easily accessible. This plan is most suited for self-service retail. Curved plan: aims to create an intimate environment that is inviting. In this plan there is an emphasis on the structure of the space including the walls, corners and ceiling. This is achieved by making the structure curved and is enhanced by circular floor fixtures. Although this is a more expensive layout, it is more suited to smaller spaces like salons and boutiques. Varied plan: in this plan attention is drawn to special focus areas, as well as having storage areas that line the wall. This is best suited for footwear and jewellery retail stores. Geometric plan: uses the racks and the retail floor fixtures to create a geometric floor plan and circulation movement. By lowering parts of the ceiling certain areas can create defined retail spaces. This is well suited for apparel stores. Once the overall structure and circulation of the space has been determined, the atmosphere and thematics of the space must be created through lighting, sound, materials and visual branding. These design elements will cohesively have the greatest impact on the consumer and thus the level of productivity that could be achieved. Lighting can have a dramatic impact on the space. It needs to be functional, but also complement the merchandise and emphasize key points throughout the store. The lighting should be layered and of a variety of intensities and fixtures. Firstly, examine the natural light and what impact it has in the space. Natural light adds interest and clarity to the space; consumers also prefer to examine the quality of merchandise in natural light. If no natural light exists, a sky light can be used to introduce it to the retail space. The lighting of the ceiling and roof is the next thing to consider. This lighting should wash the structural features while creating vectors that direct the consumer to key merchandise selling areas. The next layer should emphasize the selling areas. These lights should be direct but not too bright and harsh. Poor lighting can cause eye strain and an uncomfortable experience for the consumer. To minimize the possibility of eye strain, the ratio of luminance should decrease between merchandise selling areas. The next layer will complement and bring focus onto the merchandise; this lighting should be flattering for the merchandise and consumer. The final layer is to install functional lighting such as clear exit signs. Ambiance can then be developed within the atmosphere through sound and audio. The music played within the store should reflect what the store's target market would be drawn to. This would also be developed through the merchandise that is being marketed. In a lingerie store the music should be soft, feminine and romanticized, whereas in a technology department the music would be more upbeat and more masculine. Materiality is another key selling tool. The choices made must not only be aesthetically pleasing and persuasive but also functional with a minimal need for maintenance. Retail spaces are high traffic areas and are thus exposed to a lot of wear. This means that possible finishes of the materials should be durable. 
The warmth of a material will make the space more inviting; a floor that is firm and somewhat buoyant will be more comfortable for that consumer to walk on and thus will allow them to take longer when exploring the store. By switching materials throughout the store, zones/areas can be defined. For example, by making the path one material and contrasting it against another for the selling areas, consumers are guided through the store. Colour is also important to consider; it must not over power or clash against the merchandise but rather create a complementary background for the merchandise. As merchandise will change seasonally the interior colours should not be trend based but rather have timeless appeal like neutral based colours. Visual branding of the store will ensure a memorable experience for the consumer to take with them once they leave the store, ensuring that they will want to return. The key factor is consistency exterior branding and signage should continue into the interior; they should attract, stimulate and dramatise the store. To ensure consistency the font should be consistent with the font size altering. The interior branding should allow the consumer to easily direct themselves through the store. Proper placement of sales signs will draw consumers in and show exactly where the cashier is located. The branding should reflect what the merchandise is and what the target market would be drawn to. === Perspective === The final element of a well-executed retail space is the staging of the consumer's perspective. It is the role of retail design to have total control of the view that the consumer will have of the retail space. From the exterior of a retail store the consumer should have a clear unobstructed view into the interior. == See also == Architecture Brand Branded environments Brand implementation Customer engagement Display case Display window Ergonomics Interior design Marketing Merchandising Planogram Retail chain Retailing Visual merchandising == References == == Further reading == Israel, L.J., 1994. "Store Planning/ Design". United States of America: John Wiley & sons, INC. Lopez, M. J., 2003. "Retail Store Planning And Design Manual". 2nd ed. Cincinnati, Ohio: ST Publications. Barr, V. and Broudy, C.E., 1990. "Designing to Sell". 2nd ed. New York: McGraw-Hill, INC. Curtis, E. and Watson, H., 2007. "Fashion Retail". 2nd ed. West Sussex, England: John Wiley and Sons Ltd. Rodney Fitch & Lance Knobel (1990). Fitch on retail design. Phaidon. ISBN 978-0-7148-2562-5. Catherine McDermott (2007). "Retail Design". Design: the key concepts. Routledge key guides. Routledge. pp. 195–197. ISBN 978-0-415-32016-0. Stephen Doyle (2004). "Retail store design". In Margaret Bruce; Christopher Moore; Grete Birtwistle (eds.). International retail marketing: a case study approach. Butterworth-Heinemann. ISBN 978-0-7506-5748-8. Sean Nixon (1996). "Menswear and retailing practices: the case of retail design". Hard looks: masculinities, spectatorship and contemporary consumption. Palgrave Macmillan. ISBN 978-0-312-16333-4.
Wikipedia/Retail_design
Design for manufacturability (also sometimes known as design for manufacturing or DFM) is the general engineering practice of designing products in such a way that they are easy to manufacture. The concept exists in almost all engineering disciplines, but the implementation differs widely depending on the manufacturing technology. DFM describes the process of designing or engineering a product so as to facilitate the manufacturing process and reduce manufacturing costs. DFM allows potential problems to be fixed in the design phase, which is the least expensive place to address them. Other factors may affect manufacturability, such as the type of raw material, the form of the raw material, dimensional tolerances, and secondary processing such as finishing. Established DFM guidelines exist for the various types of manufacturing processes; these guidelines help to precisely define the tolerances, rules and common manufacturing checks related to DFM. While DFM is applicable to the design process, a similar concept called DFSS (design for Six Sigma) is also practiced in many organizations. == For printed circuit boards (PCB) == In the PCB design process, DFM leads to a set of design guidelines that attempt to ensure manufacturability. By doing so, probable production problems may be addressed during the design stage. Ideally, DFM guidelines take into account the processes and capabilities of the manufacturing industry. Therefore, DFM is constantly evolving. As manufacturing companies evolve and automate more and more stages of the processes, these processes tend to become cheaper. DFM is usually used to reduce these costs. For example, if a process can be done automatically by machines (e.g. SMT component placement and soldering), such a process is likely to be cheaper than doing it by hand. == For integrated circuits (IC) == Semiconductor design for manufacturing is a comprehensive set of principles and techniques used in integrated circuit (IC) design to ensure that those designs transition smoothly into high-volume manufacturing with optimal yield and reliability. DFM focuses on anticipating potential fabrication issues and proactively modifying chip layouts and circuits to mitigate their impact. Background As semiconductor technology scales to smaller nodes, transistors and interconnects become incredibly dense and sensitive to subtle variations in the manufacturing process. These variations can lead to defects that cause chips to malfunction or degrade their performance. DFM aims to minimize the impact of these variations, improving yield and making chip manufacturing more cost-effective. Key Concepts in DFM Design Rules: Foundries provide detailed design rules that specify minimum dimensions, spacing, and other geometrical constraints that must be adhered to for successful fabrication. DFM-aware design tools automatically check designs against these rules, flagging potential violations for correction. Process Variability: DFM techniques account for inherent variability in manufacturing processes such as lithography, etching, and deposition. By simulating how variations might affect specific design structures, designers can modify layouts to minimize sensitivity to these variations. Yield Optimization: DFM aims to maximize yield, the percentage of chips that function correctly out of a manufactured wafer.
This involves identifying critical areas of the design, adding redundancy, and implementing layout strategies that improve the likelihood of successful fabrication. Reliability: DFM encompasses techniques to ensure chips are reliable throughout their expected lifespan. This involves analyzing how design choices impact electromigration, hot carrier injection, and other potential failure mechanisms, and designing accordingly. DFM Techniques Some common DFM techniques used in semiconductor design include: Redundancy: Adding extra transistors or circuit elements to critical paths, so if one element fails, the chip can still function. Fill Patterns: Adding non-functional geometrical shapes to empty areas of a layout to improve pattern density and minimize local manufacturing variations. Optical Proximity Correction (OPC): Modifying mask patterns to compensate for distortions that occur during the lithography process. Restricted Design Rules (RDR): A subset of design rules that are more conservative than standard rules, offering higher manufacturability. Yield Simulations: Using statistical models to predict how design and process variations impact yield, allowing for informed design modification. DFM and Design Flow DFM is integrated throughout the semiconductor design flow: Design: Designers use DFM-aware tools that automatically check for rule violations and potential manufacturability issues. Verification: Verification processes include extensive DFM checks to ensure the design meets all manufacturing requirements. Physical Implementation: During this stage, techniques like fill insertion and OPC are applied to the design for manufacturing optimization. Signoff: A thorough design rule check (DRC) and layout vs. schematic (LVS) verification is performed to ensure the design is ready for fabrication. Importance of DFM DFM is essential for the successful and cost-effective production of advanced semiconductor devices. By proactively addressing manufacturability issues during the design stage, DFM leads to: Higher yields Faster time-to-market Reduced risk of design re-spins Lower manufacturing costs == For CNC machining == === Objective === The objective is to design for lower cost. The cost is driven by time, so the design must minimize the time required to not just machine (remove the material), but also the set-up time of the CNC machine, NC programming, fixturing and many other activities that are dependent on the complexity and size of the part. === Set-Up time of operations (flip of the part) === Unless a 4th and/or 5th axis is used, a CNC can only approach the part from a single direction. One side must be machined at a time (called an operation or op). Then the part must be flipped from side to side to machine all of the features. The geometry of the features dictates whether the part must be flipped over or not. The more ops (flip of the part), the more expensive the part because it incurs substantial set-up and load/unload time. Each operation (flip of the part) has set-up time, machine time, time to load/unload tools, time to load/unload parts, and time to create the NC program for each operation. If a part has only 1 operation, then parts only have to be loaded/unloaded once. If it has 5 operations, then load/unload time is significant. The low hanging fruit is minimizing the number of operations (flip of the part) to create significant savings. For example, it may take only 2 minutes to machine the face of a small part, but it will take an hour to set the machine up to do it. 
Or, if there are 5 operations at 1.5 hours each, but only 30 minutes total machine time, then 7.5 hours is charged for just 30 minutes of machining. Lastly, the volume (number of parts to machine) plays a critical role in amortizing the set-up time, programming time and other activities into the cost of the part. In the example above, the part in quantities of 10 could cost 7–10 times the cost in quantities of 100. Typically, the law of diminishing returns presents itself at volumes of 100–300 because set-up times, custom tooling and fixturing can be amortized to the point of being negligible (a rough numerical sketch of this amortization appears at the end of this section). === Material type === The most easily machined types of metals include aluminum, brass, and softer metals. As materials get harder, denser and stronger, such as steel, stainless steel, titanium, and exotic alloys, they become much harder to machine and take much longer, thus being less manufacturable. Most types of plastic are easy to machine, although additions of fiberglass or carbon fiber can reduce the machinability. Plastics that are particularly soft and gummy may have machinability problems of their own. === Material form === Metals come in all forms. In the case of aluminum as an example, bar stock and plate are the two most common forms from which machined parts are made. The size and shape of the component may determine which form of material must be used. It is common for engineering drawings to specify one form over the other. Bar stock is generally close to half the cost of plate on a per-pound basis. So although the material form isn't directly related to the geometry of the component, cost can be removed at the design stage by specifying the least expensive form of the material. === Tolerances === A significant contributing factor to the cost of a machined component is the geometric tolerance to which the features must be made. The tighter the tolerance required, the more expensive the component will be to machine. When designing, specify the loosest tolerance that will serve the function of the component. Tolerances must be specified on a feature-by-feature basis. There are creative ways to engineer components with looser tolerances that still perform as well as ones with tighter tolerances. === Design and shape === As machining is a subtractive process, the time to remove the material is a major factor in determining the machining cost. The volume and shape of the material to be removed, as well as how fast the tools can be fed, will determine the machining time. When using milling cutters, the strength and stiffness of the tool, which are determined in part by its length-to-diameter ratio, will play the largest role in determining that feed speed. The shorter the tool is relative to its diameter, the faster it can be fed through the material. A ratio of 3:1 (L:D) or under is optimum. If that ratio cannot be achieved, alternative tooling or machining solutions can be used. For holes, the length-to-diameter ratio of the tools is less critical, but should still be kept under 10:1. There are many other types of features which are more or less expensive to machine. Generally, chamfers cost less to machine than radii on outer horizontal edges. 3D interpolation is used to create radii on edges that are not on the same plane, which can incur roughly ten times the cost. Undercuts are more expensive to machine. Features that require smaller tools, regardless of L:D ratio, are more expensive.
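A rough numerical sketch makes the amortization above concrete. The following Python fragment is purely illustrative: the shop rate, set-up times, programming time, and quantities are hypothetical values chosen to echo the example in this section, not figures from any machining handbook.

def estimate_cost_per_part(machine_time_hr, num_ops, setup_time_per_op_hr,
                           programming_time_hr, quantity, shop_rate_per_hr=75.0):
    """Per-part cost: machining time for each part plus one-time costs amortized over the batch."""
    # Recurring cost paid for every part machined.
    recurring = machine_time_hr * shop_rate_per_hr
    # One-time costs: one set-up per operation (each flip of the part) plus NC programming.
    one_time = (num_ops * setup_time_per_op_hr + programming_time_hr) * shop_rate_per_hr
    return recurring + one_time / quantity

# Five operations at 1.5 h of set-up each, 0.5 h of actual machining, 2 h of NC programming.
for quantity in (10, 100, 300):
    cost = estimate_cost_per_part(machine_time_hr=0.5, num_ops=5,
                                  setup_time_per_op_hr=1.5,
                                  programming_time_hr=2.0, quantity=quantity)
    print(f"{quantity} parts: about ${cost:.2f} per part")

With these assumed numbers the per-part cost falls from roughly $109 at a quantity of 10 to about $45 at 100 and about $40 at 300, illustrating both the steep early savings and the diminishing returns at higher volumes described above.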
== Design for inspection == The concept of design for inspection (DFI) should complement and work in collaboration with design for manufacturability (DFM) and design for assembly (DFA) to reduce product manufacturing cost and increase manufacturing practicality. There are instances when this method could cause calendar delays since it consumes many hours of additional work such as the case of the need to prepare for design review presentations and documents. To address this, it is proposed that instead of periodic inspections, organizations could adopt the framework of empowerment, particularly at the stage of product development, wherein the senior management empowers the project leader to evaluate manufacturing processes and outcomes against expectations on product performance, cost, quality and development time. Experts, however, cite the necessity for the DFI because it is crucial in performance and quality control, determining key factors such as product reliability, safety, and life cycles. For an aerospace components company, where inspection is mandatory, there is the requirement for the suitability of the manufacturing process for inspection. Here, a mechanism is adopted such as an inspectability index, which evaluates design proposals. Another example of DFI is the concept of cumulative count of conforming chart (CCC chart), which is applied in inspection and maintenance planning for systems where different types of inspection and maintenance are available. == Design for additive manufacturing == Additive manufacturing broadens the ability of a designer to optimize the design of a product or part (to save materials for example). Designs tailored for additive manufacturing are sometimes very different from designs tailored for machining or forming manufacturing operations. In addition, due to some size constraints of additive manufacturing machines, sometimes the related bigger designs are split into smaller sections with self-assembly features or fasteners locators. A common characteristic of additive manufacturing methods, such as fused deposition modeling, is the need for temporary support structures for overhanging part features. Post-processing removal of these temporary support structures increases the overall cost of fabrication. Parts can be designed for additive manufacturing by eliminating or reducing the need for temporary support structures. This can be done by limiting the angle of overhanging structures to less than the limit of the given additive manufacturing machine, material, and process (for example, less than 70 degrees from vertical). == See also == Design for X Electronic design automation Reliability engineering Six Sigma Statistical process control DFMA Rule-based DFM analysis for direct metal laser sintering Rule based analysis of extrusion process Rule based DFM analysis for metal spinning Rule based DFM analysis for deep drawing Rule based DFM analysis for forging DFM analysis for stereolithography Rule-based DFM analysis for electric discharge machining == References == == Sources == Mentor Graphics - DFM: What is it and what will it do? (must fill request form). Mentor Graphics - DFM: Magic Bullet or Marketing Hype (must fill request form). Electronic Design Automation For Integrated Circuits Handbook, by Lavagno, Martin, and Scheffer, ISBN 0-8493-3096-3 A survey of the field of EDA. 
The above summary was derived, with permission, from Volume II, Chapter 19, Design for Manufacturability in the Nanometer Era, by Nicola Dragone, Carlo Guardiani, and Andrzej J. Strojwas. Design for Manufacturability And Statistical Design: A Constructive Approach, by Michael Orshansky, Sani Nassif, Duane Boning, ISBN 0-387-30928-4 Estimating Space ASICs Using SEER-IC/H, by Robert Cisneros, Tecolote Research, Inc. (2008) Complete Presentation Archived 2012-02-20 at the Wayback Machine == External links == Why DFM/DFMA is Business Critical Design for manufacturing checklist – DFM, DFA (design for assembly checklist from Quick-teck PCB manufacturer) Arc Design for Manufacturability Tips Design for Manufacturing and Assembly Turning designs into reality: The Manufacturability paradigm
Wikipedia/Design_for_manufacturability
A design rationale is an explicit documentation of the reasons behind decisions made when designing a system or artifact. As initially developed by W.R. Kunz and Horst Rittel, design rationale seeks to provide argumentation-based structure to the political, collaborative process of addressing wicked problems. == Overview == A design rationale is the explicit listing of decisions made during a design process, and the reasons why those decisions were made. Its primary goal is to support designers by providing a means to record and communicate the argumentation and reasoning behind the design process. It should therefore include: the reasons behind a design decision, the justification for it, the other alternatives considered, the trade-offs evaluated, and the argumentation that led to the decision. Several science areas are involved in the study of design rationales, such as computer science, cognitive science, artificial intelligence, and knowledge management. For supporting design rationale, various frameworks have been proposed, such as QOC, DRCS, IBIS, and DRL. == History == While argumentation formats can be traced back to Stephen Toulmin's work in the 1950s (data, claims, warrants, backings and rebuttals), the origin of design rationale can be traced back to W.R. Kunz and Horst Rittel's development of the Issue-Based Information System (IBIS) notation in 1970. Several variants on IBIS have since been proposed. The first was Procedural Hierarchy of Issues (PHI), first described in Ray McCall's PhD dissertation, although not named at the time. IBIS was also modified, in this case to support software engineering, by Potts & Bruns. The Potts & Bruns approach was then extended by the Decision Representation Language (DRL), which itself was extended by RATSpeak. Questions, Options and Criteria (QOC), also known as Design Space Analysis, is an alternative representation for argumentation-based rationale, as are Win-Win and the Decision Recommendation and Intent Model (DRIM). The first Rationale Management System (RMS) was PROTOCOL, which supported PHI and was followed by other PHI-based systems, MIKROPOLIS and PHIDIAS. The first system providing IBIS support was Hans Dehlinger's STIEC. Rittel developed a small system in 1983 (also not published) and the better known gIBIS (graphical IBIS) was developed in 1987. Not all successful DR approaches involve structured argumentation. For example, Carroll and Rosson's Scenario-Claims Analysis approach captures rationale in scenarios that describe how the system is used and how well the system features support the user goals. Carroll and Rosson's approach to design rationale is intended to help designers of computer software and hardware identify underlying design tradeoffs and make inferences about the impact of potential design interventions. == Key concepts in design rationale == There are a number of ways to characterize DR approaches. Some key distinguishing features are how the rationale is captured, how it is represented, and how it can be used. === Rationale capture === Rationale capture is the process of acquiring rationale information into a rationale management system. Several capture methods are in use. A method called "Reconstruction" captures rationales in a raw form, such as video, and then reconstructs them into a more structured form. The advantage of the Reconstruction method is that rationales can be carefully captured and the capturing process won't disrupt the designer.
However, this method may result in high cost and in biases introduced by the person producing the rationales. The "Record-and-replay" method simply captures rationales as they unfold. Rationales are synchronously captured in a video conference or asynchronously captured via bulletin board or email-based discussion. If the system has an informal or semi-formal representation, this method is helpful. The "Methodological byproduct" method captures rationales during the design process by following a schema; however, it is hard to design such a schema. The advantage of this method is its low cost. With a rich knowledge base (KB) created in advance, the "Apprentice" method captures rationales by asking questions whenever it is confused by, or disagrees with, the designer's actions. This method benefits not only the user but also the system. In the "Automatic generation" method, design rationales are automatically generated from an execution history at low cost. It is able to maintain consistent and up-to-date rationales, but the cost of compiling the execution history is high due to the complexity and difficulty of some machine-learning problems. The "Historian" method lets a person or computer program watch all of the designer's actions without making suggestions; rationales are captured during the design process. === Rationale representation === The choice of design rationale representation is very important to make sure that the rationales captured are what is desired and can be used efficiently. According to the degree of formality, the approaches used to represent design rationale can be divided into three main categories: informal, semiformal, or formal. In an informal representation, rationales can be recorded and captured using traditionally accepted methods and media, such as word processors, audio and video recordings, or even handwriting. However, these descriptions make automatic interpretation and other computer-based support difficult. In a formal representation, the rationale must be collected under a strict format so that it can be interpreted and understood by computers. However, due to the strict format imposed by formal representations, the contents are hard for people to understand, and the process of capturing design rationale requires more effort and therefore becomes more intrusive. Semiformal representations try to combine the advantages of informal and formal representations. On one hand, the information captured should be able to be processed by computers so that more computer-based support can be provided. On the other hand, the procedures and methods used to capture design rationale information should not be overly intrusive. In a system with a semiformal representation, the expected information is suggested, and users can capture rationale by following instructions to either fill out attributes according to templates or simply type natural-language descriptions. === Argumentation-based models === The Toulmin model One commonly accepted way for semiformal design rationale representation is structuring design rationale as argumentation. The earliest argumentation-based model used by many design rationale systems is the Toulmin model. The Toulmin model defines the rules of design rationale argumentation with six steps: A claim is made; Supporting data are provided; A warrant provides evidence for the existing relations; The warrant can be supported by a backing; Model qualifiers (some, many, most, etc.)
are provided; Possible rebuttals are also considered. One advantage of Toulmin model is that it uses words and concepts which can be easily understood by most people. Issue-Based Information System (IBIS) Another important approach to argumentation of design rationale is Rittel and Kunz's IBIS (Issue-Based Information System), which is actually not a software system but an argumentative notation. It has been implemented in software form by gIBIS (graphical IBIS), itIBIS (test-based IBIS), Compendium, and other software. IBIS uses some rationale elements (denoted as nodes) such as issues, positions, arguments, resolutions and several relationships such as more general than, logical successor to, temporal successor to, replaces and similar to, to link the issue discussions. Procedural Hierarchy of Issues (PHI) PHI (Procedural Hierarchy of Issues) extended IBIS to noncontroversial issues and redefined the relationships. PHI adds the subissue relationship which means one issue's resolution depends on the resolution of another issue. Questions, Options, and Criteria (QOC) QOC (Questions, Options, and Criteria) is used for design space analysis. Similar to IBIS, QOC identifies the key design problems as questions and possible answers to questions as options. In addition, QOC uses criteria to explicitly describe the methods to evaluate the options, such as the requirements to be satisfied or the properties desired. The options are linked with criteria positively or negatively and these links are defined as assessments. Decision Representation Language (DRL) DRL (Decision Representation Language) extends the Potts and Bruns model of DR and defines the primary elements as decision problems, alternatives, goals, claims and groups. Lee (1991) has argued that DRL is more expressive than other languages. DRL focuses more on the representation of decision making and its rationale instead of on design rationale. RATSpeak Based on DRL, RATSpeak is developed and used as the representation language in SEURAT (Software Engineering Using RATionale). RATSpeak takes into account requirements (functional and non-functional) as part of the arguments for alternatives to the decision problems. SEURAT also includes an Argument Ontology which is a hierarchy of argument types and includes the types of claims used in the system. WinWin Spiral Model The WinWin Spiral Model, which is used in the WinWin approach, adds the WinWin negotiation activities, including identifying key stakeholders of the systems, and identifying the win conditions of each stakeholder and negotiation, into the front of each cycle of the spiral software development model in order to achieve a mutually satisfactory (winwin) agreement for all stakeholders of the project. In the WinWin Spiral Model, the goals of each stakeholder are defined as Win conditions. Once there is a conflict between win conditions, it is captured as an Issue. Then the stakeholders invent Options and explore trade-offs to resolve the issue. When the issue is solved, an Agreement which satisfies the win conditions of stakeholders and captures the agreed option is achieved. Design rationale behind the decisions is captured during the process of the WinWin model and will be used by stakeholders and the designers to improve their later decision making. The WinWin Spiral model reduces the overheads of the capture of design rationale by providing stakeholders a well-defined process to negotiate. 
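Viewed as data, the argumentation-based notations described above are small graphs of typed nodes (issues or questions, positions or options, and supporting or objecting arguments) connected by typed links. The short Python sketch below is only a generic illustration of that structure, loosely following the IBIS vocabulary; the class and field names are invented for this example and are not taken from gIBIS, Compendium, SEURAT, or any other tool mentioned in this article.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Argument:
    text: str
    supports: bool  # True if the argument supports its position, False if it objects

@dataclass
class Position:
    text: str
    arguments: List[Argument] = field(default_factory=list)

@dataclass
class Issue:
    question: str
    positions: List[Position] = field(default_factory=list)
    resolution: Optional[str] = None  # set once one position is selected

# A miniature rationale record for a single design question.
issue = Issue(question="Should the housing be joined with snap-fits or screws?")
snap_fit = Position(text="Use snap-fits molded into the housing.")
snap_fit.arguments.append(Argument(text="Eliminates separate fasteners and assembly steps.", supports=True))
snap_fit.arguments.append(Argument(text="Makes disassembly for repair harder.", supports=False))
issue.positions.append(snap_fit)
issue.resolution = snap_fit.text

Richer notations such as QOC and DRL add further node types (criteria, goals, claims) and link types (assessments, sub-issues) on top of this same basic pattern of questions, alternatives, and arguments.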
Within the WinWin collaboration framework specifically, one published approach defines an ontology of decision rationale and uses that ontology to address the problem of supporting decision maintenance. Design Recommendation and Intent Model (DRIM) DRIM (Design Recommendation and Intent Model) is used in SHARED-DRIM. The main structure of DRIM is a proposal, which consists of the intents of each designer, the recommendations that satisfy the intents, and the justifications of the recommendations. Negotiations are also needed when conflicts exist between the intents of different designers. The recommendation accepted becomes a design decision, and the rationale of the unaccepted but proposed recommendations is also recorded during this process, which can be useful during iterative design and/or system maintenance. == Applications == Design rationale has the potential to be used in many different ways. One set of uses, defined by Burge and Brown (1998), is: Design verification — The design rationale can be used to verify whether the design decisions and the product itself reflect what the designers and the users actually wanted. Design evaluation — The design rationale is used to evaluate the various design alternatives discussed during the design process. Design maintenance — The design rationale helps to determine the changes that are necessary to modify the design. Design reuse — The design rationale is used to determine how the existing design could be reused for a new requirement with or without any changes in it. If there is a need to modify the design, then the DR also suggests what needs to be modified in the design. Design teaching — The design rationale could be used as a resource to teach people who are unfamiliar with the design and the system. Design communication — The design rationale facilitates better communication among people who are involved in the design process and thus helps to produce a better design. Design assistance — The design rationale could be used to verify the design decisions made during the design process. Design documentation — The design rationale is used to document the entire design process, which involves the meeting-room deliberations, alternatives discussed, reasons behind the design decisions and the product overview. DR is used by research communities in software engineering, mechanical design, artificial intelligence, civil engineering, and human-computer interaction research. In software engineering, it can be used to support the designers' ideas during requirements analysis, to capture and document design meetings, and to predict possible issues arising from a new design approach. In software architecture and outsourcing solution design, it can justify the outcome of architectural decisions and serve as a design guide. In civil engineering, it helps to coordinate the variety of work that designers do at the same time in different areas of a construction project. It also helps the designers to understand and respect each other's ideas and resolve any possible issues. The DR can also be used by project managers to keep their project plan and project status up to date. Also, project team members who missed a design meeting can refer back to the DR to learn what was discussed on a particular topic. The unresolved issues captured in the DR could be used to organize further meetings on those topics. Design rationale helps designers avoid the same mistakes made in previous designs. This can also help to avoid duplication of work.
In some cases, DR could save time and money when a software system is upgraded from its previous versions. There are several books and articles that provide excellent surveys of rationale approaches applied to HCI, engineering design and software engineering. == See also == Goal structuring notation IDEF6 Method engineering Problem structuring methods Theory of justification == References == == Further reading == Books Burge, JE; Carroll, JM; McCall R; Mistrík I (2008). Rationale-Based Software Engineering. Heidelberg: Springer-Verlag. Dutoit, AH; McCall R; Mistrík I; Paech B (2006). Rationale Management in Software Engineering. Heidelberg: Springer-Verlag. Conklin, J (2005). Dialogue Mapping. Weinheim: Wiley-VCH Verlag. Kirschner, PA; Buckingham-Shum SJ; Carr CS (2003). Visualizing Argumentation: Software Tools for Collaborative and Educational Sense-Making. London: Springer-Verlag. Moran, T; Carroll J (1996). Design Rationale Concepts, Techniques, and Use. NJ: Lawrence Erlbaum Associates. Special Issues Artificial Intelligence for Engineering Design, Analysis and Manufacturing (AIEDAM), Special Issue: Fall 2008, Vol.22 No.4, Design Rationale, http://web.cs.wpi.edu/~aiedam/SpecialIssues/Burge-Bracewell.html Artificial Intelligence for Engineering Design, Analysis and Manufacturing (AIEDAM), Special Issue on Representing and Using Design Rationale, 1997, Vol.11 No.2, Cambridge University Press Workshops Second Workshop on SHAring and Reusing architectural Knowledge - Architecture, rationale, and Design Intent (SHARK/ADI 2007), held as part of the 29th Int. Conf. on Software Engineering (ICSE 2007) Workshop on Design Rationale: Problems and Progress, Workshop Chairs: Janet Burge and Rob Bracewell, held 9 July 2006 in conjunction with Design, Computing, and Cognition '06, Eindhoven, Netherlands == External links == Bcisive.austhink.com: A commercial software package designed for design rationale and decision rationale more broadly. Graphical interface, sharing capabilities. Compendium: A hypermedia tool that provides visual knowledge management capabilities based around IBIS. Free Java application, binary and source, with an active user community who meet annually. designVUE: A tool for visual knowledge capture based on IBIS and other methods. Free Java application. SEURAT: An Eclipse plug-in that integrates rationale capture and use with a software development environment. SEURAT is available as an open-source project on GitHub.
Wikipedia/Design_rationale
Design for assembly (DFA) is a process by which products are designed with ease of assembly in mind. If a product contains fewer parts it will take less time to assemble, thereby reducing assembly costs. In addition, if the parts are provided with features which make it easier to grasp, move, orient and insert them, this will also reduce assembly time and assembly costs. The reduction of the number of parts in an assembly has the added benefit of generally reducing the total cost of parts in the assembly. This is usually where the major cost benefits of the application of design for assembly occur. == Approaches == Design for assembly can take different forms. In the 1960s and 1970s various rules and recommendations were proposed in order to help designers consider assembly problems during the design process. Many of these rules and recommendations were presented together with practical examples showing how assembly difficulty could be improved. However, it was not until the 1970s that numerical evaluation methods were developed to allow design for assembly studies to be carried out on existing and proposed designs. The first evaluation method was developed at Hitachi and was called the Assembly Evaluation Method (AEM). This method is based on the principle of "one motion for one part." For more complicated motions, a point-loss standard is used and the ease of assembly of the whole product is evaluated by subtracting points lost. The method was originally developed in order to rate assemblies for ease of automatic assembly. Starting in 1977, Geoff Boothroyd, supported by an NSF grant at the University of Massachusetts Amherst, developed the Design for Assembly method (DFA), which could be used to estimate the time for manual assembly of a product and the cost of assembling the product on an automatic assembly machine. Recognizing that the most important factor in reducing assembly costs was the minimization of the number of separate parts in a product, he introduced three simple criteria which could be used to determine theoretically whether any of the parts in the product could be eliminated or combined with other parts. These criteria, together with tables relating assembly time to various design factors influencing part grasping, orientation and insertion, could be used to estimate total assembly time and to rate the quality of a product design from an assembly viewpoint. For automatic assembly, tables of factors could be used to estimate the cost of automatic feeding and orienting and automatic insertion of the parts on an assembly machine. In the 1980s and 1990s, variations of the AEM and DFA methods have been proposed, namely: the GE Hitachi method which is based on the AEM and DFA; the Lucas method, the Westinghouse method and several others which were based on the original DFA method. All methods are now referred to as design for assembly methods. == Implementation == Most products are assembled manually and the original DFA method for manual assembly is the most widely used method and has had the greatest industrial impact throughout the world. The DFA method, like the AEM method, was originally made available in the form of a handbook where the user would enter data on worksheets to obtain a rating for the ease of assembly of a product. Starting in 1981, Geoffrey Boothroyd and Peter Dewhurst developed a computerized version of the DFA method which allowed its implementation in a broad range of companies. 
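The DFA estimate is built up from per-part handling and insertion times and from the theoretical minimum part count identified by the elimination criteria. A commonly quoted form of the DFA design-efficiency index divides an ideal time of about three seconds per theoretically necessary part by the estimated total assembly time. The Python sketch below illustrates that bookkeeping with hypothetical part data; the specific times and parts are invented for this example and are not taken from the Boothroyd-Dewhurst tables.

def dfa_estimate(parts, ideal_time_per_part=3.0):
    """parts: list of (handling_time_s, insertion_time_s, theoretically_necessary) tuples."""
    total_time = sum(handling + insertion for handling, insertion, _ in parts)
    n_min = sum(1 for _, _, necessary in parts if necessary)
    # Design efficiency: ideal time for the minimum part set divided by the estimated time as designed.
    efficiency = (n_min * ideal_time_per_part) / total_time
    return total_time, n_min, efficiency

# Hypothetical six-part assembly in which only three parts pass the minimum-part criteria.
parts = [
    (1.5, 2.5, True),   # base part
    (2.0, 6.0, False),  # separate screw that could be replaced by an integral fastening feature
    (1.8, 3.5, True),   # part that moves relative to the base
    (2.0, 6.0, False),  # second screw
    (1.5, 4.0, False),  # cover that could be combined with the base
    (1.9, 3.0, True),   # part required to be of a different material
]
total, n_min, efficiency = dfa_estimate(parts)
print(f"Estimated assembly time: {total:.1f} s, theoretical minimum parts: {n_min}, DFA efficiency: {efficiency:.0%}")

A low efficiency figure of this kind signals that eliminating or combining parts, the first target identified above, is likely to yield the largest savings.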
For this work they were presented with many awards including the National Medal of Technology. There are many published examples of significant savings obtained through the application of DFA. For example, in 1981, Sidney Liebson, manager of manufacturing engineering for Xerox, estimated that his company would save hundreds of millions of dollars through the application of DFA. In 1988, Ford Motor Company credited the software with overall savings approaching $1 billion. In many companies DFA is a corporate requirement and DFA software is continually being adopted by companies attempting to obtain greater control over their manufacturing costs. There are many key principles in design for assembly. == Notable examples == Two notable examples of good design for assembly are the Sony Walkman and the Swatch watch. Both were designed for fully automated assembly. The Walkman line was designed for "vertical assembly", in which parts are inserted in straight-down moves only. The Sony SMART assembly system, used to assemble Walkman-type products, is a robotic system for assembling small devices designed for vertical assembly. The IBM Proprinter used design for automated assembly (DFAA) rules. These DFAA rules help design a product that can be assembled automatically by robots, but they are useful even with products assembled by manual assembly. == See also == Design for inspection Design for manufacturability Design for X Design for verification DFMA == Notes == == Further information == For more information on Design for Assembly and the subject of Design for Manufacture and Assembly see: Boothroyd, G. "Assembly Automation and Product Design, 2nd Edition", Taylor and Francis, Boca Raton, Florida, 2005. Boothroyd, G., Dewhurst, P. and Knight, W., "Product Design for Manufacture and Assembly, 2nd Edition", Marcel Dekker, New York, 2002. == External links == "Successful Design for Assembly" - February 26, 2007 article from Assembly Magazine
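The part-count criteria and assembly-time estimates described above are often condensed into a single figure of merit, sometimes called the DFA index or design efficiency. The following Python sketch shows one commonly cited form of that calculation; the 3-second ideal time per part and all of the example numbers are assumptions made for illustration, not values taken from the published DFA tables.

```python
# Illustrative sketch of a DFA-style "design efficiency" (DFA index) calculation.
# The 3-second ideal handling-and-insertion time per part and the example
# figures below are assumptions for illustration, not published DFA data.

IDEAL_TIME_PER_PART = 3.0  # seconds, per theoretically necessary part

def dfa_design_efficiency(theoretical_min_parts: int, total_assembly_time: float) -> float:
    """Ratio of the ideal assembly time for the theoretical minimum number
    of parts to the estimated total manual assembly time (0..1, higher is better)."""
    return (theoretical_min_parts * IDEAL_TIME_PER_PART) / total_assembly_time

# Hypothetical product: 19 parts, of which 12 are theoretically necessary,
# with an estimated manual assembly time of 160 seconds.
efficiency = dfa_design_efficiency(theoretical_min_parts=12, total_assembly_time=160.0)
print(f"DFA design efficiency: {efficiency:.1%}")  # 22.5%
```

A redesign that combines parts or makes them easier to grasp and insert raises the index toward 1.0, which is how this kind of metric exposes assembly cost early in the design process.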
Wikipedia/Design_for_assembly
Feminist design refers to connections between feminist perspectives and design. Feminist design can include feminist perspectives applied to design disciplines like industrial design, graphic design and fashion design, and parallels work like feminist urbanism, feminist HCI and feminist technoscience. Feminist perspectives can touch any aspect of the design project including processes, artifacts and practitioners. == History == There is a long history of feminist activity in design. Early examples include movements for dress reform (mid–19th century) and concepts for utopian feminist cities (late 19th century to the early 20th century). Over time this work has explored topics like beauty, DIY, feminine approaches to architecture, and community-based and grassroots projects, among many examples. Some iconic writing includes Cheryl Buckley's essays on design and patriarchy and Joan Rothschild's Design and feminism: Re-visioning spaces, places, and everyday things. == Scope == Some scholars suggest that all designers should be feminists, drawing on Chimamanda Ngozi Adichie, approaching feminism not only through gender but through power. “Not surprisingly, feminist approaches to design have generally been concerned with the relationship between women and design – how they are affected by it and how their contributions to it are regarded. These two inquiries have been thoroughly investigated in existing literature. Historically, they tended to be based on universal accounts of women, which assumed a cisgender, white, heterosexual, able-bodied woman. Only recently has the work expanded to advance our understanding of the ways in which the impacts of design are felt at the intersections of gender and race, class, and other identities. Most feminist discourse in design seems to imply that the problems raised would not be problems if more designers were women and if their perspectives were valued.” == Feminist insights for design == Isabel Prochner's research explored how feminist perspectives can support positive change in industrial design. She stressed the diversity of feminist perspectives, but also argued that they can help identify systemic social problems and inequities in design and guide socially sustainable and grassroots design solutions. She wrote that feminist perspectives in industrial design often support: "Emphasizing human life and flourishing over output and growth Following best practices in labor/ international production /trade Choosing an empowering workspace Engaging in non-hierarchical/ interdisciplinary/ collaborative work Addressing user needs at multiple levels, including support for pleasure/ fun/ happiness Creating thoughtful products for female users Creating good jobs through production/ execution/ sale of the design solution" == Related pages == Inclusive design Dolores Hayden Intersectionality Data Feminism Nina Paim Futuress Cyberfeminism == Bibliography == Lupton, Ellen; Kafei, Farah; Tobias, Jennifer; Halstead, Josh A.; Sales, Kaleena; Xia, Leslie; Vergara, Valentina (2020-11-05). "Extra Bold". PA Press. Retrieved 2024-04-27. Otto, Elizabeth; Rössler, Patrick (2019). Bauhaus Women: A Global Perspective. Bloomsbury Publishing. ISBN 1912217961. == References ==
Wikipedia/Feminist_design
Fault tolerance is the ability of a system to maintain proper operation despite failures or faults in one or more of its components. This capability is essential for high-availability, mission-critical, or even life-critical systems. Fault tolerance specifically refers to a system's capability to handle faults without any degradation or downtime. In the event of an error, end-users remain unaware of any issues. Conversely, a system that experiences errors with some interruption in service or graceful degradation of performance is termed 'resilient'. In resilience, the system adapts to the error, maintaining service but acknowledging a certain impact on performance. Typically, fault tolerance describes computer systems, ensuring the overall system remains functional despite hardware or software issues. Non-computing examples include structures that retain their integrity despite damage from fatigue, corrosion or impact. == History == The first known fault-tolerant computer was SAPO, built in 1951 in Czechoslovakia by Antonín Svoboda.: 155  Its basic design was magnetic drums connected via relays, with a voting method of memory error detection (triple modular redundancy). Several other machines were developed along this line, mostly for military use. Eventually, they separated into three distinct categories: Machines that would last a long time without any maintenance, such as the ones used on NASA space probes and satellites; Computers that were very dependable but required constant monitoring, such as those used to monitor and control nuclear power plants or supercollider experiments; and Computers with a high amount of runtime that would be under heavy use, such as many of the supercomputers used by insurance companies for their probability monitoring. Most of the development in the so-called LLNM (Long Life, No Maintenance) computing was done by NASA during the 1960s, in preparation for Project Apollo and other research aspects. NASA's first machine went into a space observatory, and their second attempt, the JSTAR computer, was used in Voyager. This computer had a backup of memory arrays to use memory recovery methods and thus it was called the JPL Self-Testing-And-Repairing computer. It could detect its own errors and fix them or bring up redundant modules as needed. The computer is still working, as of early 2022. Hyper-dependable computers were pioneered mostly by aircraft manufacturers,: 210  nuclear power companies, and the railroad industry in the United States. These entities needed computers with massive amounts of uptime that would fail gracefully enough during a fault to allow continued operation, while relying on constant human monitoring of computer output to detect faults. Again, IBM developed the first computer of this kind for NASA for guidance of Saturn V rockets, but later on BNSF, Unisys, and General Electric built their own.: 223  In the 1970s, much work happened in the field. For instance, F14 CADC had built-in self-test and redundancy. In general, the early efforts at fault-tolerant designs were focused mainly on internal diagnosis, where a fault would indicate something was failing and a worker could replace it. SAPO, for instance, had a method by which faulty memory drums would emit a noise before failure. Later efforts showed that to be fully effective, the system had to be self-repairing and diagnosing – isolating a fault and then implementing a redundant backup while alerting a need for repair. 
This is known as N-model redundancy, where faults cause automatic fail-safes and a warning to the operator, and it is still the most common form of level one fault-tolerant design in use today. Voting was another initial method, as discussed above, with multiple redundant backups operating constantly and checking each other's results. For example, if four components reported an answer of 5 and one component reported an answer of 6, the other four would "vote" that the fifth component was faulty and have it taken out of service. This is called M out of N majority voting. Historically, the trend has been to move away from N-model and toward M out of N, as systems grew more complex and it became harder to ensure that the transition from a fault-negative to a fault-positive state did not disrupt operations. Tandem Computers (in 1976) and Stratus were among the first companies specializing in the design of fault-tolerant computer systems for online transaction processing. == Examples == Hardware fault tolerance sometimes requires that broken parts be taken out and replaced with new parts while the system is still operational (in computing known as hot swapping). Such a system implemented with a single backup is known as single point tolerant and represents the vast majority of fault-tolerant systems. In such systems the mean time between failures should be long enough for the operators to have sufficient time to fix the broken devices (mean time to repair) before the backup also fails. It is helpful if the time between failures is as long as possible, but this is not specifically required in a fault-tolerant system. Fault tolerance is notably successful in computer applications. Tandem Computers built their entire business on such machines, which used single-point tolerance to create their NonStop systems with uptimes measured in years. Fail-safe architectures may also encompass the computer software, for example by process replication. Data formats may also be designed to degrade gracefully. HTML, for example, is designed to be forward compatible, allowing Web browsers to ignore new and unsupported HTML entities without causing the document to be unusable. Additionally, some sites, including popular platforms such as Twitter (until December 2020), provide an optional lightweight front end that does not rely on JavaScript and has a minimal layout, to ensure wide accessibility and outreach, such as on game consoles with limited web browsing capabilities. == Terminology == A highly fault-tolerant system might continue at the same level of performance even though one or more components have failed. For example, a building with a backup electrical generator will provide the same voltage to wall outlets even if the grid power fails. A system that is designed to fail safe, or fail-secure, or fail gracefully, whether it functions at a reduced level or fails completely, does so in a way that protects people, property, or data from injury, damage, intrusion, or disclosure. In computers, a program might fail-safe by executing a graceful exit (as opposed to an uncontrolled crash) to prevent data corruption after an error occurs. A similar distinction is made between "failing well" and "failing badly". A system designed to experience graceful degradation, or to fail soft (used in computing, similar to "fail safe"), operates at a reduced level of performance after some component fails. For example, if grid power fails, a building may operate lighting at reduced levels or elevators at reduced speeds.
In computing, if insufficient network bandwidth is available to stream an online video, a lower-resolution version might be streamed in place of the high-resolution version. Progressive enhancement is another example, where web pages are available in a basic functional format for older, small-screen, or limited-capability web browsers, but in an enhanced version for browsers capable of handling additional technologies or that have a larger display. In fault-tolerant computer systems, programs that are considered robust are designed to continue operation despite an error, exception, or invalid input, instead of crashing completely. Software brittleness is the opposite of robustness. Resilient networks continue to transmit data despite the failure of some links or nodes. Resilient buildings and infrastructure are likewise expected to prevent complete failure in situations like earthquakes, floods, or collisions. A system with high failure transparency will alert users that a component failure has occurred, even if it continues to operate with full performance, so that failure can be repaired or imminent complete failure anticipated. Likewise, a fail-fast component is designed to report at the first point of failure, rather than generating reports when downstream components fail. This allows easier diagnosis of the underlying problem, and may prevent improper operation in a broken state. A single fault condition is a situation where one means for protection against a hazard is defective. If a single fault condition results unavoidably in another single fault condition, the two failures are considered one single fault condition. A source offers the following example: A single-fault condition is a condition when a single means for protection against hazard in equipment is defective or a single external abnormal condition is present, e.g. short circuit between the live parts and the applied part. == Criteria == Providing fault-tolerant design for every component is normally not an option. Associated redundancy brings a number of penalties: increase in weight, size, power consumption, cost, as well as time to design, verify, and test. Therefore, a number of choices have to be examined to determine which components should be fault tolerant: How critical is the component? In a car, the radio is not critical, so this component has less need for fault tolerance. How likely is the component to fail? Some components, like the drive shaft in a car, are not likely to fail, so no fault tolerance is needed. How expensive is it to make the component fault tolerant? Requiring a redundant car engine, for example, would likely be too expensive both economically and in terms of weight and space, to be considered. An example of a component that passes all the tests is a car's occupant restraint system. While the primary occupant restraint system is not normally thought of, it is gravity. If the vehicle rolls over or undergoes severe g-forces, then this primary method of occupant restraint may fail. Restraining the occupants during such an accident is absolutely critical to safety, so the first test is passed. Accidents causing occupant ejection were quite common before seat belts, so the second test is passed. The cost of a redundant restraint method like seat belts is quite low, both economically and in terms of weight and space, so the third test is passed. Therefore, adding seat belts to all vehicles is an excellent idea. 
Other "supplemental restraint systems", such as airbags, are more expensive and so pass that test by a smaller margin. Another excellent and long-term example of this principle being put into practice is the braking system: whilst the actual brake mechanisms are critical, they are not particularly prone to sudden (rather than progressive) failure, and are in any case necessarily duplicated to allow even and balanced application of brake force to all wheels. It would also be prohibitively costly to further double-up the main components and they would add considerable weight. However, the similarly critical systems for actuating the brakes under driver control are inherently less robust, generally using a cable (can rust, stretch, jam, snap) or hydraulic fluid (can leak, boil and develop bubbles, absorb water and thus lose effectiveness). Thus in most modern cars the footbrake hydraulic brake circuit is diagonally divided to give two smaller points of failure, the loss of either only reducing brake power by 50% and not causing as much dangerous brakeforce imbalance as a straight front-back or left-right split, and should the hydraulic circuit fail completely (a relatively very rare occurrence), there is a failsafe in the form of the cable-actuated parking brake that operates the otherwise relatively weak rear brakes, but can still bring the vehicle to a safe halt in conjunction with transmission/engine braking so long as the demands on it are in line with normal traffic flow. The cumulatively unlikely combination of total foot brake failure with the need for harsh braking in an emergency will likely result in a collision, but still one at lower speed than would otherwise have been the case. In comparison with the foot pedal activated service brake, the parking brake itself is a less critical item, and unless it is being used as a one-time backup for the footbrake, will not cause immediate danger if it is found to be nonfunctional at the moment of application. Therefore, no redundancy is built into it per se (and it typically uses a cheaper, lighter, but less hardwearing cable actuation system), and it can suffice, if this happens on a hill, to use the footbrake to momentarily hold the vehicle still, before driving off to find a flat piece of road on which to stop. Alternatively, on shallow gradients, the transmission can be shifted into Park, Reverse or First gear, and the transmission lock / engine compression used to hold it stationary, as there is no need for them to include the sophistication to first bring it to a halt. On motorcycles, a similar level of fail-safety is provided by simpler methods; first, the front and rear brake systems are entirely separate, regardless of their method of activation (that can be cable, rod or hydraulic), allowing one to fail entirely while leaving the other unaffected. Second, the rear brake is relatively strong compared to its automotive cousin, being a powerful disc on some sports models, even though the usual intent is for the front system to provide the vast majority of braking force; as the overall vehicle weight is more central, the rear tire is generally larger and has better traction, so that the rider can lean back to put more weight on it, therefore allowing more brake force to be applied before the wheel locks. 
On cheaper, slower utility-class machines, even if the front wheel should use a hydraulic disc for extra brake force and easier packaging, the rear will usually be a primitive, somewhat inefficient, but exceptionally robust rod-actuated drum, thanks to the ease of connecting the footpedal to the wheel in this way and, more importantly, the near impossibility of catastrophic failure even if the rest of the machine, like a lot of low-priced bikes after their first few years of use, is on the point of collapse from neglected maintenance. == Requirements == The basic characteristics of fault tolerance require: No single point of failure – If a system experiences a failure, it must continue to operate without interruption during the repair process. Fault isolation to the failing component – When a failure occurs, the system must be able to isolate the failure to the offending component. This requires the addition of dedicated failure detection mechanisms that exist only for the purpose of fault isolation. Recovery from a fault condition requires classifying the fault or failing component. The National Institute of Standards and Technology (NIST) categorizes faults based on locality, cause, duration, and effect. Fault containment to prevent propagation of the failure – Some failure mechanisms can cause a system to fail by propagating the failure to the rest of the system. An example of this kind of failure is the "rogue transmitter" that can swamp legitimate communication in a system and cause overall system failure. Firewalls or other mechanisms that isolate a rogue transmitter or failing component to protect the system are required. Availability of reversion modes In addition, fault-tolerant systems are characterized in terms of both planned service outages and unplanned service outages. These are usually measured at the application level and not just at a hardware level. The figure of merit is called availability and is expressed as a percentage. For example, a five nines system would statistically provide 99.999% availability. Fault-tolerant systems are typically based on the concept of redundancy. == Fault tolerance techniques == Research into the kinds of tolerances needed for critical systems involves a large amount of interdisciplinary work. The more complex the system, the more carefully all possible interactions have to be considered and prepared for. Considering the importance of high-value systems in transport, public utilities and the military, the field of topics that touch on research is very wide: it can include such obvious subjects as software modeling and reliability, or hardware design, to arcane elements such as stochastic models, graph theory, formal or exclusionary logic, parallel processing, remote data transmission, and more. === Replication === Spare components address the first fundamental characteristic of fault tolerance in three ways: Replication: Providing multiple identical instances of the same system or subsystem, directing tasks or requests to all of them in parallel, and choosing the correct result on the basis of a quorum; Redundancy: Providing multiple identical instances of the same system and switching to one of the remaining instances in case of a failure (failover); Diversity: Providing multiple different implementations of the same specification, and using them like replicated systems to cope with errors in a specific implementation. 
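To make the replication-with-quorum idea in the list above concrete (and the M-out-of-N majority voting described earlier in this article), the following is a minimal Python sketch; the function names and the five-replica scenario are illustrative assumptions rather than part of any particular fault-tolerant system.

```python
# Minimal sketch of replication with M-out-of-N majority voting: the same
# request is sent to several replicas and the answer reported by a majority
# (the quorum) wins. Function names and the scenario are illustrative assumptions.

from collections import Counter

def vote(results):
    """Return the majority answer, or None if no strict majority exists."""
    answer, count = Counter(results).most_common(1)[0]
    return answer if count > len(results) / 2 else None

def disagreeing_replicas(results, majority_answer):
    """Indices of replicas whose output disagrees with the majority answer."""
    return [i for i, r in enumerate(results) if r != majority_answer]

# Five replicas compute the same request; one has silently failed.
results = [5, 5, 6, 5, 5]
answer = vote(results)                            # -> 5 (a 4-out-of-5 majority)
suspects = disagreeing_replicas(results, answer)  # -> [2], a candidate for removal from service
print(answer, suspects)
```

In hardware the same comparison is carried out by a voting circuit on every cycle or transaction, as described in the lockstep discussion that follows.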
All implementations of RAID, redundant array of independent disks, except RAID 0, are examples of a fault-tolerant storage device that uses data redundancy. A lockstep fault-tolerant machine uses replicated elements operating in parallel. At any time, all the replications of each element should be in the same state. The same inputs are provided to each replication, and the same outputs are expected. The outputs of the replications are compared using a voting circuit. A machine with two replications of each element is termed dual modular redundant (DMR). The voting circuit can then only detect a mismatch, and recovery relies on other methods. A machine with three replications of each element is termed triple modular redundant (TMR). The voting circuit can determine which replication is in error when a two-to-one vote is observed. In this case, the voting circuit can output the correct result, and discard the erroneous version. After this, the internal state of the erroneous replication is assumed to be different from that of the other two, and the voting circuit can switch to a DMR mode. This model can be applied to any larger number of replications. Lockstep fault-tolerant machines are most easily made fully synchronous, with each gate of each replication making the same state transition on the same edge of the clock, and the clocks to the replications being exactly in phase. However, it is possible to build lockstep systems without this requirement. Bringing the replications into synchrony requires making their internal stored states the same. They can be started from a fixed initial state, such as the reset state. Alternatively, the internal state of one replica can be copied to another replica. One variant of DMR is pair-and-spare. Two replicated elements operate in lockstep as a pair, with a voting circuit that detects any mismatch between their operations and outputs a signal indicating that there is an error. Another pair operates exactly the same way. A final circuit selects the output of the pair that does not proclaim that it is in error. Pair-and-spare requires four replicas rather than the three of TMR, but has been used commercially. === Failure-oblivious computing === Failure-oblivious computing is a technique that enables computer programs to continue executing despite errors. The technique can be applied in different contexts. It can handle invalid memory reads by returning a manufactured value to the program, which, in turn, makes use of the manufactured value and ignores the memory value it originally tried to access; this is in great contrast to typical memory checkers, which inform the program of the error or abort the program. The approach has performance costs: because the technique rewrites code to insert dynamic checks for address validity, execution time will increase by 80% to 500%. === Recovery shepherding === Recovery shepherding is a lightweight technique to enable software programs to recover from otherwise fatal errors such as null pointer dereference and divide by zero. Compared to the failure-oblivious computing technique, recovery shepherding works on the compiled program binary directly and does not require recompiling the program. It uses the just-in-time binary instrumentation framework Pin.
It attaches to the application process when an error occurs, repairs the execution, tracks the repair effects as the execution continues, contains the repair effects within the application process, and detaches from the process after all repair effects are flushed from the process state. It does not interfere with the normal execution of the program and therefore incurs negligible overhead. For 17 of 18 systematically collected real world null-dereference and divide-by-zero errors, a prototype implementation enables the application to continue to execute to provide acceptable output and service to its users on the error-triggering inputs. === Circuit breaker === The circuit breaker design pattern is a technique to avoid catastrophic failures in distributed systems. == Redundancy == Redundancy is the provision of functional capabilities that would be unnecessary in a fault-free environment. This can consist of backup components that automatically "kick in" if one component fails. For example, large cargo trucks can lose a tire without any major consequences. They have many tires, and no one tire is critical (with the exception of the front tires, which are used to steer, but generally carry less load, each and in total, than the other four to 16, so are less likely to fail). The idea of incorporating redundancy in order to improve the reliability of a system was pioneered by John von Neumann in the 1950s. Two kinds of redundancy are possible: space redundancy and time redundancy. Space redundancy provides additional components, functions, or data items that are unnecessary for fault-free operation. Space redundancy is further classified into hardware, software and information redundancy, depending on the type of redundant resources added to the system. In time redundancy the computation or data transmission is repeated and the result is compared to a stored copy of the previous result. The current terminology for this kind of testing is referred to as 'In Service Fault Tolerance Testing or ISFTT for short. == Disadvantages == Fault-tolerant design's advantages are obvious, while many of its disadvantages are not: Interference with fault detection in the same component. To continue the above passenger vehicle example, with either of the fault-tolerant systems it may not be obvious to the driver when a tire has been punctured. This is usually handled with a separate "automated fault-detection system". In the case of the tire, an air pressure monitor detects the loss of pressure and notifies the driver. The alternative is a "manual fault-detection system", such as manually inspecting all tires at each stop. Interference with fault detection in another component. Another variation of this problem is when fault tolerance in one component prevents fault detection in a different component. For example, if component B performs some operation based on the output from component A, then fault tolerance in B can hide a problem with A. If component B is later changed (to a less fault-tolerant design) the system may fail suddenly, making it appear that the new component B is the problem. Only after the system has been carefully scrutinized will it become clear that the root problem is actually with component A. Reduction of priority of fault correction. Even if the operator is aware of the fault, having a fault-tolerant system is likely to reduce the importance of repairing the fault. 
If the faults are not corrected, this will eventually lead to system failure, when the fault-tolerant component fails completely or when all redundant components have also failed. Test difficulty. For certain critical fault-tolerant systems, such as a nuclear reactor, there is no easy way to verify that the backup components are functional. The most infamous example of this is Chernobyl, where operators tested the emergency backup cooling by disabling primary and secondary cooling. The backup failed, resulting in a core meltdown and massive release of radiation. Cost. Both fault-tolerant components and redundant components tend to increase cost. This can be a purely economic cost or can include other measures, such as weight. Crewed spaceships, for example, have so many redundant and fault-tolerant components that their weight is increased dramatically over uncrewed systems, which do not require the same level of safety. Inferior components. A fault-tolerant design may allow for the use of inferior components, which would have otherwise made the system inoperable. While this practice has the potential to mitigate the cost increase, use of multiple inferior components may lower the reliability of the system to a level equal to, or even worse than, a comparable non-fault-tolerant system. == Related terms == There is a difference between fault tolerance and systems that rarely have problems. For instance, the Western Electric crossbar systems had failure rates of two hours per forty years, and therefore were highly fault resistant. But when a fault did occur they still stopped operating completely, and therefore were not fault tolerant. == See also == == References ==
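As a quick illustration of the availability figure of merit mentioned in the Requirements section above, the short sketch below converts an availability percentage into the downtime it permits per year; the 365.25-day year is an assumption made only for the arithmetic.

```python
# Quick illustration of the availability figure of merit from the Requirements
# section: converting an availability percentage into the downtime it permits
# per year. A 365.25-day year is assumed for the arithmetic.

MINUTES_PER_YEAR = 365.25 * 24 * 60

def downtime_minutes_per_year(availability_percent: float) -> float:
    """Allowed downtime, in minutes per year, for a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

for availability in (99.0, 99.9, 99.99, 99.999):
    print(f"{availability:7.3f}% -> {downtime_minutes_per_year(availability):8.1f} min/year")
# "Five nines" (99.999%) permits only about 5.3 minutes of downtime per year.
```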
Wikipedia/Fault-tolerant_design
Vector graphics are a form of computer graphics in which visual images are created directly from geometric shapes defined on a Cartesian plane, such as points, lines, curves and polygons. The associated mechanisms may include vector display and printing hardware, vector data models and file formats, as well as the software based on these data models (especially graphic design software, computer-aided design, and geographic information systems). Vector graphics are an alternative to raster or bitmap graphics, with each having advantages and disadvantages in specific situations. While vector hardware has largely disappeared in favor of raster-based monitors and printers, vector data and software continue to be widely used, especially when a high degree of geometric precision is required, and when complex information can be decomposed into simple geometric primitives. Thus, it is the preferred model for domains such as engineering, architecture, surveying, 3D rendering, and typography, but is entirely inappropriate for applications such as photography and remote sensing, where raster is more effective and efficient. Some application domains, such as geographic information systems (GIS) and graphic design, use both vector and raster graphics at times, depending on purpose. Vector graphics are based on the mathematics of analytic or coordinate geometry, and is not related to other mathematical uses of the term vector. This can lead to some confusion in disciplines in which both meanings are used. == Data model == The logical data model of vector graphics is based on the mathematics of coordinate geometry, in which shapes are defined as a set of points in a two- or three-dimensional cartesian coordinate system, as p = (x, y) or p = (x, y, z). Because almost all shapes consist of an infinite number of points, the vector model defines a limited set of geometric primitives that can be specified using a finite sample of salient points called vertices. For example, a square can be unambiguously defined by the locations of three of its four corners, from which the software can interpolate the connecting boundary lines and the interior space. Because it is a regular shape, a square could also be defined by the location of one corner, a size (width=height), and a rotation angle. The fundamental geometric primitives are: A single point. A line segment, defined by two end points, allowing for a simple linear interpolation of the intervening line. A polygonal chain or polyline, a connected set of line segments, defined by an ordered list of points. A polygon, representing a region of space, defined by its boundary, a polyline with coincident starting and ending vertices. A variety of more complex shapes may be supported: Parametric curves, in which polylines or polygons are augmented with parameters to define a non-linear interpolation between vertices, including circular arcs, cubic splines, Catmull–Rom splines, Bézier curves and bezigons. Standard parametric shapes in two or three dimensions, such as circles, ellipses, squares, superellipses, spheres, tetrahedrons, superellipsoids, etc. Irregular three-dimensional surfaces and solids, are usually defined as a connected set of polygons (e.g., a polygon mesh) or as parametric surfaces (e.g., NURBS). Fractals, often defined as an iterated function system. In many vector datasets, each shape can be combined with a set of properties. The most common are visual characteristics, such as color, line weight, or dash pattern. 
In systems in which shapes represent real-world features, such as GIS and BIM, a variety of attributes of each represented feature can be stored, such as name, age, size, and so on. In some Vector data, especially in GIS, information about topological relationships between objects may be represented in the data model, such as tracking the connections between road segments in a transport network. If a dataset stored in one vector file format is converted to another file format that supports all the primitive objects used in that particular image, then the conversion can be lossless. == Vector display hardware == Vector-based devices, such as the vector CRT and the pen plotter, directly control a drawing mechanism to produce geometric shapes. Since vector display devices can define a line by dealing with just two points (that is, the coordinates of each end of the line), the device can reduce the total amount of data it must deal with by organizing the image in terms of pairs of points. Vector graphic displays were first used in 1958 by the US SAGE air defense system. Vector graphics systems were retired from the U.S. en route air traffic control in 1999. Vector graphics were also used on the TX-2 at the Massachusetts Institute of Technology Lincoln Laboratory by computer graphics pioneer Ivan Sutherland to run his program Sketchpad in 1963. Subsequent vector graphics systems, most of which iterated through dynamically modifiable stored lists of drawing instructions, include the IBM 2250, Imlac PDS-1, and DEC GT40. There was a video game console that used vector graphics called Vectrex as well as various arcade games like Asteroids, Space Wars, Tempest and many cinematronics titles such as Rip Off, and Tail Gunner using vector monitors. Storage scope displays, such as the Tektronix 4014, could display vector images but not modify them without first erasing the display. However, these were never as widely used as the raster-based scanning displays used for television, and had largely disappeared by the mid-1980s except for specialized applications. Plotters used in technical drawing still draw vectors directly to paper by moving a pen as directed through the two-dimensional space of the paper. However, as with monitors, these have largely been replaced by the wide-format printer that prints a raster image (which may be rendered from vector data). == Software == Because this model is useful in a variety of application domains, many different software programs have been created for drawing, manipulating, and visualizing vector graphics. While these are all based on the same basic vector data model, they can interpret and structure shapes very differently, using very different file formats. Graphic design and illustration, using a vector graphics editor or graphic art software such as Adobe Illustrator. See Comparison of vector graphics editors for capabilities. Geographic information systems (GIS), which can represent a geographic feature by a combination of a vector shape and a set of attributes. GIS includes vector editing, mapping, and vector spatial analysis capabilities. Computer-aided design (CAD), used in engineering, architecture, and surveying. Building information modeling (BIM) models add attributes to each shape, similar to a GIS. 3D computer graphics software, including computer animation. 
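A rough sketch of the data model described above might pair each primitive with its visual properties and serialize the result to SVG, the W3C format discussed in the next section. The class names and example shapes below are illustrative assumptions, not part of any particular vector graphics package.

```python
# Rough sketch of a vector data model: each shape is a geometric primitive
# defined by a few vertices plus visual properties, and can be serialized to
# SVG. Class names and the example shapes are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Polyline:
    points: list           # ordered list of (x, y) vertices
    stroke: str = "black"  # visual properties attached to the shape
    width: float = 1.0

    def to_svg(self) -> str:
        pts = " ".join(f"{x},{y}" for x, y in self.points)
        return (f'<polyline points="{pts}" fill="none" '
                f'stroke="{self.stroke}" stroke-width="{self.width}"/>')

@dataclass
class Circle:
    cx: float
    cy: float
    r: float
    stroke: str = "black"
    fill: str = "none"

    def to_svg(self) -> str:
        return (f'<circle cx="{self.cx}" cy="{self.cy}" r="{self.r}" '
                f'stroke="{self.stroke}" fill="{self.fill}"/>')

shapes = [Polyline([(0, 0), (40, 0), (40, 30)]), Circle(20, 15, 10, fill="red")]
svg = ('<svg xmlns="http://www.w3.org/2000/svg" width="50" height="40">\n'
       + "\n".join(s.to_svg() for s in shapes) + "\n</svg>")
print(svg)  # a complete, resolution-independent description of the drawing
```

Because only the vertices and properties are stored, the same description can be rendered at any size or resolution without loss of precision.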
== File formats == Vector graphics are commonly found today in the SVG, WMF, EPS, PDF, CDR or AI types of graphic file formats, and are intrinsically different from the more common raster graphics file formats such as JPEG, PNG, APNG, GIF, WebP, BMP and MPEG4. The World Wide Web Consortium (W3C) standard for vector graphics is Scalable Vector Graphics (SVG). The standard is complex and has been relatively slow to be established at least in part owing to commercial interests. Many web browsers now have some support for rendering SVG data but full implementations of the standard are still comparatively rare. In recent years, SVG has become a significant format that is completely independent of the resolution of the rendering device, typically a printer or display monitor. SVG files are essentially printable text that describes both straight and curved paths, as well as other attributes. Wikipedia prefers SVG for images such as simple maps, line illustrations, coats of arms, and flags, which generally are not like photographs or other continuous-tone images. Rendering SVG requires conversion to a raster format at a resolution appropriate for the current task. SVG is also a format for animated graphics. There is also a version of SVG for mobile phones called SVGT (SVG Tiny version). These images can count links and also exploit anti-aliasing. They can also be displayed as wallpaper. CAD software uses its own vector data formats, usually proprietary formats created by software vendors, such as Autodesk's DWG and public exchange formats such as DXF. Hundreds of distinct vector file formats have been created for GIS data over its history, including proprietary formats like the Esri file geodatabase, proprietary but public formats like the Shapefile and the original KML, open source formats like GeoJSON, and formats created by standards bodies like Simple Features and GML from the Open Geospatial Consortium. === Conversion === ==== To raster ==== Modern displays and printers are raster devices; vector formats have to be converted to a raster format (bitmaps – pixel arrays) before they can be rendered (displayed or printed). The size of the bitmap/raster-format file generated by the conversion will depend on the resolution required, but the size of the vector file generating the bitmap/raster file will always remain the same. Thus, it is easy to convert from a vector file to a range of bitmap/raster file formats but it is much more difficult to go in the opposite direction, especially if subsequent editing of the vector picture is required. It might be an advantage to save an image created from a vector source file as a bitmap/raster format, because different systems have different (and incompatible) vector formats, and some might not support vector graphics at all. However, once a file is converted from the vector format, it is likely to be bigger, and it loses the advantage of scalability without loss of resolution. It will also no longer be possible to edit individual parts of the image as discrete objects. The file size of a vector graphic image depends on the number of graphic elements it contains; it is a list of descriptions. ==== From raster ==== === Printing === Vector art is ideal for printing since the art is made from a series of mathematical curves; it will print very crisply even when resized. For instance, one can print a vector logo on a small sheet of copy paper, and then enlarge the same vector logo to billboard size and keep the same crisp quality. 
A low-resolution raster graphic would blur or pixelate excessively if it were enlarged from business card size to billboard size. (The precise resolution of a raster graphic necessary for high-quality results depends on the viewing distance; e.g., a billboard may still appear to be of high quality even at low resolution if the viewing distance is great enough.) If we regard typographic characters as images, then the same considerations that we have made for graphics apply even to the composition of written text for printing (typesetting). Older character sets were stored as bitmaps. Therefore, to achieve maximum print quality they had to be used at a given resolution only; these font formats are said to be non-scalable. High-quality typography is nowadays based on character drawings (fonts) which are typically stored as vector graphics, and as such are scalable to any size. Examples of these vector formats for characters are Postscript fonts and TrueType fonts. == Operation == Advantages of this style of drawing over raster graphics: Because vector graphics consist of coordinates with lines/curves between them, the size of the representation does not depend on the dimensions of the object. This minimal amount of information translates to a much smaller file size compared to large raster images which are defined pixel by pixel. This said, a vector graphic with a small file size is often said to lack detail compared with a real-world photo. Correspondingly, one can infinitely zoom in on e.g., a circle arc, and it remains smooth. On the other hand, a polygon representing a curve will reveal being not really curved. On zooming in, lines and curves need not get wider proportionally. Often the width is either not increased or less than proportional. On the other hand, irregular curves represented by simple geometric shapes may be made proportionally wider when zooming in, to keep them looking smooth and not like these geometric shapes. The parameters of objects are stored and can be later modified. This means that moving, scaling, rotating, filling, etc. does not degrade the quality of a drawing. Moreover, it is usual to specify the dimensions in device-independent units, which results in the best possible rasterization on raster devices. From a 3-D perspective, rendering shadows is also much more realistic with vector graphics, as shadows can be abstracted into the rays of light from which they are formed. This allows for photorealistic images and renderings. For example, consider a circle of radius r. The main pieces of information a program needs in order to draw this circle are An indication that what is to be drawn is a circle the radius r the location of the center point of the circle stroke line style and color (possibly transparent) fill style and color (possibly transparent) Vector formats are not always appropriate in graphics work and also have numerous disadvantages. For example, devices such as cameras and scanners produce essentially continuous-tone raster graphics that are impractical to convert into vectors, and so for this type of work, an image editor will operate on the pixels rather than on drawing objects defined by mathematical expressions. Comprehensive graphics tools will combine images from vector and raster sources, and may provide editing tools for both, since some parts of an image could come from a camera source, and others could have been drawn using vector tools. Some authors have criticized the term vector graphics as being confusing. 
In particular, vector graphics does not simply refer to graphics described by Euclidean vectors. Some authors have proposed to use object-oriented graphics instead. However this term can also be confusing as it can be read as any kind of graphics implemented using object-oriented programming. == Vector operations == Vector graphics editors typically allow translation, rotation, mirroring, stretching, skewing, affine transformations, changing of z-order (loosely, what's in front of what) and combination of primitives into more complex objects. More sophisticated transformations include set operations on closed shapes (union, difference, intersection, etc.). In SVG, the composition operations are based on alpha composition. Vector graphics are ideal for simple or composite drawings that need to be device-independent, or do not need to achieve photo-realism. For example, the PostScript and PDF page description languages use a vector graphics model. == Vector image repositories == Many stock photo websites provide vectorized versions of hosted images, while specific repositories specialize in vector images given their growing popularity among graphic designers. == See also == == Notes == == References == Barr, Alan H. (July 1984). "Global and local deformations of solid primitives" (PDF). Proceedings of the 11th annual conference on Computer graphics and interactive techniques. Vol. 18. pp. 21–30. CiteSeerX 10.1.1.67.6046. doi:10.1145/800031.808573. ISBN 0897911385. S2CID 16162806. Retrieved July 31, 2020. Gharachorloo, Nader; Gupta, Satish; Sproull, Robert F.; Sutherland, Ivan E. (July 1989). "A characterization of ten rasterization techniques" (PDF). Proceedings of the 16th annual conference on Computer graphics and interactive techniques. Vol. 23. pp. 355–368. CiteSeerX 10.1.1.105.461. doi:10.1145/74333.74370. ISBN 0201504340. S2CID 8253227. Retrieved July 28, 2020. Murray, Stephen (2002). "Graphic Devices". In Roger R. Flynn (ed.). Computer Sciences, Vol 2: Software and Hardware, Macmillan Reference USA. Gale eBooks. Retrieved August 3, 2020. == External links == Media related to Vector graphics at Wikimedia Commons
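The vector-to-raster conversion discussed in the Conversion section can be illustrated with a minimal sketch that samples a line segment onto a small pixel grid. This is a simplified illustration only; production rasterizers use more refined algorithms (such as Bresenham's line algorithm) together with anti-aliasing.

```python
# Minimal sketch of vector-to-raster conversion: a line segment (a vector
# primitive defined only by its two end points) is sampled onto a fixed grid
# of pixels. Illustrative only; real rasterizers use more refined algorithms
# (e.g. Bresenham's line algorithm) and anti-aliasing.

def rasterize_segment(p0, p1, width, height):
    """Return a width x height grid of 0/1 pixels approximating segment p0-p1."""
    grid = [[0] * width for _ in range(height)]
    steps = int(max(abs(p1[0] - p0[0]), abs(p1[1] - p0[1]), 1))
    for i in range(steps + 1):
        t = i / steps
        x = round(p0[0] + t * (p1[0] - p0[0]))
        y = round(p0[1] + t * (p1[1] - p0[1]))
        if 0 <= x < width and 0 <= y < height:
            grid[y][x] = 1
    return grid

# The vector description stays the same at any resolution; only the pixel
# output depends on the grid size chosen here.
for row in rasterize_segment((0, 0), (15, 5), 16, 6):
    print("".join(".#"[p] for p in row))
```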
Wikipedia/Vector_graphics
Freeform surface modelling is a technique for engineering freeform surfaces with a CAD or CAID system. The technology has encompassed two main fields. Either creating aesthetic surfaces (class A surfaces) that also perform a function; for example, car bodies and consumer product outer forms, or technical surfaces for components such as gas turbine blades and other fluid dynamic engineering components. CAD software packages use two basic methods for the creation of surfaces. The first begins with construction curves (splines) from which the 3D surface is then swept (section along guide rail) or meshed (lofted) through. The second method is direct creation of the surface with manipulation of the surface poles/control points. From these initially created surfaces, other surfaces are constructed using either derived methods such as offset or angled extensions from surfaces; or via bridging and blending between groups of surfaces. == Surfaces == Freeform surface, or freeform surfacing, is used in CAD and other computer graphics software to describe the skin of a 3D geometric element. Freeform surfaces do not have rigid radial dimensions, unlike regular surfaces such as planes, cylinders and conic surfaces. They are used to describe forms such as turbine blades, car bodies and boat hulls. Initially developed for the automotive and aerospace industries, freeform surfacing is now widely used in all engineering design disciplines from consumer goods products to ships. Most systems today use nonuniform rational B-spline (NURBS) mathematics to describe the surface forms; however, there are other methods such as Gordon surfaces or Coons surfaces . The forms of freeform surfaces (and curves) are not stored or defined in CAD software in terms of polynomial equations, but by their poles, degree, and number of patches (segments with spline curves). The degree of a surface determines its mathematical properties, and can be seen as representing the shape by a polynomial with variables to the power of the degree value. For example, a surface with a degree of 1 would be a flat cross section surface. A surface with degree 2 would be curved in one direction, while a degree 3 surface could (but does not necessarily) change once from concave to convex curvature. Some CAD systems use the term order instead of degree. The order of a polynomial is one greater than the degree, and gives the number of coefficients rather than the greatest exponent. The poles (sometimes known as control points) of a surface define its shape. The natural surface edges are defined by the positions of the first and last poles. (Note that a surface can have trimmed boundaries.) The intermediate poles act like magnets drawing the surface in their direction. The surface does not, however, go through these points. The second and third poles as well as defining shape, respectively determine the start and tangent angles and the curvature. In a single patch surface (Bézier surface), there is one more pole than the degree values of the surface. Surface patches can be merged into a single NURBS surface; at these points are knot lines. The number of knots will determine the influence of the poles on either side and how smooth the transition is. The smoothness between patches, known as continuity, is often referred to in terms of a C value: C0: just touching, could have a nick C1: tangent, but could have sudden change in curvature C2: the patches are curvature continuous to one another Two more important aspects are the U and V parameters. 
These are values on the surface ranging from 0 to 1, used in the mathematical definition of the surface and for defining paths on the surface: for example, a trimmed boundary edge. Note that they are not proportionally spaced along the surface. A curve of constant U or constant V is known as an isoperimetric curve, or U (V) line. In CAD systems, surfaces are often displayed with their poles of constant U or constant V values connected together by lines; these are known as control polygons. == Modelling == When defining a form, an important factor is the continuity between surfaces - how smoothly they connect to one another. One example of where surfacing excels is automotive body panels. Just blending two curved areas of the panel with different radii of curvature together, maintaining tangential continuity (meaning that the blended surface doesn't change direction suddenly, but smoothly) won't be enough. They need to have a continuous rate of curvature change between the two sections, or else their reflections will appear disconnected. The continuity is defined using the terms: G0 – position (touching) G1 – tangent (angle) G2 – curvature (radius) G3 – acceleration (rate of change of curvature) To achieve a high quality NURBS or Bézier surface, degrees of 5 or greater are generally used. == Freeform surface modelling software == == See also == == References ==
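To make the pole, degree and U/V parameter description above concrete, the following sketch evaluates a point on a single-patch (Bézier) surface using Bernstein polynomials. The 4×4 grid of poles, giving degree 3 in each direction, is an illustrative assumption.

```python
# Minimal sketch of evaluating a point on a single-patch (Bezier) surface at
# parameters (u, v) using Bernstein polynomials. The 4x4 grid of poles
# (degree 3 in each direction) is an illustrative assumption.

from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1 - t) ** (n - i)

def bezier_surface_point(poles, u, v):
    """poles is an (n+1) x (m+1) grid of (x, y, z) control points; 0 <= u, v <= 1."""
    n, m = len(poles) - 1, len(poles[0]) - 1
    point = [0.0, 0.0, 0.0]
    for i in range(n + 1):
        for j in range(m + 1):
            weight = bernstein(n, i, u) * bernstein(m, j, v)
            for k in range(3):
                point[k] += weight * poles[i][j][k]
    return tuple(point)

# A gently domed 4x4 patch: only the four interior poles are lifted to z = 1.
poles = [[(x, y, 1.0 if 0 < x < 3 and 0 < y < 3 else 0.0) for y in range(4)]
         for x in range(4)]
print(bezier_surface_point(poles, 0.0, 0.0))  # (0.0, 0.0, 0.0): the surface passes through corner poles
print(bezier_surface_point(poles, 0.5, 0.5))  # (1.5, 1.5, 0.5625): pulled toward, but below, the interior poles
```

Note how the evaluated surface passes through the corner poles but is only drawn toward the interior ones, matching the "magnet" behaviour of poles described above.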
Wikipedia/Freeform_surface_modelling
A design brief is a document for a design project developed by a designer in consultation with a client. The brief outlines the deliverables and scope of the project, including any products or works, function and aesthetics, as well as timing and budget. They can be used in many fields, including architecture, interior design and industrial design. Design briefs are also used to evaluate the effectiveness of a design after it has been produced and during the creation process to keep the project on track. They usually change over time and are adjusted as the project scope evolves. In the project management frameworks PRINCE2, a project brief is a document established in the startup process of the project and before the project starts, and is used as a foundation for the project initiation documentation. == See also == Creative brief Product design specification, a document that describes design specifications == External links == Design Brief examples, templates and video guides
Wikipedia/Design_brief
A design sprint is a time-constrained, five-phase process that uses design thinking with the aim of reducing the risk when bringing a new product, service or a feature to the market. The process aims to help teams clearly define goals, validate assumptions and decide on a product roadmap before starting development. It seeks to address strategic issues using interdisciplinary expertise, rapid prototyping, and usability testing. This design process is similar to Sprints in an Agile development cycle. == How it started == There are multiple origins of the concept of mixing Agile and Design Thinking. The most popular was developed by a multi-disciplinary team working out of Google Ventures. The initial iterations of the approach were created by Jake Knapp, and popularised by a series of blog articles outlining the approach and reporting on its successes within Google. As it gained industry recognition, the approach was further refined and added to by other Google staff including Braden Kowitz, Michael Margolis, John Zeratsky and Daniel Burka. It was later described in a book from Google Ventures, "Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days". == Possible uses == Claimed uses of the approach include Launching a new product or a service. Extending an existing experience to a new platform. Existing MVP needing revised User experience design and/or UI Design. Adding new features and functionality to a digital product. Opportunities for improvement of a product (e.g. a high rate of cart abandonment) Opportunities for improvement of a service. Supporting organizations in their transformation towards new technologies (e.g., AI). == Phases == The creators of the design sprint approach recommend preparing by picking the proper team, environment, materials and tools, working with six key 'ingredients'. Understand: Discover the business opportunity, the audience, the competition, the value proposition, and define metrics of success. Diverge: Explore, develop and iterate creative ways of solving the problem, regardless of feasibility. Converge: Identify ideas that fit the next product cycle and explore them in further detail through storyboarding. Prototype: Design and prepare prototype(s) that can be tested with people. Test: Conduct 1:1 usability testing with 5-6 people from the product's primary target audience. Ask good questions. == Deliverables == The main deliverables after the design sprint: Answers to a set of vital questions Findings from the sprint (notes, user journey maps, storyboards, information architecture diagrams, etc.) Prototypes Report from the usability testing with the findings (backed by testing videos) A plan for next steps Validate or invalidate hypotheses before committing resources to build the solution == Team == The suggested ideal number of people involved in the sprint is 4-7, and they include the facilitator, designer, a decision maker (often a CEO if the company is a startup), product manager, engineer and someone from the company's core business departments (Marketing, Content, Operations, etc.). == Variants == The concept sprint is a fast five-day process for cross-functional teams to brainstorm, define, and model new approaches to a business issue. Another common variant is the Service Design Sprint, an approach to Design Sprints created in 2014 that uses Service Design tools and mechanics to tackle service innovation. == References ==
Wikipedia/Design_sprint
Exhibit design (or exhibition design) is the process of developing an exhibit—from a concept through to a physical, three-dimensional exhibition. It is a continually evolving field, drawing on innovative, creative, and practical solutions to the challenge of developing communicative environments that 'tell a story' in a three-dimensional space. Many people collaborate to design exhibits, such as directors, curators, exhibition designers, and technicians. These roles are important because how an exhibit is designed affects how people learn. Learning is a byproduct of attention, so designers must first capture visitors' attention. A good exhibition designer will consider the whole environment in which a story is being interpreted rather than just concentrating on individual exhibits. Some other things designers must consider are the space allotted for the display, precautions to protect what is being displayed, and what they are displaying. For example, a painting, a mask, and a diamond will not be displayed in the same way. Taking an artifact's culture and history into account is also important, because each new context in which the artifact is displayed reinterprets it. == Description == Exhibit design is a collaborative process, integrating the disciplines of architecture, landscape architecture, graphic design, audiovisual engineering, digital media, lighting, interior design, and content development to develop an audience experience that interprets information, involves and engages a user and influences their understanding of a subject. There are many different types of exhibit, ranging from museum exhibitions to retail and trade show spaces, to themed attractions, zoos, and visitor centers. All types of exhibits aim to communicate a message through engaging their audiences in meaningful and compelling interactions. Exhibit designers (or exhibition designers) use a wide range of technologies and techniques to develop experiences that will resonate with diverse audiences – enabling these targeted audiences to access the messages, stories and objects of an exhibit. The exhibit design process builds on a conceptual or interpretive plan for an exhibit, determining the most effective, engaging and appropriate methods of communicating a message or telling a story. The process will often mirror the architectural process or schedule, moving from conceptual plan, through schematic design, design development, contract documents, fabrication, and installation. The first phases establish a thematic direction and develop creative and appropriate design solutions to achieve the interpretive and communication goals of the exhibit. The latter phases employ technical expertise in translating the visual language of the designs into detailed documents that provide all the specifications required to fabricate and install an exhibit. Exhibition design in different parts of the world is influenced by the local culture as well as the availability of materials. Exhibition design in Europe is considered a meeting place for relationship building, while in North America energy is spent on creating a sense of place and building community. One of the major shifts in museum and exhibit design in the last decade has been a focus on visitor experience. By identifying the five types of museum visitors and their needs and expectations, museums can design their exhibits to give a positive visitor experience.
Participatory activities are also becoming more popular; Nina Simon has done research describing and identifying themes and trends in museums that attract visitors and educate them in fun and engaging ways. How an exhibit is designed can greatly influence visitors' comprehension of artifacts. Colors, lighting, graphics, guidance systems and materials can dramatize the display or help create a central theme that supports the narrative being presented. The use of new interactive technology can increase the comprehension of facts. New full-body or multi-user interactive technology can help engage visitors in fun activities that support exploratory learning. Utilizing this technology can make museums more fun and less intimidating. It also encourages learning new ideas while working with others in a social setting. The use of technology in a museum setting extends beyond the four walls of the museum itself. Adding exhibits to a digital platform allows others who cannot visit the museum in person to still learn from the display. This proved particularly valuable during the COVID-19 lockdowns, when visitors could not go to museums. Another way this practice can be used is to create digital displays for artifacts sitting in storage due to a lack of physical space in the display area. == Designers == There are many steps leading up to getting a museum job. First, prospective designers must decide what their strengths are and what kind of job they want. Being a designer requires the same strengths as being a researcher. This is also the time to begin researching the requirements for the job. The next step is to network among friends and acquaintances and, if possible, set up some exploratory interviews. This is a good way to start becoming familiar with potential colleagues and to hear about first-hand experience. The final step is to take stock: volunteer, go back to school for a higher degree or a new certificate, or take a smaller step in one's career towards the desired job. Doing all of these things is only preparation for applying for the job; once they are complete, interviewers will still be looking for other things too. There are many requirements to becoming an exhibit designer. Some positions require a certain level of education, such as a postgraduate qualification or museum diploma. Getting the degree does not guarantee the job. Some positions also require certain skills such as collections management, administration, or research and publication experience. Even once all of these are met and the position is acquired, designers still may not always get to design whatever they want. Designers are also constrained at times in what they can and cannot do. This is because museums are conservative by nature, and therefore the professionals who help design exhibits are limited by the core mission as well as the audience's expectations. As briefly discussed earlier, many people help the exhibit designers or oversee the process within the museum. Throughout the planning and design process, exhibit designers work closely with graphic designers, content specialists, architects, fabricators, technical specialists, audiovisual experts, and, in the case of museums and other mission-based institutions, stakeholders like community members, government agencies, and other partner organizations. There are certain elements designers must also take into account, such as safety for the artifact. 
This can come in many different forms, such as using markers on the floor to have visitors keep a certain distance, using glass cases to enclose artifacts, and relying on museum workers walking around and watching the artifacts. Taking these into account is when collaborating with other departments becomes very important. The job of exhibit designer was in decline between 1990 and 2005, based on a study showing a drop of six percent in jobs. The questions that surround the decline include: are the jobs still declining, are the jobs being outsourced, are other jobs taking over the design responsibility, and is the decline still dramatic? These questions have yet to be fully answered. == References ==
Wikipedia/Exhibit_design
Design science refers to a scientific, i.e. rational and systematic, approach to designing. An early concept of design science was introduced in 1957 by R. Buckminster Fuller, who defined it as a systematic form of designing which he applied especially in innovative engineering design. The concept has been more broadly defined by the Design Science journal as “quantitative and qualitative research in the creation of artifacts and systems, and their embedding in our physical, virtual, psychological, economic, and social environment”. == Design-science relationship == There has been recurrent concern to differentiate design from science. Nigel Cross differentiated between scientific design, design science and a science of design. A science of design (the scientific study of design) does not require or assume that the acts of designing are themselves scientific, and an increasing number of research programs take this view. To some extent the two uses of the term design science (systematic designing and the study of designing) have co-mingled to the point where there can be some confusion, and design science may sometimes refer either to a science of design or to design as a science. == A science of design == Simon's The Sciences of the Artificial, first published in 1969, built on previous developments and motivated the further development of systematic and formalized design methodologies relevant to many design disciplines, for example architecture, engineering, urban planning, computer science, and management studies. Simon's ideas about the science of design also encouraged the development of design research and the scientific study of designing. == Design as a science == The design-science relationship continues to be debated and there continue to be many efforts to reframe or reform design as science. For example, the axiomatic theory of design by Suh presents a domain-independent theory that can explain or prescribe the design process. The Function-Behavior-Structure (FBS) ontology by Gero, presenting a domain-independent ontology of design and designing, is another example. There have also been many domain-specific developments of design science, for example in architectural design, product design and information systems design. === Design science in information systems === There has been a particular emphasis on design as a science within information systems. Hevner and Chatterjee provide a reference on design science research (DSR) in Information Systems, including a selection of papers from the DESRIST conferences, a look at key principles of DSR, and the integration of action research with design research. Vaishnavi and Kuechler offer a resource on design science research in information systems that outlines the origins and philosophical grounding for design science research, explains the design science methodology, and offers a bibliography of articles that discuss design science methods or offer exemplars of design science. In 2010, 122 German professors promoted design science in information systems research by signing a memorandum subsequently submitted in English to the European Journal of Information Systems. 
In the same issue the then Editor-in-Chief of the European Journal of Information Systems (EJIS) Richard Baskerville, along with the then Editor-in-Chief of Information Systems Research (ISR) Vallabh Sambamurthy, the then Editor-in-Chief of Management Information Systems Quarterly (MISQ) Detmar Straub, and the former Editor-in-Chief of the Journal of the Association for Information Systems (JAIS) Kalle Lyytinen together authored a rebuttal to some of the claims made in the memorandum regarding bias against DSR. Hevner et al. provide a set of seven guidelines which help information systems researchers conduct, evaluate and present design-science research. The seven guidelines address design as an artifact, problem relevance, design evaluation, research contributions, research rigor, design as a search process, and research communication. Later extensions of the design science research approach detail how design and research problems can be rationally decomposed by means of nested problem solving. It is also explained how the regulative cycle (problem investigation, solution design, design validation, solution implementation, and implementation evaluation) fits in the framework. Peffers et al. developed a model for producing and presenting information systems research that they called the design science research process. The Peffers et al. model has been used extensively, and Adams provides an example of the process model being applied to create a digital forensic process model. == See also == Citizen Design Science Descriptive science § Descriptive versus design sciences Design methods Design research Design science research Design thinking == References ==
Wikipedia/Design_science
Defensive design is the practice of planning for contingencies in the design stage of a project or undertaking. Essentially, it is the practice of anticipating all possible ways that an end-user could misuse a device, and designing the device so as to make such misuse impossible, or to minimize the negative consequences. For example, if it is important that a plug is inserted into a socket in a particular orientation, the socket and plug should be designed so that it is physically impossible to insert the plug incorrectly. Power sockets are often keyed in such a manner, to prevent the transposition of live and neutral. They are also recessed in the wall in a way that makes it impossible to touch connectors once they become live. Defensive design in software engineering is called defensive programming. Murphy's law is a well-known statement of the need for defensive design, and also of its ultimate limitations. == Applications == === Computer software === Implementation decisions and software design approaches can make software safer and catch user errors. Code that implements this is termed a sanity check. Data entry screens can "sanitize" inputs, e.g. numeric fields contain only digits, signs and a single decimal point if appropriate. Inputs can be checked for legitimate values, e.g. for counts of workplace injuries (or number of people injured) the number can be 0 but can't be negative and must be a whole number; for number of hours worked in one week the amount for any specified employee can be 0, can be fractional, but can't be negative and can't be greater than 168, nor more than 24 times the number of days they were in attendance. A word processor requested to load a saved document should scan it to ensure it is in good form and not corrupted. If it is corrupted, the program should say so, then either accept the partial document that was valid, or refuse the entire document. In either case it should remain running and not quit. === Electronics === Many electrical connectors apply this principle by being asymmetric. Alternatively, USB-C plugs are mechanically but not electrically symmetric, but achieve an illusion of symmetry resulting from how devices respond to the cable, and hence can be plugged in either of two ways. Accompanying circuitry makes the plugs and cables behave as though they are symmetric. == See also == Defensible space theory Fail-safe Idiot-proof Inherent safety Poka-yoke Usability testing == References ==
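The input checks described in the Computer software section above can be illustrated with a small code sketch. The following C++ fragment is a minimal, hypothetical example (the function name validate_hours_worked and its parameters are illustrative, not taken from any particular library; the rules follow the weekly-hours example quoted above):

```cpp
#include <iostream>
#include <optional>
#include <string>

// Defensive check for "hours worked in one week", following the rules
// described above: the value may be 0 or fractional, cannot be negative,
// cannot exceed 168, and cannot exceed 24 times the days attended.
// Returns an error message if the input is rejected, or nothing if it passes.
std::optional<std::string> validate_hours_worked(double hours, int days_attended) {
    if (days_attended < 0 || days_attended > 7)
        return "days attended must be between 0 and 7";
    if (hours < 0.0)
        return "hours worked cannot be negative";
    if (hours > 168.0)
        return "hours worked cannot exceed 168 in one week";
    if (hours > 24.0 * days_attended)
        return "hours worked cannot exceed 24 hours per day of attendance";
    return std::nullopt;  // sanity check passed
}

int main() {
    // A deliberately bad input: 190 hours claimed over 5 days of attendance.
    if (auto error = validate_hours_worked(190.0, 5)) {
        std::cerr << "Rejected input: " << *error << '\n';
    }
}
```

A real data-entry screen would typically run checks of this kind before accepting a value, and report the problem to the user rather than failing silently.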
Wikipedia/Defensive_design
Modern C++ Design: Generic Programming and Design Patterns Applied is a book written by Andrei Alexandrescu, published in 2001 by Addison-Wesley. It has been regarded as "one of the most important C++ books" by Scott Meyers. The book makes use of and explores a C++ programming technique called template metaprogramming. While Alexandrescu didn't invent the technique, he has popularized it among programmers. His book contains solutions to practical problems which C++ programmers may face. Several phrases from the book are now used within the C++ community as generic terms: modern C++ (as opposed to C/C++ style), policy-based design and typelist. All of the code described in the book is freely available in his library Loki. The book has been republished and translated into several languages since 2001. == Policy-based design == Policy-based design, also known as policy-based class design or policy-based programming, is the term used in Modern C++ Design for a design approach based on an idiom for C++ known as policies. It has been described as a compile-time variant of the strategy pattern, and has connections with C++ template metaprogramming. It was first popularized in C++ by Andrei Alexandrescu with Modern C++ Design and with his column Generic<Programming> in the C/C++ Users Journal, and it is currently closely associated with C++ and D as it requires a compiler with highly robust support for templates, which was not common before about 2003. Previous examples of this design approach, based on parameterized generic code, include parametric modules (functors) of the ML languages, and C++ allocators for memory management policy. The central idiom in policy-based design is a class template (called the host class), taking several type parameters as input, which are instantiated with types selected by the user (called policy classes), each implementing a particular implicit interface (called a policy), and encapsulating some orthogonal (or mostly orthogonal) aspect of the behavior of the instantiated host class. By supplying a host class combined with a set of different, canned implementations for each policy, a library or module can support an exponential number of different behavior combinations, resolved at compile time, and selected by mixing and matching the different supplied policy classes in the instantiation of the host class template. Additionally, by writing a custom implementation of a given policy, a policy-based library can be used in situations requiring behaviors unforeseen by the library implementor. Even in cases where no more than one implementation of each policy will ever be used, decomposing a class into policies can aid the design process, by increasing modularity and highlighting exactly where orthogonal design decisions have been made. While assembling software components out of interchangeable modules is far from a new concept, policy-based design represents an innovation in the way it applies that concept at the (relatively low) level of defining the behavior of an individual class. Policy classes are similar to callbacks, but differ in that, rather than consisting of a single function, a policy class will typically contain several related functions (methods), often combined with state variables or other facilities such as nested types. 
A policy-based host class can be thought of as a type of metafunction, taking a set of behaviors represented by types as input, and returning as output a type representing the result of combining those behaviors into a functioning whole. (Unlike MPL metafunctions, however, the output is usually represented by the instantiated host class itself, rather than a nested output type.) A key feature of the policy idiom is that, usually (though it is not strictly necessary), the host class will derive from (make itself a child class of) each of its policy classes using (public) multiple inheritance. (Alternatives are for the host class to merely contain a member variable of each policy class type, or else to inherit the policy classes privately; however, inheriting the policy classes publicly has the major advantage that a policy class can add new methods, inherited by the instantiated host class and accessible to its users, which the host class itself need not even know about.) A notable feature of this aspect of the policy idiom is that, relative to object-oriented programming, policies invert the relationship between base class and derived class – whereas in OOP interfaces are traditionally represented by (abstract) base classes and implementations of interfaces by derived classes, in policy-based design the derived (host) class represents the interfaces and the base (policy) classes implement them. In the case of policies, the public inheritance does not represent an is-a relationship between the host and the policy classes. While this would traditionally be considered evidence of a design defect in OOP contexts, this doesn't apply in the context of the policy idiom. A disadvantage of policies in their current incarnation is that the policy interface doesn't have a direct, explicit representation in code, but rather is defined implicitly, via duck typing, and must be documented separately and manually, in comments. The main idea is to use commonality-variability analysis to divide the type into the fixed implementation and interface, the policy-based class, and the different policies. The trick is to know what goes into the main class, and what policies one should create. The article mentioned above gives the following answer: wherever we would need to make a possibly limiting design decision, we should postpone that decision and delegate it to an appropriately named policy. Policy classes can contain implementation, type definitions and so forth. Basically, the designer of the main template class will define what the policy classes should provide and what customization points they need to implement. It may be a delicate task to create a good set of policies, just the right number (e.g., the minimum necessary). The different customization points which belong together should go into one policy argument, such as a storage policy, a validation policy and so forth. Designers are advised to give their policies names that represent concepts, not names that represent operations or minor implementation details. Policy-based design may incorporate other useful techniques. For example, the template method pattern can be reinterpreted for compile time, so that a main class has a skeleton algorithm, which – at customization points – calls the appropriate functions of some of the policies. This will be achieved dynamically by concepts in future versions of C++. 
=== Simple example === Described here is a simple (contrived) example of a C++ hello world program, where the text to be printed and the method of printing it are decomposed using policies (a reconstructed sketch of the code is given at the end of this article). In this example, HelloWorld is a host class that takes two policies, one specifying how a message should be shown and the other supplying the actual message to be printed. Note that the generic implementation is in Run, and therefore the code cannot be compiled unless both policies (providing Print and Message) are supplied. Programmers can easily write additional output policies by adding new classes with a Print member function and passing those as new OutputPolicy arguments. == Loki library == Loki is a C++ software library written by Andrei Alexandrescu as part of his book Modern C++ Design. The library makes extensive use of C++ template metaprogramming and implements several commonly used tools: typelist, functor, singleton, smart pointer, object factory, visitor and multimethods. Originally the library was only compatible with two of the most standard-conforming C++ compilers (CodeWarrior and Comeau C/C++); later efforts have made it usable with a wide array of compilers (including older Visual C++ 6.0, Borland C++ Builder 6.0, Clang and GCC). Compiler vendors used Loki as a compatibility benchmark, further increasing the number of compliant compilers. Maintenance and further development of Loki has been continued through an open-source community led by Peter Kümmel and Richard Sposato as a SourceForge project. Ongoing contributions by many people have improved the overall robustness and functionality of the library. Loki is no longer tied to the book, as it has gained many new components (e.g. StrongPtr, Printf, and Scopeguard). Loki inspired similar tools and functionality now also present in the Boost library collection. == See also == Mixin == References == == External links == Alexandrescu's website (with book errata) Smart Pointers (sample chapter from the book) Loki on SourceForge Original source code from the book publisher
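The original listing for the hello-world example described in the Simple example section above is not reproduced in this text. The following is a reconstructed sketch along the lines commonly given for it; the class and policy names (HelloWorld, OutputPolicyWriteToCout, LanguagePolicyEnglish, LanguagePolicyGerman) are illustrative rather than authoritative:

```cpp
#include <iostream>
#include <string>

// Host class: the generic implementation lives in Run(); the policies
// supply how to print (OutputPolicy) and what to print (LanguagePolicy).
template <typename OutputPolicy, typename LanguagePolicy>
class HelloWorld : private OutputPolicy, private LanguagePolicy {
public:
    void Run() const {
        // Two policy methods, resolved at compile time.
        this->Print(this->Message());
    }
};

// An output policy: any class providing a suitable Print() member works.
struct OutputPolicyWriteToCout {
protected:
    template <typename MessageType>
    void Print(const MessageType& message) const {
        std::cout << message << '\n';
    }
};

// Two interchangeable message policies.
struct LanguagePolicyEnglish {
protected:
    std::string Message() const { return "Hello, World!"; }
};

struct LanguagePolicyGerman {
protected:
    std::string Message() const { return "Hallo Welt!"; }
};

int main() {
    HelloWorld<OutputPolicyWriteToCout, LanguagePolicyEnglish> hello_english;
    hello_english.Run();   // prints "Hello, World!"

    HelloWorld<OutputPolicyWriteToCout, LanguagePolicyGerman> hello_german;
    hello_german.Run();    // prints "Hallo Welt!"
}
```

As noted above, further output policies can be added simply by writing new classes with a suitable Print member function and passing them as the first template argument.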
Wikipedia/Policy-based_design
Empathic design is a user-centered design approach that pays attention to the user's feelings toward a product. The empathic design process is sometimes mistakenly referred to as empathetic design. == Characteristics == The foundation of empathic design is observation and the goal to identify latent customer needs in order to create products that the customers don't even know they desire, or, in some cases, solutions that customers have difficulty envisioning due to lack of familiarity with the possibilities offered by new technologies or because they are locked in a specific mindset. Empathic design relies on observation of consumers as opposed to traditional market research, which relies on consumer inquiry, with the intention of avoiding possible biases in surveys and questions and minimizing the chance that consumers will provide false information. Observations are carried out by a small team of specialists, such as an engineer, a human-factors expert, and a designer. Each specialist then documents their observations and the session is videoed to capture subtle interactions such as body language and facial expressions. Learning users' unarticulated needs through a process of keen observation and interpretation can lead to breakthrough designs. Deszca et al. argue that market forces and competitive pressures in a fast-paced world are augmenting the importance of product innovation as a source of competitive advantage. They argue that in empathic design techniques, users are almost as involved in product design as designers and engineers. Therefore, such techniques can achieve new designs in potentially shorter product development cycles. To achieve this, they caution that the observation group should include others besides designers and engineers, such as trained anthropologists and/or ethnographers. Von Hippel's research supports the theory that customers or users themselves are the source of much innovation. Empathic design using field observation can reveal opportunities to commercialize innovations existing users have already developed to improve products. == Process == Leonard and Rayport identify the five key steps in empathic design as: Observation Capturing Data Reflection and Analysis Brainstorming for solutions Developing prototypes of possible solutions Prototypes, simulation and role-playing are other forms of learning processes, typically used to gather customer feedback on designs that have been developed based on empathic design. One of the practitioners of empathic design is design company IDEO. IDEO believes that "seeing and hearing things with your own eyes and ears is a critical first step in creating a breakthrough product". IDEO refers to this as "human factors" or "human inspiration" and states that "innovation starts with an eye"; once designers start observing carefully, all kinds of insights and opportunities can pop up. IDEO includes empathic design in its projects and lists the key steps of its method as: Understand the market, client, technology and perceived constraints. Observe people in real-life situations to find out what motivates them, what confuses them, what they like, hate, where they have latent needs not addressed by current products and services. Visualize concepts that are new to the world. Evaluate and refine the prototype. Implement the new concept for commercialization. The empathic model is a technique used to simulate age-related sensory losses to give designers personal experience with how their product performs for users. 
An example is how designers of a retirement community used empathy tools, such as glasses which reduced their vision and gloves which limited their grip and strength. Suri et al. reported another method of empathic design, involving designers shadowing vision-impaired users. The designer was then required to utilize non-visual cues to learn about a product by working in a dark environment. Empathic design techniques for new products were first adopted by the automotive and electronic product manufacturing industries. However, the techniques have been successfully used by several other organizations for designing innovative products. While the five steps mentioned above are the foundation of the empathic design process, several other techniques are used in combination with them. A study of the UK-based textile fiber manufacturer Tencel Limited by Lofthouse et al. shows that using the Kano model in combination with the first step of user observation led to new insights into how customers really perceived Tencel's fiber, and enabled the product development team to 'walk in the shoes' of the end user. The Kano model offered some insight into which product attributes were perceived to be important to customers. The questionnaires used to seek information from users, an important part of the Kano model, were used in multiple focus groups consisting of target customers and multidisciplinary design teams. These focus groups carried the process into the next three steps of capturing data, reflection and analysis, and brainstorming. In doing so they developed a so-called "journey diagram" to record activities that these groups identified as necessary to move the project towards its final target. Jääsko and Mattelmäki have studied user-centered design techniques such as empathic design by means of case studies, in which they found extensive use of empathic design techniques by the Datex-Ohmeda division of Instrumentarium Corporation when developing innovative patient monitoring instruments for hospitals. Datex-Ohmeda used a new technique called "probing" in combination with observation for gathering instrumental, visual and empathic data from "sensitive settings" – that is, situations and places where the design team had no access or the access was only temporary. The probing process consisted of diaries, cameras, and illustrated cards with open questions and tasks for documenting routines, actions, and needs in different use situations. Brandt and Grunnet have studied the use of drama and props as tools in the empathic design process to collaboratively generate and explore innovative design ideas. They argue that the use of drama and props may aid in engaging users more directly in the design process, especially during the prototype simulation step. == Examples in practice == The following examples demonstrate cases where empathic design was applied to the new product development process successfully. Design Continuum of Milan, Italy, designed a series of baby bottles by using empathic design techniques where a team of designers collected data on user needs by observing kids in kindergartens and immersing themselves in the homes of some first-time mothers. Instrumentarium Corporation's Datex-Ohmeda division used empathic design (including the use of user diaries, cameras, and short-term observation in critical situations) to assist in the improvement of products provided to nurses in the health care industry. 
Polar Electro Oy, a manufacturer of heart rate monitors, used empathic design principles to observe and record user interactions with their product. The resulting data was fed back into the design organization to influence future designs and product development. Tencel LTD, a textile manufacturer in the United Kingdom, used empathic design techniques to solicit feedback on their current product line, understand positive and negative traits, and determine areas for immediate improvement. IDEO, Inc., a broad-based design services company, is well known for its employment of empathic design and brainstorming as its principal design methodology. Most products designed by IDEO incorporate some features based on the results of an empathic design experience. == See also == Contextual design Design thinking Ethnography Kano model Whole product == References ==
Wikipedia/Empathic_design
Hardware interface design (HID) is a cross-disciplinary design field that shapes the physical connection between people and technology in order to create new hardware interfaces that transform purely digital processes into analog methods of interaction. It employs a combination of filmmaking tools, software prototyping, and electronics breadboarding. Through this parallel visualization and development, hardware interface designers are able to shape a cohesive vision alongside business and engineering that more deeply embeds design throughout every stage of the product. The development of hardware interfaces as a field continues to mature as more things connect to the internet. Hardware interface designers draw upon industrial design, interaction design and electrical engineering. Interface elements include touchscreens, knobs, buttons, sliders and switches as well as input sensors such as microphones, cameras, and accelerometers. == History == In the last decade a trend has evolved in the area of human–machine communication, taking the user experience from haptic, tactile and acoustic interfaces to a more digitally graphical approach. Important tasks that had previously been assigned to industrial designers were instead moved into fields like UI and UX design and usability engineering. The creation of good user interaction became more a question of software than hardware. Things like having to push two buttons on the tape recorder to have them pop back out again, and the cradle of some older telephones, remain mechanical haptic relics that have long since found digital counterparts and are waiting to disappear. However, this heavy reliance on GUIs has placed a growing burden on human cognitive capabilities. Visual interfaces are approaching the limit of how much further they can be improved. Even though the resolution of new screens is constantly rising, there is a change of direction away from descriptive, intuitive design towards natural interface strategies based on learnable habits (Google’s Material Design, Apple’s iOS flat design, Microsoft’s Metro Design Language). Several of the more important commands are not shown directly but can be accessed through dragging, holding and swiping across the screen; gestures which have to be learned once but feel very natural afterwards and are easy to remember. In the area of controlling these systems, there is a need to move away from GUIs and instead find other means of interaction which use the full capabilities of all our senses. Hardware interface design addresses this by taking physical forms and objects and connecting them with digital information, so that the user controls virtual data flow through grasping, moving and manipulating the physical forms used (a small code sketch of this kind of mapping is given at the end of this article). If classic industrial hardware interface design is seen as an “analog” method, it finds its digital counterpart in the HID approach. Instead of translating analog methods of control into a virtual form via a GUI, one can see the TUI as an approach that does the exact opposite: transmitting purely digital processes into analog methods of interaction. == Examples == Example hardware interfaces include a computer mouse, TV remote control, kitchen timer, control panel for a nuclear power plant and an aircraft cockpit. == See also == == References ==
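As a purely illustrative sketch of the kind of mapping described in the History section above (a physical form controlling digital information), the following C++ fragment maps the position of a physical knob onto a digital volume level. The read_adc function is a stub standing in for real hardware access and is assumed here for illustration only:

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>

// Stub standing in for real hardware access: a real implementation would
// use the vendor's driver to read the ADC channel a physical knob is wired
// to. Here it returns a fixed 12-bit value (0..4095) so the sketch runs.
std::uint16_t read_adc(int /*channel*/) { return 3072; }

// Map the knob position onto a purely digital quantity (volume 0..100),
// so that grasping and turning a physical object manipulates virtual data.
int knob_to_volume(int channel) {
    const int raw = read_adc(channel);
    return std::clamp(raw * 100 / 4095, 0, 100);
}

int main() {
    std::cout << "Volume: " << knob_to_volume(0) << "%\n";  // prints "Volume: 75%"
}
```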
Wikipedia/Hardware_interface_design
Metadesign (or meta-design) is an emerging conceptual framework aimed at defining and creating social, economic and technical infrastructures in which new forms of collaborative design can take place. It consists of a series of practical design-related tools for achieving this. As a methodology, its aim is to nurture the emergence of the previously unthinkable as possibilities or prospects through the collaboration of designers within interdisciplinary 'metadesign' teams. Inspired by the way living systems work, this new field aims to help improve the way we feed, clothe, shelter, assemble, communicate and live together. == History == Metadesign was initially put forward as an industrial design approach to complexity theory and information systems by Dutch designer Andries Van Onck in 1963, while at the Ulm School of Design (later at Politecnico di Milano and the Rome and Florence ISIA). Since then, several different design, creative and research approaches have used the name "Metadesign", ranging from Humberto Maturana and Francisco Varela's biological approach, to Gerhard Fischer's and Elisa Giaccardi's techno-social approach, and Paul Virilio's techno-political approach. Later on, a very active group was present at Politecnico di Milano, and several different universities and graduate programs began applying Metadesign in design teaching around the world, generally based on Van Onck's approach as further developed at Politecnico di Milano. Nevertheless, there is a very active but widely dispersed group that bases its activities on Maturana and Varela's approach. More recently, some efforts have been made to systematize Metadesign as a structured creative process, such as (1) Fischer's and Giaccardi's and (2) Caio Vassão's academic works, among several others, based on a much wider reference frame, ranging from post-structuralist philosophy, Neil Postman's media ecology, Christopher Alexander's pattern languages and deep ecology. This variety of approaches is justified by the myriad interpretations that can be derived from the etymological structure of the term. == Re-designing design == The Greek word 'meta' originally meant 'beyond' or 'after' and is now sometimes used to imply a comprehensive, insightful self-awareness. Employed as a prefix, it explicitly denotes self-referentiality. Metadesign, therefore, alludes to a design practice that (re)designs itself (see Maturana and Varela's term autopoiesis). The idea of Metadesign acknowledges that future uses and problems cannot be completely anticipated at design time. Aristotle's influential theory of design defined it by saying that the 'cause' of design was its final state. This teleological perspective resembles the orthodox idea of an economic payback at the point of sale, rather than at the successive stages throughout the whole design cycle at which the product could be seen to achieve high levels of perceived value. Some supporters of metadesign hope that it will extend the traditional notion of system design beyond the original development of a system by allowing users to become co-designers. == The importance of languaging == By harnessing creative teamwork within a suitable co-design framework, some metadesigners have sought to catalyse changes at a behavioural level. However, as Albert Einstein said, "We can't solve problems by using the same kind of thinking we used when we created them". This points to a need for appropriate innovation at all levels, including the metaphorical language that serves to sustain a given paradigm. 
In practical terms this adds considerable complexity to the task of managing actions and outcomes. What may be so neatly described as 'new knowledge', in practical terms, exists as an interpersonal and somatic web of tacit knowledge that needs to be interpreted and applied by many collaborators. This tends to reduce the semantic certainty of roles, actions and descriptors within a given team, making it necessary to rename particular shared experiences that seem inappropriately defined. In other instances it may be necessary to invent new words to describe perceived gaps in what can be discussed within a prevailing vernacular. Humberto Maturana's work on distributed language and the field of biosemiotics is germane to this task. Some researchers have used bisociation in order to create an auspicious synergy of benign synergies. In aspiring to this outcome, metadesign teams will cultivate auspicious 'diversities-of-diversities'. It suggests that metadesign would offer a manifold ethical space. In this respect, related approaches include what Arthur Koestler (1967) called holarchy, or what John Dewey and John Chris Jones have called 'creative democracy'. == Metadesign conceptual tools == Regarding a wide range of applications and contexts, Vassão has argued that Metadesign can be understood as a set of four "conceptual tools", utilizing Gilles Deleuze's understanding of the term "tool": Levels of abstraction (the ability to understand the structure and limits of abstractions, language and instrumental thinking); Diagrams and topology (the use of diagrammatic thinking and design, sustained by topological understanding); Procedural design (the creation of realities through the use of procedures, such as in game and role playing, as well as in procedural design, art and architecture); Emergence (the absence of absolute control, and the ability to take advantage of unintended and unforeseen results). Vassão has argued that, in all different approaches to metadesign, the presence of these conceptual tools can be verified. == See also == == References == Wood, J. ed., 2022. Metadesigning designing in the Anthropocene. Taylor & Francis. == External links == Attainable Utopias - Definition of Metadesign Metadesigners Open Network
Wikipedia/Metadesign
An integrated circuit (IC), also known as a microchip or simply chip, is a set of electronic circuits, consisting of various electronic components (such as transistors, resistors, and capacitors) and their interconnections. These components are etched onto a small, flat piece ("chip") of semiconductor material, usually silicon. Integrated circuits are used in a wide range of electronic devices, including computers, smartphones, and televisions, to perform various functions such as processing and storing information. They have greatly impacted the field of electronics by enabling device miniaturization and enhanced functionality. Integrated circuits are orders of magnitude smaller, faster, and less expensive than those constructed of discrete components, allowing a large transistor count. The IC's mass production capability, reliability, and building-block approach to integrated circuit design have ensured the rapid adoption of standardized ICs in place of designs using discrete transistors. ICs are now used in virtually all electronic equipment and have revolutionized the world of electronics. Computers, mobile phones, and other home appliances are now essential parts of the structure of modern societies, made possible by the small size and low cost of ICs such as modern computer processors and microcontrollers. Very-large-scale integration was made practical by technological advancements in semiconductor device fabrication. Since their origins in the 1960s, the size, speed, and capacity of chips have progressed enormously, driven by technical advances that fit more and more transistors on chips of the same size – a modern chip may have many billions of transistors in an area the size of a human fingernail. These advances, roughly following Moore's law, make the computer chips of today possess millions of times the capacity and thousands of times the speed of the computer chips of the early 1970s. ICs have three main advantages over circuits constructed out of discrete components: size, cost and performance. The size and cost is low because the chips, with all their components, are printed as a unit by photolithography rather than being constructed one transistor at a time. Furthermore, packaged ICs use much less material than discrete circuits. Performance is high because the IC's components switch quickly and consume comparatively little power because of their small size and proximity. The main disadvantage of ICs is the high initial cost of designing them and the enormous capital cost of factory construction. This high initial cost means ICs are only commercially viable when high production volumes are anticipated. == Terminology == An integrated circuit is defined as: A circuit in which all or some of the circuit elements are inseparably associated and electrically interconnected so that it is considered to be indivisible for the purposes of construction and commerce. In strict usage, integrated circuit refers to the single-piece circuit construction originally known as a monolithic integrated circuit, which comprises a single piece of silicon. In general usage, circuits not meeting this strict definition are sometimes referred to as ICs, which are constructed using many different technologies, e.g. 3D IC, 2.5D IC, MCM, thin-film transistors, thick-film technologies, or hybrid integrated circuits. The choice of terminology frequently appears in discussions related to whether Moore's Law is obsolete. 
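As a rough consistency check on the growth figures quoted above (taking, purely for illustration, 1971 as the "early 1970s" baseline, 2021 as "today", and the two-year doubling period discussed later in this article), the claimed increase in capacity works out as 2^((2021 − 1971)/2) = 2^25 ≈ 34 million, i.e. tens of millions of times more transistors per chip, consistent with the statement that today's chips have "millions of times the capacity" of early-1970s chips. Similarly, the per-area figures given later (around 600 mm2 at up to 25 million transistors per mm2) correspond to roughly 600 × 25,000,000 = 1.5 × 10^10, i.e. many billions of transistors on a single chip.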
== History == An early attempt at combining several components in one device (like modern ICs) was the Loewe 3NF vacuum tube first made in 1926. Unlike ICs, it was designed with the purpose of tax avoidance: in Germany, radio receivers were taxed depending on how many tube holders they had. It allowed radio receivers to have a single tube holder. One million were manufactured, and were "a first step in integration of radioelectronic devices". The device contained an amplifier, composed of three triodes, two capacitors and four resistors in a six-pin device. Radios with the Loewe 3NF were less expensive than other radios, showing one of the advantages of integration over using discrete components that would be seen decades later with ICs. Early concepts of an integrated circuit go back to 1949, when German engineer Werner Jacobi (Siemens AG) filed a patent for an integrated-circuit-like semiconductor amplifying device showing five transistors on a common substrate in a three-stage amplifier arrangement. Jacobi disclosed small and cheap hearing aids as typical industrial applications of his patent. An immediate commercial use of his patent has not been reported. Another early proponent of the concept was Geoffrey Dummer (1909–2002), a radar scientist working for the Royal Radar Establishment of the British Ministry of Defence. Dummer presented the idea to the public at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. He gave many symposia publicly to propagate his ideas and unsuccessfully attempted to build such a circuit in 1956. Between 1953 and 1957, Sidney Darlington and Yasuo Tarui (Electrotechnical Laboratory) proposed similar chip designs where several transistors could share a common active area, but there was no electrical isolation to separate them from each other. The monolithic integrated circuit chip was enabled by the inventions of the planar process by Jean Hoerni and p–n junction isolation by Kurt Lehovec. Hoerni's invention was built on Carl Frosch and Lincoln Derick's work on surface protection and passivation by silicon dioxide masking and predeposition, as well as the work of Fuller, Ditzenberger, and others on the diffusion of impurities into silicon. === The first integrated circuits === A precursor idea to the IC was to create small ceramic substrates (so-called micromodules), each containing a single miniaturized component. Components could then be integrated and wired into a bidimensional or tridimensional compact grid. This idea, which seemed very promising in 1957, was proposed to the US Army by Jack Kilby and led to the short-lived Micromodule Program (similar to 1951's Project Tinkertoy). However, as the project was gaining momentum, Kilby came up with a new, revolutionary design: the IC. Newly employed by Texas Instruments, Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working example of an integrated circuit on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material … wherein all the components of the electronic circuit are completely integrated". The first customer for the new invention was the US Air Force. Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit. 
However, Kilby's invention was not a true monolithic integrated circuit chip since it had external gold-wire connections, which would have made it difficult to mass-produce. Half a year after Kilby, Robert Noyce at Fairchild Semiconductor invented the first true monolithic IC chip. More practical than Kilby's implementation, Noyce's chip was made of silicon, whereas Kilby's was made of germanium, and Noyce's was fabricated using the planar process, developed in early 1959 by his colleague Jean Hoerni, and included the critical on-chip aluminum interconnecting lines. Modern IC chips are based on Noyce's monolithic IC, rather than Kilby's. NASA's Apollo Program was the largest single consumer of integrated circuits between 1961 and 1965. === TTL integrated circuits === Transistor–transistor logic (TTL) was developed by James L. Buie in the early 1960s at TRW Inc. TTL became the dominant integrated circuit technology during the 1970s to early 1980s. Dozens of TTL integrated circuits were a standard method of construction for the processors of minicomputers and mainframe computers. Computers such as IBM 360 mainframes, PDP-11 minicomputers and the desktop Datapoint 2200 were built from bipolar integrated circuits, either TTL or the even faster emitter-coupled logic (ECL). === MOS integrated circuits === Nearly all modern IC chips are metal–oxide–semiconductor (MOS) integrated circuits, built from MOSFETs (metal–oxide–silicon field-effect transistors). The MOSFET, invented at Bell Labs between 1955 and 1960, made it possible to build high-density integrated circuits. In contrast to bipolar transistors, which required a number of steps for the p–n junction isolation of transistors on a chip, MOSFETs required no such steps but could be easily isolated from each other. Its advantage for integrated circuits was pointed out by Dawon Kahng in 1961. The list of IEEE milestones includes the first integrated circuit by Kilby in 1958, Hoerni's planar process and Noyce's planar IC in 1959. The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS integrated circuit in 1964, a 120-transistor shift register developed by Robert Norman. By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s. Following the development of the self-aligned gate (silicon-gate) MOSFET by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC technology with self-aligned gates, the basis of all modern CMOS integrated circuits, was developed at Fairchild Semiconductor by Federico Faggin in 1968. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on a single MOS LSI chip. This led to the inventions of the microprocessor and the microcontroller by the early 1970s. During the early 1970s, MOS integrated circuit technology enabled the very large-scale integration (VLSI) of more than 10,000 transistors on a single chip. At first, MOS-based computers only made sense when high density was required, such as in aerospace applications and pocket calculators. 
Computers built entirely from TTL, such as the 1970 Datapoint 2200, were much faster and more powerful than single-chip MOS microprocessors such as the 1972 Intel 8008 until the early 1980s. Advances in IC technology, primarily smaller features and larger chips, have allowed the number of MOS transistors in an integrated circuit to double every two years, a trend known as Moore's law. Moore originally stated it would double every year, but he went on to change the claim to every two years in 1975. This increased capacity has been used to decrease cost and increase functionality. In general, as the feature size shrinks, almost every aspect of an IC's operation improves. The cost per transistor and the switching power consumption per transistor go down, while the memory capacity and speed go up, through the relationships defined by Dennard scaling (MOSFET scaling). Because speed, capacity, and power consumption gains are apparent to the end user, there is fierce competition among the manufacturers to use finer geometries. Over the years, transistor sizes have decreased from tens of microns in the early 1970s to 10 nanometers in 2017, with a corresponding million-fold increase in transistors per unit area. As of 2016, typical chip areas range from a few square millimeters to around 600 mm2, with up to 25 million transistors per mm2. The expected shrinking of feature sizes and the needed progress in related areas was forecast for many years by the International Technology Roadmap for Semiconductors (ITRS). The final ITRS was issued in 2016, and it is being replaced by the International Roadmap for Devices and Systems. Initially, ICs were strictly electronic devices. The success of ICs has led to the integration of other technologies, in an attempt to obtain the same advantages of small size and low cost. These technologies include mechanical devices, optics, and sensors. Charge-coupled devices, and the closely related active-pixel sensors, are chips that are sensitive to light. They have largely replaced photographic film in scientific, medical, and consumer applications. Billions of these devices are now produced each year for applications such as cellphones, tablets, and digital cameras. This sub-field of ICs was recognized with the Nobel Prize in Physics in 2009. Very small mechanical devices driven by electricity can be integrated onto chips, a technology known as microelectromechanical systems (MEMS). These devices were developed in the late 1980s and are used in a variety of commercial and military applications. Examples include DLP projectors, inkjet printers, and accelerometers and MEMS gyroscopes used to deploy automobile airbags. Since the early 2000s, the integration of optical functionality (optical computing) into silicon chips has been actively pursued in both academic research and industry, resulting in the successful commercialization of silicon-based integrated optical transceivers combining optical devices (modulators, detectors, routing) with CMOS-based electronics. Photonic integrated circuits that use light, such as Lightelligence's PACE (Photonic Arithmetic Computing Engine), are also being developed, drawing on the emerging field of physics known as photonics. Integrated circuits are also being developed for sensor applications in medical implants or other bioelectronic devices. Special sealing techniques have to be applied in such biogenic environments to avoid corrosion or biodegradation of the exposed semiconductor materials. 
As of 2018, the vast majority of all transistors are MOSFETs fabricated in a single layer on one side of a chip of silicon in a flat two-dimensional planar process. Researchers have produced prototypes of several promising alternatives, such as: various approaches to stacking several layers of transistors to make a three-dimensional integrated circuit (3DIC), such as through-silicon via, "monolithic 3D", stacked wire bonding, and other methodologies. transistors built from other materials: graphene transistors, molybdenite transistors, carbon nanotube field-effect transistors, gallium nitride transistors, transistor-like nanowire electronic devices, organic field-effect transistors, etc. fabricating transistors over the entire surface of a small sphere of silicon. modifications to the substrate, typically to make "flexible transistors" for a flexible display or other flexible electronics, possibly leading to a roll-away computer. As it becomes more difficult to manufacture ever smaller transistors, companies are using multi-chip modules/chiplets, three-dimensional integrated circuits, package on package, High Bandwidth Memory and through-silicon vias with die stacking to increase performance and reduce size, without having to reduce the size of the transistors. Such techniques are collectively known as advanced packaging. Advanced packaging is mainly divided into 2.5D and 3D packaging. 2.5D describes approaches such as multi-chip modules, while 3D describes approaches where dies are stacked in one way or another, such as package on package and high bandwidth memory. All approaches involve 2 or more dies in a single package. Alternatively, approaches such as 3D NAND stack multiple layers on a single die. Techniques have also been demonstrated for including microfluidic cooling on integrated circuits to improve cooling performance, as well as Peltier thermoelectric coolers on solder bumps, or thermal solder bumps used exclusively for heat dissipation, in flip-chip packages. == Design == The cost of designing and developing a complex integrated circuit is quite high, normally in the multiple tens of millions of dollars. Therefore, it only makes economic sense to produce integrated circuit products with high production volume, so the non-recurring engineering (NRE) costs are spread across typically millions of production units. Modern semiconductor chips have billions of components, and are far too complex to be designed by hand. Software tools to help the designer are essential. Electronic design automation (EDA), also referred to as electronic computer-aided design (ECAD), is a category of software tools for designing electronic systems, including integrated circuits. The tools work together in a design flow that engineers use to design, verify, and analyze entire semiconductor chips. Some of the latest EDA tools use artificial intelligence (AI) to help engineers save time and improve chip performance. == Types == Integrated circuits can be broadly classified into analog, digital and mixed signal, consisting of analog and digital signaling on the same IC. Digital integrated circuits can contain billions of logic gates, flip-flops, multiplexers, and other circuits in a few square millimeters. The small size of these circuits allows high speed, low power dissipation, and reduced manufacturing cost compared with board-level integration. These digital ICs, typically microprocessors, DSPs, and microcontrollers, use Boolean algebra to process "one" and "zero" signals. 
Among the most advanced integrated circuits are the microprocessors or "cores", used in personal computers, cell-phones, etc. Several cores may be integrated together in a single IC or chip. Digital memory chips and application-specific integrated circuits (ASICs) are examples of other families of integrated circuits. In the 1980s, programmable logic devices were developed. These devices contain circuits whose logical function and connectivity can be programmed by the user, rather than being fixed by the integrated circuit manufacturer. This allows a chip to be programmed to do various LSI-type functions such as logic gates, adders and registers. Programmability comes in various forms – devices that can be programmed only once, devices that can be erased and then re-programmed using UV light, devices that can be (re)programmed using flash memory, and field-programmable gate arrays (FPGAs) which can be programmed at any time, including during operation. Current FPGAs can (as of 2016) implement the equivalent of millions of gates and operate at frequencies up to 1 GHz. Analog ICs, such as sensors, power management circuits, and operational amplifiers (op-amps), process continuous signals, and perform analog functions such as amplification, active filtering, demodulation, and mixing. ICs can combine analog and digital circuits on a chip to create functions such as analog-to-digital converters and digital-to-analog converters. Such mixed-signal circuits offer smaller size and lower cost, but must account for signal interference. Prior to the late 1990s, radios could not be fabricated in the same low-cost CMOS processes as microprocessors. But since 1998, radio chips have been developed using RF CMOS processes. Examples include Intel's DECT cordless phone, or 802.11 (Wi-Fi) chips created by Atheros and other companies. Modern electronic component distributors often further sub-categorize integrated circuits: Digital ICs are categorized as logic ICs (such as microprocessors and microcontrollers), memory chips (such as MOS memory and floating-gate memory), interface ICs (level shifters, serializer/deserializer, etc.), power management ICs, and programmable devices. Analog ICs are categorized as linear integrated circuits and RF circuits (radio frequency circuits). Mixed-signal integrated circuits are categorized as data acquisition ICs (including A/D converters, D/A converters, digital potentiometers), clock/timing ICs, switched capacitor (SC) circuits, and RF CMOS circuits. Three-dimensional integrated circuits (3D ICs) are categorized into through-silicon via (TSV) ICs and Cu-Cu connection ICs. == Manufacturing == === Fabrication === The semiconductors of the periodic table of the chemical elements were identified as the most likely materials for a solid-state vacuum tube. Starting with copper oxide, proceeding to germanium, then silicon, the materials were systematically studied in the 1940s and 1950s. Today, monocrystalline silicon is the main substrate used for ICs although some III-V compounds of the periodic table such as gallium arsenide are used for specialized applications like LEDs, lasers, solar cells and the highest-speed integrated circuits. It took decades to perfect methods of creating crystals with minimal defects in semiconducting materials' crystal structure. Semiconductor ICs are fabricated in a planar process which includes three key process steps – photolithography, deposition (such as chemical vapor deposition), and etching. 
The main process steps are supplemented by doping and cleaning. More recent or high-performance ICs may use multi-gate FinFET or GAAFET transistors instead of planar ones, starting at the 22 nm node (Intel) or 16/14 nm nodes. Mono-crystal silicon wafers are used in most applications (or for special applications, other semiconductors such as gallium arsenide are used). The wafer need not be entirely silicon. Photolithography is used to mark different areas of the substrate to be doped or to have polysilicon, insulators or metal (typically aluminium or copper) tracks deposited on them. Dopants are impurities intentionally introduced to a semiconductor to modulate its electronic properties. Doping is the process of adding dopants to a semiconductor material. Integrated circuits are composed of many overlapping layers, each defined by photolithography, and normally shown in different colors. Some layers mark where various dopants are diffused into the substrate (called diffusion layers), some define where additional ions are implanted (implant layers), some define the conductors (doped polysilicon or metal layers), and some define the connections between the conducting layers (via or contact layers). All components are constructed from a specific combination of these layers. In a self-aligned CMOS process, a transistor is formed wherever the gate layer (polysilicon or metal) crosses a diffusion layer (this is called "the self-aligned gate"). Capacitive structures, in form very much like the parallel conducting plates of a traditional electrical capacitor, are formed according to the area of the "plates", with insulating material between the plates. Capacitors of a wide range of sizes are common on ICs. Meandering stripes of varying lengths are sometimes used to form on-chip resistors, though most logic circuits do not need any resistors. The ratio of the length of the resistive structure to its width, combined with its sheet resistivity, determines the resistance. More rarely, inductive structures can be built as tiny on-chip coils, or simulated by gyrators. Since a CMOS device only draws current on the transition between logic states, CMOS devices consume much less current than bipolar junction transistor devices. A random-access memory is the most regular type of integrated circuit; the highest density devices are thus memories; but even a microprocessor will have memory on the chip. Although the structures are intricate – with widths which have been shrinking for decades – the layers remain much thinner than the device widths. The layers of material are fabricated much like a photographic process, although light waves in the visible spectrum cannot be used to "expose" a layer of material, as they would be too large for the features. Thus photons of higher frequencies (typically ultraviolet) are used to create the patterns for each layer. Because each feature is so small, electron microscopes are essential tools for a process engineer who might be debugging a fabrication process. Each device is tested before packaging using automated test equipment (ATE), in a process known as wafer testing, or wafer probing. The wafer is then cut into rectangular blocks, each of which is called a die. Each good die (plural dice, dies, or die) is then connected into a package using aluminium (or gold) bond wires which are thermosonically bonded to pads, usually found around the edge of the die.
Thermosonic bonding was first introduced by A. Coucoulas which provided a reliable means of forming these vital electrical connections to the outside world. After packaging, the devices go through final testing on the same or similar ATE used during wafer probing. Industrial CT scanning can also be used. Test cost can account for over 25% of the cost of fabrication on lower-cost products, but can be negligible on low-yielding, larger, or higher-cost devices. As of 2022, a fabrication facility (commonly known as a semiconductor fab) can cost over US$12 billion to construct. The cost of a fabrication facility rises over time because of increased complexity of new products; this is known as Rock's law. Such a facility features: The wafers up to 300 mm in diameter (wider than a common dinner plate). As of 2022, 5 nm transistors. Copper interconnects where copper wiring replaces aluminum for interconnects. Low-κ dielectric insulators. Silicon on insulator (SOI). Strained silicon in a process used by IBM known as Strained silicon directly on insulator (SSDOI). Multigate devices such as tri-gate transistors. ICs can be manufactured either in-house by integrated device manufacturers (IDMs) or using the foundry model. IDMs are vertically integrated companies (like Intel and Samsung) that design, manufacture and sell their own ICs, and may offer design and/or manufacturing (foundry) services to other companies (the latter often to fabless companies). In the foundry model, fabless companies (like Nvidia) only design and sell ICs and outsource all manufacturing to pure play foundries such as TSMC. These foundries may offer IC design services. === Packaging === The earliest integrated circuits were packaged in ceramic flat packs, which continued to be used by the military for their reliability and small size for many years. Commercial circuit packaging quickly moved to the dual in-line package (DIP), first in ceramic and later in plastic, which is commonly cresol-formaldehyde-novolac. In the 1980s pin counts of VLSI circuits exceeded the practical limit for DIP packaging, leading to pin grid array (PGA) and leadless chip carrier (LCC) packages. Surface mount packaging appeared in the early 1980s and became popular in the late 1980s, using finer lead pitch with leads formed as either gull-wing or J-lead, as exemplified by the small-outline integrated circuit (SOIC) package – a carrier which occupies an area about 30–50% less than an equivalent DIP and is typically 70% thinner. This package has "gull wing" leads protruding from the two long sides and a lead spacing of 0.050 inches. In the late 1990s, plastic quad flat pack (PQFP) and thin small-outline package (TSOP) packages became the most common for high pin count devices, though PGA packages are still used for high-end microprocessors. Ball grid array (BGA) packages have existed since the 1970s. Flip-chip Ball Grid Array packages, which allow for a much higher pin count than other package types, were developed in the 1990s. In an FCBGA package, the die is mounted upside-down (flipped) and connects to the package balls via a package substrate that is similar to a printed-circuit board rather than by wires. FCBGA packages allow an array of input-output signals (called Area-I/O) to be distributed over the entire die rather than being confined to the die periphery. BGA devices have the advantage of not needing a dedicated socket but are much harder to replace in case of device failure. 
Intel transitioned away from PGA to land grid array (LGA) and BGA beginning in 2004, with the last PGA socket released in 2014 for mobile platforms. As of 2018, AMD uses PGA packages on mainstream desktop processors, BGA packages on mobile processors, and high-end desktop and server microprocessors use LGA packages. Electrical signals leaving the die must pass through the material electrically connecting the die to the package, through the conductive traces (paths) in the package, through the leads connecting the package to the conductive traces on the printed circuit board. The materials and structures used in the path these electrical signals must travel have very different electrical properties, compared to those that travel to different parts of the same die. As a result, they require special design techniques to ensure the signals are not corrupted, and much more electric power than signals confined to the die itself. When multiple dies are put in one package, the result is a system in package, abbreviated SiP. A multi-chip module (MCM), is created by combining multiple dies on a small substrate often made of ceramic. The distinction between a large MCM and a small printed circuit board is sometimes fuzzy. Packaged integrated circuits are usually large enough to include identifying information. Four common sections are the manufacturer's name or logo, the part number, a part production batch number and serial number, and a four-digit date-code to identify when the chip was manufactured. Extremely small surface-mount technology parts often bear only a number used in a manufacturer's lookup table to find the integrated circuit's characteristics. The manufacturing date is commonly represented as a two-digit year followed by a two-digit week code, such that a part bearing the code 8341 was manufactured in week 41 of 1983, or approximately in October 1983. == Intellectual property == The possibility of copying by photographing each layer of an integrated circuit and preparing photomasks for its production on the basis of the photographs obtained is a reason for the introduction of legislation for the protection of layout designs. The US Semiconductor Chip Protection Act of 1984 established intellectual property protection for photomasks used to produce integrated circuits. A diplomatic conference held at Washington, D.C., in 1989 adopted a Treaty on Intellectual Property in Respect of Integrated Circuits, also called the Washington Treaty or IPIC Treaty. The treaty is currently not in force, but was partially integrated into the TRIPS agreement. There are several United States patents connected to the integrated circuit, which include patents by J.S. Kilby US3,138,743, US3,261,081, US3,434,015 and by R.F. Stewart US3,138,747. National laws protecting IC layout designs have been adopted in a number of countries, including Japan, the EC, the UK, Australia, and Korea. The UK enacted the Copyright, Designs and Patents Act, 1988, c. 48, § 213, after it initially took the position that its copyright law fully protected chip topographies. See British Leyland Motor Corp. v. Armstrong Patents Co. Criticisms of inadequacy of the UK copyright approach as perceived by the US chip industry are summarized in further chip rights developments. Australia passed the Circuit Layouts Act of 1989 as a sui generis form of chip protection. Korea passed the Act Concerning the Layout-Design of Semiconductor Integrated Circuits in 1992. 
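The two-digit year and two-digit week date code described in the packaging section above can be decoded mechanically. The short Python sketch below is illustrative only; in particular, the century (assumed here to be the 1900s) is not encoded in the marking and must come from other context about the part:

```python
# Illustrative decoder for a four-digit IC date code (YYWW format).
# Assumption: the two-digit year belongs to the 1900s; real parts need
# external context (package style, part history) to resolve the century.

def decode_date_code(code: str, century: int = 1900) -> tuple[int, int]:
    """Return (year, week) for a code such as '8341' -> (1983, 41)."""
    if len(code) != 4 or not code.isdigit():
        raise ValueError("expected a four-digit code such as '8341'")
    year = century + int(code[:2])
    week = int(code[2:])
    if not 1 <= week <= 53:
        raise ValueError("week field out of range")
    return year, week

print(decode_date_code("8341"))  # (1983, 41), i.e. week 41 of 1983
```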
== Generations == In the early days of simple integrated circuits, the technology's large scale limited each chip to only a few transistors, and the low degree of integration meant the design process was relatively simple. Manufacturing yields were also quite low by today's standards. As metal–oxide–semiconductor (MOS) technology progressed, millions and then billions of MOS transistors could be placed on one chip, and good designs required thorough planning, giving rise to the field of electronic design automation, or EDA. Some SSI and MSI chips, like discrete transistors, are still mass-produced, both to maintain old equipment and build new devices that require only a few gates. The 7400 series of TTL chips, for example, has become a de facto standard and remains in production. === Small-scale integration (SSI) === The first integrated circuits contained only a few transistors. Early digital circuits containing tens of transistors provided a few logic gates, and early linear ICs such as the Plessey SL201 or the Philips TAA320 had as few as two transistors. The number of transistors in an integrated circuit has increased dramatically since then. The term "large scale integration" (LSI) was first used by IBM scientist Rolf Landauer when describing the theoretical concept; that term gave rise to the terms "small-scale integration" (SSI), "medium-scale integration" (MSI), "very-large-scale integration" (VLSI), and "ultra-large-scale integration" (ULSI). The early integrated circuits were SSI. SSI circuits were crucial to early aerospace projects, and aerospace projects helped inspire development of the technology. Both the Minuteman missile and Apollo program needed lightweight digital computers for their inertial guidance systems. Although the Apollo Guidance Computer led and motivated integrated-circuit technology, it was the Minuteman missile that forced it into mass-production. The Minuteman missile program and various other United States Navy programs accounted for the total $4 million integrated circuit market in 1962, and by 1968, U.S. Government spending on space and defense still accounted for 37% of the $312 million total production. The demand by the U.S. Government supported the nascent integrated circuit market until costs fell enough to allow IC firms to penetrate the industrial market and eventually the consumer market. The average price per integrated circuit dropped from $50 in 1962 to $2.33 in 1968. Integrated circuits began to appear in consumer products by the turn of the 1970s decade. A typical application was FM inter-carrier sound processing in television receivers. The first application MOS chips were small-scale integration (SSI) chips. Following Mohamed M. Atalla's proposal of the MOS integrated circuit chip in 1960, the earliest experimental MOS chip to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. The first practical application of MOS SSI chips was for NASA satellites. === Medium-scale integration (MSI) === The next step in the development of integrated circuits introduced devices which contained hundreds of transistors on each chip, called "medium-scale integration" (MSI). MOSFET scaling technology made it possible to build high-density chips. By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips. In 1964, Frank Wanlass demonstrated a single-chip 16-bit shift register he designed, with a then-incredible 120 MOS transistors on a single chip. 
The same year, General Microelectronics introduced the first commercial MOS integrated circuit chip, consisting of 120 p-channel MOS transistors. It was a 20-bit shift register, developed by Robert Norman and Frank Wanlass. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to chips with hundreds of MOSFETs on a chip by the late 1960s. === Large-scale integration (LSI) === Further development, driven by the same MOSFET scaling technology and economic factors, led to "large-scale integration" (LSI) by the mid-1970s, with tens of thousands of transistors per chip. The masks used to process and manufacture SSI, MSI and early LSI and VLSI devices (such as the microprocessors of the early 1970s) were mostly created by hand, often using Rubylith-tape or similar. For large or complex ICs (such as memories or processors), this was often done by specially hired professionals in charge of circuit layout, placed under the supervision of a team of engineers, who would also, along with the circuit designers, inspect and verify the correctness and completeness of each mask. Integrated circuits such as 1K-bit RAMs, calculator chips, and the first microprocessors, that began to be manufactured in moderate quantities in the early 1970s, had under 4,000 transistors. True LSI circuits, approaching 10,000 transistors, began to be produced around 1974, for computer main memories and second-generation microprocessors. === Very-large-scale integration (VLSI) === "Very-large-scale integration" (VLSI) is a development that started with hundreds of thousands of transistors in the early 1980s. As of 2023, maximum transistor counts continue to grow beyond 5.3 trillion transistors per chip. Multiple developments were required to achieve this increased density. Manufacturers moved to smaller MOSFET design rules and cleaner fabrication facilities. The path of process improvements was summarized by the International Technology Roadmap for Semiconductors (ITRS), which has since been succeeded by the International Roadmap for Devices and Systems (IRDS). Electronic design tools improved, making it practical to finish designs in a reasonable time. The more energy-efficient CMOS replaced NMOS and PMOS, avoiding a prohibitive increase in power consumption. The complexity and density of modern VLSI devices made it no longer feasible to check the masks or do the original design by hand. Instead, engineers use EDA tools to perform most functional verification work. In 1986, one-megabit random-access memory (RAM) chips were introduced, containing more than one million transistors. Microprocessor chips passed the million-transistor mark in 1989, and the billion-transistor mark in 2005. The trend continues largely unabated, with chips introduced in 2007 containing tens of billions of memory transistors. === ULSI, WSI, SoC and 3D-IC === To reflect further growth of the complexity, the term ULSI that stands for "ultra-large-scale integration" was proposed for chips of more than 1 million transistors. Wafer-scale integration (WSI) is a means of building very large integrated circuits that uses an entire silicon wafer to produce a single "super-chip". Through a combination of large size and reduced packaging, WSI could lead to dramatically reduced costs for some systems, notably massively parallel supercomputers. The name is taken from the term Very-Large-Scale Integration, the current state of the art when WSI was being developed. 
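The growth in on-chip transistor counts traced through the SSI-to-VLSI generations above is commonly summarized by Moore's law, the observation that counts roughly double about every two years. A back-of-the-envelope sketch follows; the 1971 starting point and the exact doubling period are illustrative assumptions rather than figures taken from this article:

```python
# Back-of-the-envelope Moore's law projection: counts double every ~2 years.
# The 1971 baseline (~2,300 transistors, roughly the Intel 4004) and the
# doubling period are illustrative assumptions.

def projected_transistors(year: int, base_year: int = 1971,
                          base_count: float = 2300,
                          doubling_years: float = 2.0) -> float:
    return base_count * 2 ** ((year - base_year) / doubling_years)

for y in (1971, 1989, 2005, 2023):
    print(y, f"{projected_transistors(y):.3g}")
```

Actual devices deviate from this idealized curve, but the exponential trend is the reason hand design gave way to electronic design automation.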
A system-on-a-chip (SoC or SOC) is an integrated circuit in which all the components needed for a computer or other system are included on a single chip. The design of such a device can be complex and costly, and whilst performance benefits can be had from integrating all needed components on one die, the cost of licensing and developing a one-die machine still outweigh having separate devices. With appropriate licensing, these drawbacks are offset by lower manufacturing and assembly costs and by a greatly reduced power budget: because signals among the components are kept on-die, much less power is required (see Packaging). Further, signal sources and destinations are physically closer on die, reducing the length of wiring and therefore latency, transmission power costs and waste heat from communication between modules on the same chip. This has led to an exploration of so-called Network-on-Chip (NoC) devices, which apply system-on-chip design methodologies to digital communication networks as opposed to traditional bus architectures. A three-dimensional integrated circuit (3D-IC) has two or more layers of active electronic components that are integrated both vertically and horizontally into a single circuit. Communication between layers uses on-die signaling, so power consumption is much lower than in equivalent separate circuits. Judicious use of short vertical wires can substantially reduce overall wire length for faster operation. == Silicon labeling and graffiti == To allow identification during production, most silicon chips will have a serial number in one corner. It is also common to add the manufacturer's logo. Ever since ICs were created, some chip designers have used the silicon surface area for surreptitious, non-functional images or words. These artistic additions, often created with great attention to detail, showcase the designers' creativity and add a touch of personality to otherwise utilitarian components. These are sometimes referred to as chip art, silicon art, silicon graffiti or silicon doodling. == ICs and IC families == The 555 timer IC The Operational amplifier 7400-series integrated circuits 4000-series integrated circuits, the CMOS counterpart to the 7400 series (see also: 74HC00 series) Intel 4004, generally regarded as the first commercially available microprocessor, which led to the 8008, the famous 8080 CPU, the 8086, 8088 (used in the original IBM PC), and the fully-backward compatible (with the 8088/8086) 80286, 80386/i386, i486, etc. The MOS Technology 6502 and Zilog Z80 microprocessors, used in many home computers of the early 1980s The Motorola 6800 series of computer-related chips, leading to the 68000 and 88000 series (the 68000 series was very successful and was used in the Apple Lisa and pre-PowerPC-based Macintosh, Commodore Amiga, Atari ST/TT/Falcon030, and NeXT families of computers, along with many models of workstations and servers from many manufacturers in the 80s, along with many other systems and devices) The LM-series of analog integrated circuits == See also == Central processing unit Chip carrier CHIPS and Science Act Chipset Czochralski method Dark silicon Ion implantation Integrated injection logic Integrated passive devices Interconnect bottleneck Heat generation in integrated circuits High-temperature operating life Microelectronics Monolithic microwave integrated circuit Multi-threshold CMOS Silicon–germanium Sound chip SPICE Thermal simulations for integrated circuits Hybrot == References == == Further reading == Veendrick, H.J.M. 
(2025). Nanometer CMOS ICs, from Basics to ASICs. Springer. ISBN 978-3-031-64248-7. OCLC 1463505655. Baker, R.J. (2010). CMOS: Circuit Design, Layout, and Simulation (3rd ed.). Wiley-IEEE. ISBN 978-0-470-88132-3. OCLC 699889340. Marsh, Stephen P. (2006). Practical MMIC design. Artech House. ISBN 978-1-59693-036-0. OCLC 1261968369. Camenzind, Hans (2005). Designing Analog Chips (PDF). Virtual Bookworm. ISBN 978-1-58939-718-7. OCLC 926613209. Archived from the original (PDF) on 12 June 2017. Hans Camenzind invented the 555 timer Hodges, David; Jackson, Horace; Saleh, Resve (2003). Analysis and Design of Digital Integrated Circuits. McGraw-Hill. ISBN 978-0-07-228365-5. OCLC 840380650. Rabaey, J.M.; Chandrakasan, A.; Nikolic, B. (2003). Digital Integrated Circuits (2nd ed.). Pearson. ISBN 978-0-13-090996-1. OCLC 893541089. Mead, Carver; Conway, Lynn (1991). Introduction to VLSI systems. Addison Wesley Publishing Company. ISBN 978-0-201-04358-7. OCLC 634332043. == External links == Media related to Integrated circuits at Wikimedia Commons The first monolithic integrated circuits A large chart listing ICs by generic number including access to most of the datasheets for the parts. The History of the Integrated Circuit
Wikipedia/Integrated_circuits
Contextual design (CD) is a user-centered design process developed by Hugh Beyer and Karen Holtzblatt. It incorporates ethnographic methods for gathering data relevant to the product via field studies, rationalizing workflows, and designing human–computer interfaces. In practice, this means that researchers gather data from customers in the field, where people live and work, and apply these findings to the final product. Contextual design can be seen as an alternative to engineering and feature driven models of creating new systems. == Process overview == The contextual design process consists of the following top-level steps: contextual inquiry, interpretation, data consolidation, visioning, storyboarding, user environment design, and prototyping. === Collecting data – contextual inquiry === Contextual inquiry is a field data collection technique used to capture detailed information about how users of a product interact with the product in their normal work environment. This information is captured by both observations of user behavior and conversations with the user while she or he works. A key aspect of the technique is to partner with the user, letting their work and the issues they encounter guide the interview. Key takeaways from the technique are to learn what users actually do, why they do it that way, and their latent needs, desires, and core values. === Interpretation === Data from each interview is analyzed and key issues and insights are captured. Detailed work models are also created in order to understand the different aspects of the work that matter for design. Contextual design consists of five work models which are used to model the work tasks and details of the working environment. These work models are: Flow model – represents the coordination, communication, interaction, roles, and responsibilities of the people in a certain work practice Sequence model – represents the steps users go through to accomplish a certain activity, including breakdowns Cultural model – represents the norms, influences, and pressures that are present in the work environment Artifact model – represents the documents or other physical things that are created while working or are used to support the work. Artifacts often have a structure or styling that could represent the user's way of structuring the work Physical model – represents the physical environment where the work tasks are accomplished; often, there are multiple physical models representing, e.g., office layout, network topology, or the layout of tools on a computer display. === Data consolidation === Data from individual customer interviews are analyzed in order to reveal patterns and the structure across distinct interviews. Models of the same type can be consolidated together (but not generalized—detail must be maintained). Another method of processing the observations is making an affinity diagram ("wall"), as described by Beyer & Holtzblatt: A single observation is written on each piece of paper. Individual notes are grouped according to the similarity of their contents. These groups are labeled with colored Post-it notes, each color representing a distinct level in the hierarchy. Then the groups are combined with other groups to get the final construct of observations in a hierarchy of up to three levels.
Beyer & Holtzblatt propose the following color-coding convention for grouping the notes, from lowest to highest level in the hierarchy: White notes – individual notes captured during interpretation, also known as "affinity notes" Blue notes – summaries of groups of white notes that convey all the relevant details Pink notes – summaries of groups of blue notes that reveal key issues in the data Green notes – labels identifying an area of concern indicated by pink notes Beyer & Holtzblatt emphasize the importance of building the entire affinity diagram in one or two sessions rather than building smaller affinity diagrams over many sessions. This immersion in the data for an extended period of time helps teams see the broad scope of a problem quickly and encourages a paradigm shift of thought rather than assimilation of ideas. The design ideas and relevant issues that arise during the process should be included in the affinity diagram. Any holes in the data and areas that need more information should also be labeled. After completing the wall, participants "walk" the affinity diagram to stimulate new ideas and identify any remaining issues or holes in data. The affinity diagram is a bottom-up method. Consolidated data may also be used to create a cause-and-effect diagram or a set of personas describing typical users of the proposed system. === Visioning === In visioning, a cross-functional team comes together to create stories of how new product concepts, services, and technology can better support the user work practice. The visioning team starts by reviewing the data to identify key issues and opportunities. The data walking session is followed by a group visioning session during which the visioning team generates a variety of new product concepts by telling stories of different usage scenarios based on the data collected. A vision includes the system, its delivery, and support structures to make the new work practice successful, but is told from the user's point of view. === Storyboarding === After visioning, the team develops the vision in storyboards, capturing scenarios of how people will work with the new system. Understanding the current way of working, its structure and the complete workflow helps the design team address the problems and design the new workflow. Storyboards work out the details of the vision, guided by the consolidated data, using pictures and text in a series of hand-drawn cells. === User Environment Design === The User Environment Design captures the floor plan of the new system. It shows each part of the system, how it supports the user's work, exactly what function is available in that part, and how the user gets to and from other parts of the system. Contextual design uses the User Environment Design (UED) diagram, which displays the focus areas, i.e., areas which are visible to the user or which are relevant to the user. Focus areas can be defined further as functions in a system that support a certain type or part of the work. The UED also presents how the focus areas relate to each other and shows the links between focus areas. === Prototyping === Testing the design ideas with paper prototypes or even with more sophisticated interactive prototypes before the implementation phase helps the designers communicate with users about the new system and develop the design further. Prototypes test the structure of a User Environment Design and initial user interface ideas, as well as the understanding of the work, before the implementation phase. 
Depending on the results of the prototype test, more iterations or alternative designs may be needed. == Uses and adaptations == Contextual design has primarily been used for the design of computer information systems, including hardware and software. Parts of contextual design have been adapted for use as a usability evaluation method and for contextual application design. Contextual design has also been applied to the design of digital libraries and other learning technologies. Contextual design has also been used as a means of teaching user-centered design/Human–computer interaction at the university level. A more lightweight approach to contextual design has been developed by its originators to address an oft-heard criticism that the method is too labor-intensive or lengthy for some needs. Yet others find the designer/user engagement promoted by contextual design to be too brief. == References == == External links == Karen Holtzblatt and Hugh Beyer: "Contextual Design", in: Soegaard, Mads and Dam, Rikke Friis (eds.). The Encyclopedia of Human–Computer Interaction, 2nd Ed. Aarhus, Denmark: The Interaction Design Foundation, 2014. InContext: "Contextual Design" (InContext was founded by Karen Holtzblatt and Hugh Beyer). "Contextual inquiry" on UsabilityNet. Hostrings: Contextual Design and Development is the Driving Force Behind all Successful Mobile Applications.
Wikipedia/Contextual_design
Mechanical systems drawing is a type of technical drawing that shows information about heating, ventilating, air conditioning and transportation (elevators and escalators) around a building. It is a tool that helps analyze complex systems. These drawings are often a set of detailed drawings used for construction projects; they are a requirement for all HVAC work. They are based on the floor and reflected ceiling plans of the architect. After the mechanical drawings are complete, they become part of the construction drawings, which are then used to apply for a building permit. They are also used to determine the price of the project. == Sets of drawings == === Arrangement drawing === Arrangement drawings include information about the self-contained units that make up the system: table of parts, fabrication and detail drawing, overall dimensions, weight/mass, lifting points, and information needed to construct, test, lift, transport, and install the equipment. These drawings should show at least three different orthographic views and clear details of all the components and how they are assembled. === Assembly drawing === The assembly drawing typically includes three orthographic views of the system: overall dimensions, weight and mass, identification of all the components, quantities of material, supply details, list of reference drawings, and notes. Assembly drawings detail how certain component parts are assembled. An assembly drawing shows the order in which the product is put together, showing all the parts as if they were stretched out. This will help a welder to understand how the product will go together and get an idea of where the weld is needed. The assembly drawing will contain the following information: overall dimensions, weight and mass, identification of all the components, quantities of material, supply details, a list of reference drawings, and notes. === Detail drawing === In detail drawings, components used to build the mechanical system are described in some detail to show that the designer's specifications are met: relevant codes, standards, geometry, weight, mass, material, heat treatment requirements, surface texture, size tolerances, and geometric tolerances. === Fabrication drawings === A fabrication is made up of many different parts. A fabrication drawing has a list of parts that make up the fabrication. In the list, parts are identified (balloons and leader lines) and complex details are included: welding details, material standards, codes, and tolerances, and details about heat/stress treatments. === United Kingdom === ==== Tender drawings ==== ==== Special detailed drawing ==== Line diagrams and layouts indicating basic proposals, location of main items of plant, routes of main pipes, air ducts and cable runs in such detail as to illustrate the incorporation of the engineering services within the project as a whole. ==== Schematic drawing ==== The schematic is a line diagram, not necessarily to scale, that describes interconnection of components in a system.
The main features of a schematic drawing are: A two-dimensional layout with divisions that show distribution of the system between building levels, or an isometric-style layout that shows distribution of systems across individual floor levels All functional components that make up the system, i.e., plant items, pumps, fans, valves, strainers, terminals, electrical switchgear, distribution and components Symbols and line conventions, in accordance with industry standard guidance Labels for pipe, duct, and cable sizes where not shown elsewhere Components that have a sensing and control function, and links between them—building management systems, fire alarms and HV controls Major components, so their whereabouts in specifications and other drawings can be easily determined ==== Detailed design drawing ==== A drawing showing the intended locations of plant items and service routes in such detail as to indicate the design intent. The main features of detailed design drawings should be as follows: Plan layouts to a scale of at least 1:100. Plant areas to a scale of at least 1:50 and accompanied by cross-sections. The drawings do not indicate precise positions of services, but it should be feasible to install the services within the general routes indicated. It should be possible to produce co-ordination drawings or installation drawings without major re-routing of the services. Represent pipework by single line layouts. Represent ductwork by either double or single line layouts as required to ensure that the routes indicated are feasible. Indicate on the drawing the space available for major service routing in both horizontal and vertical planes. ==== Installation drawing ==== A drawing based on the detailed drawing, installation drawing or co-ordination drawing (interface drawing), with the primary purpose of defining the information needed by the tradesmen on site to install the works or to coordinate work among the various engineering assemblies. The main features of typical installation drawings are: Plan layouts to a scale of at least 1:50, accompanied by cross-sections to a scale of at least 1:20 for all congested areas A spatially coordinated drawing, i.e., showing no physical location clashes between the system components Allowance for inclusion of all supports and fixtures necessary to install the works Allowance for the service at its widest point for spaces between pipe and duct runs, for insulation, standard fitting dimensions, and joint widths Installation details provided from shop drawings Installation working space; space to facilitate commissioning and space to allow on-going operation and maintenance in accordance with the relevant health and safety requirements Plant and equipment including alternatives and options Dimensions where the positioning of services is important enough not to be left to the installers Plant room layouts to a scale of at least 1:20, accompanied by cross-sections and elevations to a scale of at least 1:20 ==== Record (as installed, as-built) drawing ==== A drawing showing the building and services installations as installed at the date of practical completion. Generally the record drawing is a development of the installation drawing. The main features of the record drawings should be as follows. Provide a record of the locations of all the systems and components installed including pumps, fans, valves, strainers, terminals, electrical switchgear, distribution and components. Use a scale not less than that of the installation drawings.
Have marked on the drawings the positions of access points for operating and maintenance purposes. The drawings should not be dimensioned unless the inclusion of a dimension is considered necessary for location. ==== Builder's work drawing ==== ===== Design stage ===== These drawings show the provisions required to accommodate the services that significantly affect the design of the building structure, fabric, and external works. This includes drawings (and schedules) of work the building trade carries out, or that must be cost-estimated at the design stage, e.g., plant bases. ===== Installation stage ===== These drawings show requirements for building works necessary to facilitate installing the engineering services (other than where it is appropriate to mark out on site). Information on these drawings includes details of all: Bases for plant formed in concrete, brickwork or blockwork, to a scale of not less than 1:20 Attendant builders work, holes, chases, etc. for conduits, cables and trunking etc. and any item where access for a function of the installation is required to a scale of not less than 1:100 Purpose made brackets for supporting service or plant/equipment to a scale of not less than 1:50 Accesses into ceilings, ducts, etc. at a scale of not less than 1:50 Special fixings, inserts, brackets, anchors, suspensions, supports etc. at a scale of not less than 1:20 Sleeves, puddle flanges, access chambers at a scale not less than 1:20 == Details to include == Size, type, and layout of ducting Diffusers, heat registers, return air grilles, dampers Turning vanes, ductwork insulation HVAC unit Thermostats Electrical, water, and/or gas connections Ventilation Exhaust fans Symbol legend, general notes and specific key notes Heating and/or cooling load summary Connection to existing systems Demolition of part or all of existing systems Smoke detector and firestat re-ducting Thermostat programming Heat loss and heat gain calculations Special condition == Job outlook == About 80,000 jobs were held by mechanical drafters in the United States of America in 2008. From 2008 to 2018, the hiring rate for mechanical drafters was expected to neither increase nor decrease. Prospective drafters are encouraged either to take two additional years of training in drafting school after high school or to attend a four-year college/university to develop better technical skills and gain more experience with CAD (computer-aided design). === Income of mechanical drafters in 2008 === Lowest 10% made $29,390. Highest 10% made $71,340. Middle 50% made between $36,490 and $59,010. Median: $46,640. == ADDA certification == The American Design Drafting Association (ADDA) has developed a Drafter Certification Test. The test assesses the drafter's skill in basic drafting concepts: geometric construction, working drawings, and architectural terms and standards. The test is administered periodically at ADDA-authorized sites. == Regulations in Canada == Mechanical system drawings must abide by all of the following regulations: the National Building Code of Canada, the National Fire Code, and the Model National Energy Code of Canada for Buildings. For residential projects, the National Housing Code of Canada and the Model National Energy Code of Canada for Houses must also be followed. These drawings must also adhere to local and provincial codes and bylaws.
== See also == Architectural drawing Electrical drawing Engineering drawing Plumbing drawing Structural drawing Plan (drawing) == References == == External links == Examples of mechanical drawings Mechanical Drafters Occupational Employment and Wages, May 2023
Wikipedia/Mechanical_systems_drawing
A bill of materials or product structure (sometimes bill of material, BOM or associated list) is a list of the raw materials, sub-assemblies, intermediate assemblies, sub-components, parts, and the quantities of each needed to manufacture an end product. A BOM may be used for communication between manufacturing partners or confined to a single manufacturing plant. A bill of materials is often tied to a production order whose issuance may generate reservations for components in the bill of materials that are in stock and requisitions for components that are not in stock. The first hierarchical databases were developed for automating bills of materials for manufacturing organizations in the early 1960s. At present, this BOM is used as a database to identify the many parts and their codes in automobile manufacturing companies. A BOM can also be visually represented by a product structure tree, although they are rarely used in the workplace. For example, one of them is Time-Phased Product Structure where this diagram illustrates the time needed to build or acquire the needed components to assemble the final product. For each product, the time-phased product structure shows the sequence and duration of each operation. == Structure == BOMs are of hierarchical nature, with the top level representing the finished product which may be a sub-assembly or a completed item. BOMs that describe the sub-assemblies are referred to as modular BOMs. An example of this is the NAAMS BOM (North American Automotive Metric Standard) that is used in the automotive industry to list all the components in an assembly line. The structure of the NAAMS BOM is System, Line, Tool, Unit and Detail. A bill of materials "implosion" links component pieces to a major assembly, while a bill of materials "explosion" breaks apart each assembly or sub-assembly into its component parts. == Usage == In process industries, the BOM is also known as the formula, recipe, or ingredients list. The phrase "bill of material" (or "BOM") is frequently used by engineers attributively to refer not to the literal bill, but to the current production configuration of a product, to distinguish it from modified or improved versions under study or in test. In electronics, the BOM represents the list of components used on the printed circuit board (PCB). Once the design of the circuit is completed, the BOM list is passed on to the PCB layout engineer as well as the component engineer who will procure the components required for the design. == Types == A BOM can define products as they are designed (engineering bill of materials), as they are ordered (sales bill of materials), as they are built (manufacturing bill of materials), or as they are maintained (service bill of materials). The different types depend on the business need and use for which they are intended. Sometimes the term "pseudo-bill of materials" or "pseudo-BOM" is used to refer to a more flexible or simplified version. Often a place-holder part number is used to represent a group of related (usually standard) parts that have common attributes and are interchangeable in the context of this BOM. A modular BOM (or variant parts list) can be displayed in the following formats: A single-level BOM (or unit list) that displays the assembly or sub-assembly with only one level of children. Thus it displays the components directly needed to make the assembly or sub-assembly. 
An indented BOM (or structural parts list) that displays the highest-level item closest to the left margin and the components used in that item indented more to the right. Modular (planning) BOM A single-level BOM resolved to list the effectively needed quantities of components to produce a product (rather than to list each individual part by its logical name) is also called a quantity synopsis parts list. === Configurable BOM === A configurable bill of materials (CBOM) is a form of BOM used by industries that have multiple options and highly configurable products (e.g. telecom systems, data-center hardware (SANS, servers, etc.), PCs, cars). The CBOM is used to dynamically create "end-items" that a company sells. The benefit of using a CBOM structure is that it reduces the work effort needed to maintain product structures. The configurable BOM is most frequently driven by "configurator" software; however, it can be enabled manually (manual maintenance is infrequent because it is unwieldy to manage the number of permutations and combinations of possible configurations). The development of the CBOM is dependent on having a modular BOM structure in place. The modular BOM structure provides the assemblies/sub-systems that can be selected to "configure" an end-item. While most configurators utilize top-down hierarchical rules syntax to find appropriate modular BOMs, maintenance of very similar BOMs (i.e., only one component is different for various voltages) becomes highly excessive. A newer approach (bottom-up, rules-based structuring), utilizing a proprietary search engine scheme traversing selectable componentry at high speed, eliminates the duplication of planning modular BOMs. The search engine is also used for all combinatorial feature constraints and graphical user interface (GUI) representations to support specification selections. To decide which assembly variant of the parts or components is to be chosen, they are attributed with the product options, which are the characteristic features of the product. If the options of the product form an ideal Boolean algebra, it is possible to describe the connection between parts and product variants with a Boolean expression, which refers to a subset of the set of products. Parts which will not be assembled at all in one or more variants are typically marked as "DNP" (for "do not populate" or "do not place") in the affected variants. Other less frequently used designators for this include "NP" ("no placement", "not placed"), "NF" ("no fit", "not fitting"), "DNM" ("do not mount"), "NM" ("not mounted"), "NU" ("not used"), "DNI" ("do not install", "do not insert"), "DNE" ("do not equip"), "DNA" ("do not assemble"), "DNS" ("do not stuff"), "NOFIT" etc. === Multi-level BOM === A multi-level bill of materials (BOM), referred to as an indented BOM, is a bill of materials that lists the assemblies, components, and parts required to make a product in a parent-child, top-down method. It provides a display of all items that are in parent-children relationships. When an item is a sub-component of a (parent) component, it can in turn have its own child components, and so on. The resulting top-level BOM (item number) would include children: a mix of finished sub-assemblies, various parts and raw materials. A multi-level structure can be illustrated by a tree with several levels. In contrast, a single-level structure only consists of one level of children in components, assemblies and material.
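As a concrete illustration of the parent-child, multi-level structure described above, the following Python sketch "explodes" a small hypothetical BOM into the total quantity of each leaf-level part needed for one finished unit. The part names and quantities are invented for the example and are not taken from any real product:

```python
from collections import Counter

# Hypothetical multi-level BOM: parent -> list of (child, quantity per parent).
# Items that never appear as a parent are leaf parts (purchased parts or raw materials).
bom = {
    "bicycle": [("frame", 1), ("wheel_assembly", 2)],
    "wheel_assembly": [("rim", 1), ("spoke", 32), ("tire", 1)],
}

def explode(item: str, qty: int = 1, totals: Counter | None = None) -> Counter:
    """Recursively roll up leaf-level quantities for `qty` units of `item`."""
    if totals is None:
        totals = Counter()
    for child, per_parent in bom.get(item, []):
        if child in bom:                      # sub-assembly: recurse
            explode(child, qty * per_parent, totals)
        else:                                 # leaf part: accumulate
            totals[child] += qty * per_parent
    return totals

print(explode("bicycle"))
# Counter({'spoke': 64, 'rim': 2, 'tire': 2, 'frame': 1})
```

A corresponding "implosion" would invert the same mapping, answering where-used queries for a given component.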
== See also == Software bill of materials RKM code Component placement Product breakdown structure, similar breakdown structure used in project management == Notes == == References == == Further reading == Avak, Björn (2007) [November 2006]. Written at Zürich, Switzerland. Variant Management of Modular Product Families in the Market Phase (PDF). Fortschritt-Berichte VDI (Thesis). Reihe 16: Technik und Wirtschaft. Düsseldorf, Germany: VDI Verlag. doi:10.3929/ethz-a-005320674. eISSN 0507-617X. hdl:20.500.11850/149674. ISBN 978-3-18318016-5. ISSN 0178-9597. ETH No. 16885. Archived (PDF) from the original on 2018-04-22. Retrieved 2018-04-22.
Wikipedia/Bill_of_materials
Design is a form of intellectual property right concerned with the visual appearance of articles which have commercial or industrial use. The visual form of the product is what is protected rather than the product itself. The visual features protected are the shape, configuration, pattern or ornamentation. A design infringement is where a person infringes a registered design during the period of registration. The definition of a design infringement differs in each jurisdiction but typically encompasses the purported use and make of the design, as well as whether the design is imported or sold during registration. To determine whether a person has infringed the monopoly of the registered design, the design is assessed under each jurisdiction's provisions. The infringement is of the visual appearance of the manufactured product rather than the function of the product, which is covered under patents. Often infringement decisions are more focused on the similarities between the two designs, rather than the differences. == Legislation == === Australia === In Australia, a person infringes a registered design if a party manufactures and sells, uses or imports the same or similar design to the registered design without permission of the registered owner. This is held under s71 of the Designs Act 2003 (Cth). The following is an extract of s71 of the Designs Act 2003 (Cth), under Infringement of Design: "(1) A person infringes a registered design if, during the term of registration of the design, and without the licence or authority of the registered owner of the design, the person: (a) makes or offers to make a product, in relation to which the design is registered, which embodies a design that is identical to, or substantially similar in overall impression to, the registered design; or (b) imports such a product into Australia for sale, or for use for the purposes of any trade or business; or (c) sells, hires or otherwise disposes of, or offers to sell, hire or otherwise dispose of, such a product; or (d) uses such a product in any way for the purposes of any trade or business; or (e) keeps such a product for the purpose of doing any of the things mentioned in paragraph (c) or (d)". The Designs Act recognises two types of infringement: primary and secondary infringement. A primary infringement relates to s71(1)(a), where a person directs, causes or procures the product to be made by a third party. Secondary infringement relates to ss 71(1)(b), (c), (d), (e), where a person infringes a registered design if there is no licence or authority given. A parallel import of a registered design is allowed in Australia. The Designs Act 2003 replaced the Designs Act 1906, with particular changes to the way design infringements are identified. Key changes included removing tests of obvious and fraudulent imitations. It also introduced the requirement that a certificate of examination be issued prior to infringement proceedings. The test for infringement is significantly broader as it expressly requires an assessment of the similarities and differences between the registered design and the purported infringing design. === United Kingdom === Under the Registered Designs Act 1949, a design right is infringed when a person without consent from the registered design holder makes, offers, imports or exports the product. The infringement of rights in registered designs is laid out within Section 7A of the Registered Designs Act 1949. An infringement of the right in a registered design is actionable by the registered proprietor.
The Act advises that the right in a registered design is not infringed if the act is done in private and is not commercial in nature, is experimental, or is a reproduction for teaching purposes. The UK Court of Appeal confirmed that in determining an infringing article, the registered design, the alleged infringing object, and the prior art must be evaluated. Put simply, the test is a visual comparison between the two designs. The Act also provides an exemption for innocent infringers. Damages are not awarded against a defendant if there is sufficient evidence to prove that he/she was not aware that the design was registered. An alternative protection that the United Kingdom legislation offers is the principle of an unregistered design. To prosecute for an infringement, the unregistered design right holder must prove that they created the design in the first place and that the infringing article is a deliberate duplication. Further, it must be proved that the shape and overall configuration of the protected product is not the same as any products that have been publicised before the design was created. === United States === In the United States, designs are governed by the patent statute, set out in 35 USC § 171 (Chapter 16). Here, protection is given for a new, original and ornamental design of an article. As with other jurisdictions, the design patent within the US only provides protection for the visual design aspects of the article, rather than the function. Chapter 28 of 35 USC covers the infringement of patents, and defines an infringement as making, using, offering, or selling a patented design without authority. An infringement also covers any attempt to infringe a design, and selling components of patented articles. Section 271 outlines both direct and indirect infringement. Direct infringement encompasses the unauthorised importation of patented products (35 U.S.C. § 271(a)), and unauthorised importation of products of a patented process (35 U.S.C. § 271(a)). Indirect infringement imposes liability upon those who aided another in direct infringement of a registered design (35 U.S.C. § 271(b)) or contributed to infringement (35 U.S.C. § 271(c)). This highlights the most common type of infringement, where the infringer's knowledge of the actions taken is established to confirm whether there is an infringement. Proof of intent is also necessary to show the contribution to infringement. The statute does not provide protection for unregistered designs. To gain protection from infringement, and any design patent right, it is necessary to file a patent application. == Testing infringement == Infringement of a registered design can be identified through ‘the eyes of an ordinary observer’ test. This means that the appearance of an accused design is seen to be an infringement if the design is significantly similar and one may purchase the accused design product thinking that it is the patented design. This test is based on an ordinary observer being familiar with a product and being able to distinguish between the registered design and prior art designs. The case of Egyptian Goddess Inc. v Swisa Inc. was key in adopting the ordinary observer test. The Court found that for a design to be infringed, the accused design must have appropriated the registered design. The infringement lies in the similarity between the designs as distinguished from the prior art base.
In assessing the overall similarity of designs, for example, s19 of the Designs Act 2003 (Cth) provides a list of factors to consider in testing infringement. These factors include considering both the similarities and the differences between the designs; more weight must be given to the similarities between the two designs. If one aspect of a design is substantially similar, weight must be given to the importance of that aspect of the design. The decision maker must further adopt the standard of a person who is familiar with the products, the informed user. The ‘informed user’ test can also be used to identify an infringement of a registered or unregistered design. An informed user can range from a consumer to a sectoral expert with technical proficiency. The user will be able to notice small differences between designs and can be seen as particularly observant, having had personal experience with, or key knowledge of, the product. The informed user adopted as the benchmark should not always be an expert or a consumer in all cases. The selected informed user must be a person who has significant familiarity with the product's appearance, use and nature. == Audiences == Within design and patent law, experts are seen to be the primary decision makers in assessing the similarities and differences of designs. As infringement is judged from the perspective of different audiences of the design, e.g., consumers and experts, the motivations for infringement are distinguished. Consumers regard accused designs as substitutes where they function in similar ways; to consumers the designs would be interchangeable. The test for infringement of a design patent draws much more from trademark law than from patent law, as the test evokes an audience of reasonable purchasers of the design or product, similar to that of the trademark test. As mentioned, infringement is judged "in the eye of an ordinary observer". From this, the audience for the test of infringement is an ordinary observer who is placed in the position to determine the similarities of the designs. == Enforcement == To commence enforcement proceedings, it must be decided whether the Court will establish that the product is infringing the registered design, and whether the design is a valid registration. The registered design owner can only consider enforcement proceedings once a certificate of examination has been provided. To enforce design rights against an infringing designer the owner of the registered design must initiate the process of examination. This is in line with the certificate of examination. A design Registrar will not grant a certificate of examination if the design is found to be invalid, as there is no newness or distinctiveness to the design. The examination will consist of a comparison of the designs that existed prior to the lodgement of the design application. The test of the ordinary observer enables registered designs with significant similarities to have a broader scope of enforcement. In many jurisdictions it is common for a party threatened with infringement to be allowed to seek relief even though the design has not yet been certified. Action may be taken to protect the goodwill and reputation of the design holder. === Courts === Courts are an essential aspect in the enforcement of design infringement. Court-appointed experts are beneficial to enforcement proceedings, as a panel of assessors such as patent attorneys, designers and engineers enhances the limited technical knowledge a judge may have in a certain area.
The Courts will assess damages based on the loss of profit and reputation suffered by the design holder, and the profits made by the infringer. Case management is supported by the Court to provide the most economical and efficient way of bringing the infringement proceedings to trial. === Alternative dispute resolution === Alternative dispute resolution can be a more effective way of resolving design infringement, as court enforcement mechanisms are often not suited to the common disputes that arise. Design disputes can involve complex technical and commercial issues that may be better determined by an expert within alternative dispute resolution than by employing expert witnesses in the courts. Arbitration and mediation are suitable for resolving intellectual property disputes, as the most common disputes involve small claims for damages. For intellectual property disputes, alternative dispute resolution provides benefits including confidentiality, greater control over the process and a more neutral outcome. Alternative dispute resolution is also more cost effective than litigation, and is therefore more attractive to smaller companies and individuals without the resources, time and funding to resolve cases in court. == References ==
Wikipedia/Design_infringement
Design for testing or design for testability (DFT) consists of integrated circuit design techniques that add testability features to a hardware product design. The added features make it easier to develop and apply manufacturing tests to the designed hardware. The purpose of manufacturing tests is to validate that the product hardware contains no manufacturing defects that could adversely affect the product's correct functioning. Tests are applied at several steps in the hardware manufacturing flow and, for certain products, may also be used for hardware maintenance in the customer's environment. The tests are generally driven by test programs that execute using automatic test equipment (ATE) or, in the case of system maintenance, inside the assembled system itself. In addition to finding and indicating the presence of defects (i.e., the test fails), tests may be able to log diagnostic information about the nature of the encountered test fails. The diagnostic information can be used to locate the source of the failure. In other words, the response of vectors (patterns) from a good circuit is compared with the response of vectors (using the same patterns) from a DUT (device under test). If the response is the same or matches, the circuit is good. Otherwise, the circuit is not manufactured as intended. DFT plays an important role in the development of test programs and as an interface for test applications and diagnostics. Automatic test pattern generation (ATPG) is much easier if appropriate DFT rules and suggestions have been implemented. == History == DFT techniques have been used at least since the early days of electric/electronic data processing equipment. Early examples from the 1940s/50s are the switches and instruments that allowed an engineer to "scan" (i.e., selectively probe) the voltage/current at some internal nodes in an analog computer [analog scan]. DFT often is associated with design modifications that provide improved access to internal circuit elements such that the local internal state can be controlled (controllability) and/or observed (observability) more easily. The design modifications can be strictly physical in nature (e.g., adding a physical probe point to a net) and/or add active circuit elements to facilitate controllability/observability (e.g., inserting a multiplexer into a net). While controllability and observability improvements for internal circuit elements definitely are important for test, they are not the only type of DFT. Other guidelines, for example, deal with the electromechanical characteristics of the interface between the product under test and the test equipment. Examples are guidelines for the size, shape, and spacing of probe points, or the suggestion to add a high-impedance state to drivers attached to probed nets such that the risk of damage from back-driving is mitigated. Over the years the industry has developed and used a large variety of more or less detailed and more or less formal guidelines for desired and/or mandatory DFT circuit modifications. The common understanding of DFT in the context of electronic design automation (EDA) for modern microelectronics is shaped to a large extent by the capabilities of commercial DFT software tools as well as by the expertise and experience of a professional community of DFT engineers researching, developing, and using such tools. Much of the related body of DFT knowledge focuses on digital circuits while DFT for analog/mixed-signal circuits takes somewhat of a backseat. 
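As a rough illustration of the go/no-go comparison described in the introduction, the following Python sketch compares golden responses from a known-good circuit with responses captured from a device under test using the same patterns; the function name, patterns and values are hypothetical and not part of any ATE software.

    def compare_responses(golden, captured):
        """Return a fail log of (pattern index, expected, observed) mismatches;
        an empty log means the device under test passed."""
        return [(i, exp, obs)
                for i, (exp, obs) in enumerate(zip(golden, captured))
                if exp != obs]

    golden   = ["1010", "0111", "1100"]   # expected responses per test pattern
    captured = ["1010", "0101", "1100"]   # responses observed from the DUT
    fail_log = compare_responses(golden, captured)
    print("PASS" if not fail_log else f"FAIL {fail_log}")

In practice the fail log would also record tester cycle and channel information, which is what later diagnostics work from.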
== Objectives of DFT for microelectronics products == DFT affects and depends on the methods used for test development, test application, and diagnostics. Most tool-supported DFT practiced in the industry today, at least for digital circuits, is predicated on a structural test paradigm. Structural test makes no direct attempt to determine if the overall functionality of the circuit is correct. Instead, it tries to make sure that the circuit has been assembled correctly from some low-level building blocks as specified in a structural netlist. For example, are all specified logic gates present, operating correctly, and connected correctly? The stipulation is that if the netlist is correct, and structural testing has confirmed the correct assembly of the circuit elements, then the circuit should be functioning correctly. Note that this is very different from functional testing, which attempts to validate that the circuit under test functions according to its functional specification. This is closely related to the functional verification problem of determining if the circuit specified by the netlist meets the functional specifications, assuming it is built correctly. One benefit of the structural paradigm is that test generation can focus on testing a limited number of relatively simple circuit elements rather than having to deal with an exponentially exploding multiplicity of functional states and state transitions. While the task of testing a single logic gate at a time sounds simple, there is an obstacle to overcome. For today's highly complex designs, most gates are deeply embedded whereas the test equipment is only connected to the primary inputs/outputs (I/Os) and/or some physical test points. The embedded gates, hence, must be manipulated through intervening layers of logic. If the intervening logic contains state elements, then the issue of an exponentially exploding state space and state transition sequencing creates an unsolvable problem for test generation. To simplify test generation, DFT addresses the accessibility problem by removing the need for complicated state transition sequences when trying to control and/or observe what's happening at some internal circuit element. Depending on the DFT choices made during circuit design/implementation, the generation of structural tests for complex logic circuits can be more or less automated. One key objective of DFT methodologies, hence, is to allow designers to make trade-offs between the amount and type of DFT and the cost/benefit (time, effort, quality) of the test generation task. Another benefit is the ability to diagnose a circuit if a problem emerges later in its life: DFT adds features or provisions to the design so that the device can still be tested when a fault occurs during use. == Looking forward == One challenge for the industry is keeping up with the rapid advances in chip technology (I/O count/size/placement/spacing, I/O speed, internal circuit count/speed/power, thermal control, etc.) without being forced to continually upgrade the test equipment. Modern DFT techniques, hence, have to offer options that allow next-generation chips and assemblies to be tested on existing test equipment and/or reduce the requirements/cost for new test equipment. As a result, DFT techniques are continually being updated, such as incorporation of compression, in order to make sure that tester application times stay within certain bounds dictated by the cost target for the products under test.
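As a rough illustration of the structural test paradigm described above, the following Python sketch simulates a tiny two-gate netlist under the single stuck-at fault model and checks which faults a small pattern set detects; the netlist, net names and patterns are hypothetical, and real ATPG tools operate on full gate-level netlists rather than hand-written functions.

    def simulate(pattern, stuck=None):
        """Evaluate a tiny netlist (n1 = a AND b, out = n1 OR c) for one input
        pattern. 'stuck' optionally forces one net to a constant value,
        modelling a manufacturing defect, e.g. ("n1", 0) for n1 stuck-at-0."""
        def net(name, computed):
            # A stuck net ignores its computed value and stays at the fault value.
            return stuck[1] if stuck and name == stuck[0] else computed
        a = net("a", pattern[0])
        b = net("b", pattern[1])
        c = net("c", pattern[2])
        n1 = net("n1", a & b)
        return net("out", n1 | c)

    # Four test patterns; a single stuck-at fault is "detected" if at least one
    # pattern makes the faulty circuit's output differ from the good circuit's.
    patterns = [(1, 1, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
    for name in ("a", "b", "c", "n1", "out"):
        for val in (0, 1):
            fault = (name, val)
            detected = any(simulate(p, fault) != simulate(p) for p in patterns)
            print(fault, "detected" if detected else "NOT detected")

The point of the sketch is that the pattern set is judged against individual circuit elements and nets, not against the circuit's overall functional specification.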
== Diagnostics == Especially for advanced semiconductor technologies, it is expected some of the chips on each manufactured wafer contain defects that render them non-functional. The primary objective of testing is to find and separate those non-functional chips from the fully functional ones, meaning that one or more responses captured by the tester from a non-functional chip under test differ from the expected response. The percentage of chips that fail test, hence, should be closely related to the expected functional yield for that chip type. In reality, however, it is not uncommon that all chips of a new chip type arriving at the test floor for the first time fail (so-called zero-yield situation). In that case, the chips have to go through a debug process that tries to identify the reason for the zero-yield situation. In other cases, the test fall-out (percentage of test fails) may be higher than expected/acceptable or fluctuate suddenly. Again, the chips have to be subjected to an analysis process to identify the reason for the excessive test fall-out. In both cases, vital information about the nature of the underlying problem may be hidden in the way the chips fail during test. To facilitate better analysis, additional fail information beyond a simple pass/fail is collected into a fail log. The fail log typically contains information about when (e.g., tester cycle), where (e.g., at what tester channel), and how (e.g., logic value) the test failed. Diagnostics attempt to derive from the fail log at which logical/physical location inside the chip the problem most likely started. By running a large number of failures through the diagnostics process, called volume diagnostics, systematic failures can be identified. In some cases (e.g., printed circuit boards, multi-chip modules (MCMs), embedded or stand-alone memories) it may be possible to repair a failing circuit under test. For that purpose, diagnostics must quickly find the failing unit and create a work order for repairing or replacing the failing unit. DFT approaches can be more or less diagnostics-friendly. The related objectives of DFT are to facilitate or simplify failure data collection and diagnostics to an extent that can enable intelligent failure analysis (FA) sample selection, as well as improve the cost, accuracy, speed, and throughput of diagnostics and FA. == Scan design == The most common method for delivering test data from chip inputs to internal circuits under test (CUTs, for short), and observing their outputs, is called scan-design. In scan design, registers (flip-flops or latches) in the design are connected in one or more scan chains, which are used to gain access to internal nodes of the chip. Test patterns are shifted in via the scan chain(s), functional clock signals are pulsed to test the circuit during the "capture cycle(s)", and the results are then shifted out to chip output pins and compared against the expected "good machine" results. Straightforward application of scan techniques can result in large vector sets with corresponding long tester time and memory requirements. Test compression techniques address this problem, by decompressing the scan input on chip and compressing the test output. Large gains are possible since any particular test vector usually only needs to set and/or examine a small fraction of the scan chain bits. The output of a scan design may be provided in forms such as Serial Vector Format (SVF), to be executed by test equipment. 
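As a rough illustration of the shift-in/capture/shift-out sequence described above, the following Python sketch models one scan test on a four-bit scan chain; the capture logic, chain length and pattern are hypothetical and greatly simplified compared with real scan implementations.

    def capture_logic(bits):
        """Stand-in for the combinational logic between the flip-flops: each
        flip-flop captures the XOR of its own value and its right neighbour's."""
        n = len(bits)
        return [bits[i] ^ bits[(i + 1) % n] for i in range(n)]

    def scan_test(stimulus, expected):
        chain = list(stimulus)             # scan mode: shift the stimulus in
        chain = capture_logic(chain)       # functional mode: one capture cycle
        response = list(chain)             # scan mode: shift the response out
        return response == list(expected)  # compare with the "good machine" value

    stimulus = [1, 0, 1, 1]
    expected = capture_logic(stimulus)     # response a defect-free chip would give
    print("PASS" if scan_test(stimulus, expected) else "FAIL")

A real test applies many such stimulus/response pairs, and test compression works by generating the shifted-in data on chip and compacting the shifted-out data so the tester stores far fewer bits than the chains contain.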
== Debug using DFT features == In addition to being useful for manufacturing "go/no go" testing, scan chains can also be used to "debug" chip designs. In this context, the chip is exercised in normal "functional mode" (for example, a computer or mobile phone chip might execute assembly language instructions). At any time, the chip clock can be stopped, and the chip re-configured into "test mode". At this point, the full internal state can be dumped out, or set to any desired values, by use of the scan chains. Another use of scan to aid debugging consists of scanning an initial state into all memory elements and then going back to functional mode to perform system debugging. The advantage is that the system is brought to a known state without going through many clock cycles. This use of scan chains, along with the clock control circuits, forms a related sub-discipline of logic design called design for debug or design for debuggability. == See also == BIST Design for X Fault grading Iddq testing JTAG == References == == External links == Boundary-Scan Chain Design Board Level Design Design for Testability Guidelines
Wikipedia/Design_for_testing
In electrical engineering, the process of circuit design can cover systems ranging from complex electronic systems down to the individual transistors within an integrated circuit. For simple circuits, one person can often carry out the design without a planned or structured process. Still, teams of designers following a systematic approach with intelligently guided computer simulation are becoming increasingly common for more complex designs. In integrated circuit design automation, the term "circuit design" often refers to the step of the design cycle which outputs the schematics of the integrated circuit. Typically this is the step between logic design and physical design. == Process == Traditional circuit design usually involves several stages. Sometimes, a design specification is written after liaising with the customer. A technical proposal may be written to meet the requirements of the customer specification. The next stage involves synthesising on paper a schematic circuit diagram, an abstract electrical or electronic circuit that will meet the specifications. A calculation of the component values to meet the operating specifications under specified conditions should be made. Simulations may be performed to verify the correctness of the design. A breadboard or other prototype version of the design may be built for testing against the specification. This stage may involve alterations to the circuit to achieve compliance. A choice as to a method of construction and all the parts and materials to be used must be made. There is a presentation of component and layout information to draughtspersons and layout and mechanical engineers for prototype production. This is followed by the testing or type-testing of several prototypes to ensure compliance with customer requirements. Usually, there is a signing and approval of the final manufacturing drawings, and there may be post-design services (obsolescence of components, etc.). == Specification == The process of circuit design begins with the specification, which states the functionality that the finished design must provide but does not indicate how it is to be achieved. The initial specification is a technically detailed description of what the customer wants the finished circuit to achieve and can include a variety of electrical requirements, such as what signals the circuit will receive, what signals it must output, what power supplies are available and how much power it is permitted to consume. The specification can (and normally does) also set some of the physical parameters that the design must meet, such as size, weight, moisture resistance, temperature range, thermal output, vibration tolerance, and acceleration tolerance. As the design process progresses, the designer(s) will frequently return to the specification and alter it to take account of the progress of the design. This can involve tightening specifications that the customer has supplied and adding tests that the circuit must pass to be accepted. These additional specifications will often be used in the verification of a design. Changes that conflict with or modify the customer's original specifications will almost always have to be approved by the customer before they can be acted upon. Correctly identifying the customer's needs helps to avoid a condition known as 'design creep', which occurs when realistic initial expectations are absent or when the designer fails to communicate fully with the client during the design process.
It can be defined in terms of its results; "at one extreme is a circuit with more functionality than necessary, and at the other is a circuit having an incorrect functionality". Nevertheless, some changes can be expected. It is good practice to keep options open for as long as possible because it's easier to remove spare elements from the circuit later on than it is to put them in. == Design == The design process involves moving from the specification at the start to a plan that contains all the information needed for the design to be physically constructed at the end; this typically happens by passing through several stages, although for a straightforward circuit it may be done in a single step. The process usually begins with the conversion of the specification into a block diagram of the various functions that the circuit must perform. At this stage the contents of each block are not considered, only what each block must do; this is sometimes referred to as a "black box" design. This approach allows the possibly highly complex task to be broken into smaller tasks, either tackled in sequence or divided amongst members of a design team. Each block is then considered in more detail, still at an abstract stage, but with a lot more focus on the details of the electrical functions to be provided. At this or later stages, it is common to require a large amount of research or mathematical modeling into what is and is not feasible to achieve. The results of this research may be fed back into earlier stages of the design process; for example, if it turns out one of the blocks cannot be designed within the parameters set for it, it may be necessary to alter other blocks instead. At this point, it is also common to start considering both how to demonstrate that the design does meet the specifications, and how it is to be tested (which can include self-diagnostic tools). Finally, the individual circuit components are chosen to carry out each function in the overall design; at this stage, the physical layout and electrical connections of each component are also decided, this layout commonly taking the form of artwork for the production of a printed circuit board or integrated circuit. This stage is typically highly time-consuming because of the vast array of choices available. A practical constraint on the design at this stage is standardization: a certain value of a component may be calculated for use in some location in a circuit, but if a component of that value cannot be purchased from a supplier, then the problem has still not been solved. To avoid this, a certain amount of 'catalog engineering' can be applied to solve the more mundane tasks within an overall design. In general, the circuit design flow includes, but is not limited to, architecture scope definition, materials selection, schematic capture, PCB layout design (including power and signal integrity considerations), and test and validation. === Costs === Generally, the cost of designing circuits is directly tied to the final circuits' complexity. The greater the complexity (quantity of components and design novelty), the more hours of a skilled engineer's time will be necessary to create a functional product. The process can be tedious, as minute details or features, such as taking into account the effects of modifying transistor sizes or codecs, could take any amount of time, materials and manpower to get right.
In the world of flexible electronics, replacing the widely used polyimide substrates with materials like PEN or PET could reduce costs by factors of 5-10. Costs for designing a circuit are almost always far higher than production costs per unit, as the cost of production and function of the circuit depends greatly on the design of the circuit. Although typical PCB production methods involve subtractive manufacturing, there are methods that use an additive manufacturing process, such as using a 3D printer to "print" a PCB. This method is thought to cost less than subtractive manufacturing and eliminates the need for waste management altogether. == Verification and testing == Once a circuit has been designed, it must be both verified and tested. Verification is the process of going through each stage of a design and ensuring that it will do what the specification requires it to do. This is frequently a highly mathematical process and can involve large-scale computer simulations of the design. In any complicated design, it is very likely that problems will be found at this stage, and fixing them may require a large amount of the design work to be redone. Testing is the real-world counterpart to verification; testing involves physically building at least a prototype of the design and then (in combination with the test procedures in the specification or added to it) checking that the circuit does what it was designed to do. == Design Software == In software such as Visual DSD, the logic circuit of a complement circuit is implemented by compiling program code. Such software programs help produce cheaper, more efficient circuits of all types. Functional simulations can be implemented to verify the logic functions corresponding to logic expressions in proposed circuits, with the architectures modeled in the VHDL language. Using such a language can lead to more efficient circuits that are not only cheaper but also longer lasting. These are only two of many design software packages that help individuals plan their circuits for production. == Prototyping == Prototyping plays a significant role in the complex process of circuit design. This iterative process involves continuous refinement and correction of errors. The task of circuit design is demanding and requires meticulous attention to detail to avoid errors. Circuit designers are required to conduct multiple tests to ensure the efficiency and safety of their designs before they are deemed suitable for consumer use. Prototyping is an integral part of electrical work due to its precise and meticulous nature. The absence of prototyping could potentially lead to errors in the final product. Circuit designers, who are compensated for their expertise in creating electrical circuits, bear the responsibility of ensuring the safety of consumers who purchase and use these circuits at home. The risks associated with neglecting the prototyping process and releasing a flawed electrical circuit are significant. These risks include the potential for fires and overheated wires, which could result in burns or severe injuries to unsuspecting individuals. == Results == Every electrical circuit starts with a circuit board simulation showing how the parts will be put together and how the circuit will work virtually. A blueprint is the drawing of the technical design and final product.
After all this is done and the blueprint is used to put the circuit together, the result is a working electrical circuit. Such circuits can run anything from a vacuum cleaner to a large TV in a movie theater. All of this takes a long time and a level of skill that not everyone can acquire. Electrical circuits underlie most of the things we rely on in our everyday lives. === Documentation === Any commercial design will normally also include an element of documentation; the precise nature of this documentation will vary according to the size and complexity of the circuit and the country in which it is to be used. As a bare minimum, the documentation will normally include at least the specification and testing procedures for the design and a statement of compliance with current regulations. In the EU this last item will normally take the form of a CE Declaration listing the European directives complied with and naming an individual responsible for compliance. == Software == Altium Designer EasyEDA gEDA KiCad OrCAD NI Multisim SPICE == See also == Advanced Design System Circuit design language Configuration design Electrical system design Electronic circuit design Electronic design automation Espresso heuristic logic minimizer GDSII Integrated circuit design List of EDA companies Mesh analysis Open Artwork System Interchange Standard == References == === Sources === "Does Your Design Meet Its Specs? Introduction to Hardware Design Verification | What Is Design Verification?". InformIT. 19 August 2005. Diagram of possible design process US guide on CE marking UK guide on CE marking A beginner's tutorial on understanding, analysing, and designing basic electronic circuits Vladimir Gurevich, Electronic Devices on Discrete Components for Industrial and Power Engineering, CRC Press, London – New York, 2008, 418 p., ISBN 9781420069822
Wikipedia/Circuit_design
Intelligent design (ID) is a pseudoscientific argument for the existence of God, presented by its proponents as "an evidence-based scientific theory about life's origins". Proponents claim that "certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection." ID is a form of creationism that lacks empirical support and offers no testable or tenable hypotheses, and is therefore not science. The leading proponents of ID are associated with the Discovery Institute, a Christian, politically conservative think tank based in the United States. Although the phrase intelligent design had featured previously in theological discussions of the argument from design, its first publication in its present use as an alternative term for creationism was in Of Pandas and People, a 1989 creationist textbook intended for high school biology classes. The term was substituted into drafts of the book, directly replacing references to creation science and creationism, after the 1987 Supreme Court's Edwards v. Aguillard decision barred the teaching of creation science in public schools on constitutional grounds. From the mid-1990s, the intelligent design movement (IDM), supported by the Discovery Institute, advocated inclusion of intelligent design in public school biology curricula. This led to the 2005 Kitzmiller v. Dover Area School District trial, which found that intelligent design was not science, that it "cannot uncouple itself from its creationist, and thus religious, antecedents", and that the public school district's promotion of it therefore violated the Establishment Clause of the First Amendment to the United States Constitution. ID presents two main arguments against evolutionary explanations: irreducible complexity and specified complexity, asserting that certain biological and informational features of living things are too complex to be the result of natural selection. Detailed scientific examination has rebutted several examples for which evolutionary explanations are claimed to be impossible. ID seeks to challenge the methodological naturalism inherent in modern science, though proponents concede that they have yet to produce a scientific theory. As a positive argument against evolution, ID proposes an analogy between natural systems and human artifacts, a version of the theological argument from design for the existence of God. ID proponents then conclude by analogy that the complex features, as defined by ID, are evidence of design. Critics of ID find a false dichotomy in the premise that evidence against evolution constitutes evidence for design. == History == === Origin of the concept === In 1910, evolution was not a topic of major religious controversy in America, but in the 1920s, the fundamentalist–modernist controversy in theology resulted in fundamentalist Christian opposition to teaching evolution and resulted in the origins of modern creationism. As a result, teaching of evolution was effectively suspended in U.S. public schools until the 1960s, and when evolution was then reintroduced into the curriculum, there was a series of court cases in which attempts were made to get creationism taught alongside evolution in science classes. Young Earth creationists (YECs) promoted "creation science" as "an alternative scientific explanation of the world in which we live". This frequently invoked the argument from design to explain complexity in nature as supposedly demonstrating the existence of God. 
The argument from design, also known as the teleological argument or "argument from intelligent design", has been presented by theologists for centuries. Thomas Aquinas presented ID in his fifth proof of God's existence as a syllogism. In 1802, William Paley's Natural Theology presented examples of intricate purpose in organisms. His version of the watchmaker analogy argued that a watch has evidently been designed by a craftsman and that it is supposedly just as evident that the complexity and adaptation seen in nature must have been designed. He went on to argue that the perfection and diversity of these designs supposedly shows the designer to be omnipotent and that this can supposedly only be the Christian god. Like "creation science", intelligent design centers on Paley's religious argument from design, but while Paley's natural theology was open to deistic design through God-given laws, intelligent design seeks scientific confirmation of repeated supposedly miraculous interventions in the history of life. "Creation science" prefigured the intelligent design arguments of irreducible complexity, even featuring the bacterial flagellum. In the United States, attempts to introduce "creation science" into schools led to court rulings that it is religious in nature and thus cannot be taught in public school science classrooms. Intelligent design is also presented as science and shares other arguments with "creation science" but avoids literal Biblical references to such topics as the biblical flood story or using Bible verses to estimate the age of the Earth. Barbara Forrest writes that the intelligent design movement began in 1984 with the book The Mystery of Life's Origin: Reassessing Current Theories, co-written by the creationist and chemist Charles B. Thaxton and two other authors and published by Jon A. Buell's Foundation for Thought and Ethics. In March 1986, Stephen C. Meyer published a review of this book, discussing how information theory could suggest that messages transmitted by DNA in the cell show "specified complexity" and must have been created by an intelligent agent. He also argued that science is based upon "foundational assumptions" of naturalism that were as much a matter of faith as those of "creation theory". In November of that year, Thaxton described his reasoning as a more sophisticated form of Paley's argument from design. At a conference that Thaxton held in 1988 ("Sources of Information Content in DNA"), he said that his intelligent cause view was compatible with both metaphysical naturalism and supernaturalism. Intelligent design avoids identifying or naming the intelligent designer—it merely states that one (or more) must exist—but leaders of the movement have said the designer is the Christian God. Whether this lack of specificity about the designer's identity in public discussions is a genuine feature of the concept – or just a posture taken to avoid alienating those who would separate religion from the teaching of science – has been a matter of great debate between supporters and critics of intelligent design. The Kitzmiller v. Dover Area School District court ruling held the latter to be the case. === Origin of the term === Since the Middle Ages, discussion of the religious "argument from design" or "teleological argument" in theology, with its concept of "intelligent design", has persistently referred to the theistic Creator God. 
Although ID proponents chose this provocative label for their proposed alternative to evolutionary explanations, they have de-emphasized their religious antecedents and denied that ID is natural theology, while still presenting ID as supporting the argument for the existence of God. While intelligent design proponents have pointed out past examples of the phrase intelligent design that they said were not creationist and faith-based, they have failed to show that these usages had any influence on those who introduced the label in the intelligent design movement. Variations on the phrase appeared in Young Earth creationist publications: a 1967 book co-written by Percival Davis referred to "design according to which basic organisms were created". In 1970, A. E. Wilder-Smith published The Creation of Life: A Cybernetic Approach to Evolution. The book defended Paley's design argument with computer calculations of the improbability of genetic sequences, which he said could not be explained by evolution but required "the abhorred necessity of divine intelligent activity behind nature", and that "the same problem would be expected to beset the relationship between the designer behind nature and the intelligently designed part of nature known as man." In a 1984 article as well as in his affidavit to Edwards v. Aguillard, Dean H. Kenyon defended creation science by stating that "biomolecular systems require intelligent design and engineering know-how", citing Wilder-Smith. Creationist Richard B. Bliss used the phrase "creative design" in Origins: Two Models: Evolution, Creation (1976), and in Origins: Creation or Evolution (1988) wrote that "while evolutionists are trying to find non-intelligent ways for life to occur, the creationist insists that an intelligent design must have been there in the first place." ==== Of Pandas and People ==== The most common modern use of the words "intelligent design" as a term intended to describe a field of inquiry began after the United States Supreme Court ruled in June 1987 in the case of Edwards v. Aguillard that it is unconstitutional for a state to require the teaching of creationism in public school science curricula. A Discovery Institute report says that Charles B. Thaxton, editor of Pandas, had picked the phrase up from a NASA scientist. In two successive 1987 drafts of the book, over one hundred uses of the root word "creation", such as "creationism" and "Creation Science", were changed, almost without exception, to "intelligent design", while "creationists" was changed to "design proponents" or, in one instance, "cdesign proponentsists" [sic]. In June 1988, Thaxton held a conference titled "Sources of Information Content in DNA" in Tacoma, Washington. Stephen C. Meyer was at the conference, and later recalled that "The term intelligent design came up..." In December 1988 Thaxton decided to use the label "intelligent design" for his new creationist movement. Of Pandas and People was published in 1989, and in addition to including all the current arguments for ID, was the first book to make systematic use of the terms "intelligent design" and "design proponents" as well as the phrase "design theory", defining the term intelligent design in a glossary and representing it as not being creationism. It thus represents the start of the modern intelligent design movement. "Intelligent design" was the most prominent of around fifteen new terms it introduced as a new lexicon of creationist terminology to oppose evolution without using religious language. 
It was the first place where the phrase "intelligent design" appeared in its primary present use, as stated both by its publisher Jon A. Buell, and by William A. Dembski in his expert witness report for Kitzmiller v. Dover Area School District. The National Center for Science Education (NCSE) has criticized the book for presenting all of the basic arguments of intelligent design proponents and being actively promoted for use in public schools before any research had been done to support these arguments. Although presented as a scientific textbook, philosopher of science Michael Ruse considers the contents "worthless and dishonest". An American Civil Liberties Union lawyer described it as a political tool aimed at students who did not "know science or understand the controversy over evolution and creationism". One of the authors of the science framework used by California schools, Kevin Padian, condemned it for its "sub-text", "intolerance for honest science" and "incompetence". == Concepts == === Irreducible complexity === The term "irreducible complexity" was introduced by biochemist Michael Behe in his 1996 book Darwin's Black Box, though he had already described the concept in his contributions to the 1993 revised edition of Of Pandas and People. Behe defines it as "a single system which is composed of several well-matched interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning". Behe uses the analogy of a mousetrap to illustrate this concept. A mousetrap consists of several interacting pieces—the base, the catch, the spring and the hammer—all of which must be in place for the mousetrap to work. Removal of any one piece destroys the function of the mousetrap. Intelligent design advocates assert that natural selection could not create irreducibly complex systems, because the selectable function is present only when all parts are assembled. Behe argued that irreducibly complex biological mechanisms include the bacterial flagellum of E. coli, the blood clotting cascade, cilia, and the adaptive immune system. Critics point out that the irreducible complexity argument assumes that the necessary parts of a system have always been necessary and therefore could not have been added sequentially. They argue that something that is at first merely advantageous can later become necessary as other components change. Furthermore, they argue, evolution often proceeds by altering preexisting parts or by removing them from a system, rather than by adding them. This is sometimes called the "scaffolding objection" by an analogy with scaffolding, which can support an "irreducibly complex" building until it is complete and able to stand on its own. In the case of Behe's mousetrap analogy, it has been shown that a mousetrap can be created with increasingly fewer parts and that even a single part is sufficient. Behe has acknowledged using "sloppy prose", and that his "argument against Darwinism does not add up to a logical proof." Irreducible complexity has remained a popular argument among advocates of intelligent design; in the Dover trial, the court held that "Professor Behe's claim for irreducible complexity has been refuted in peer-reviewed research papers and has been rejected by the scientific community at large." === Specified complexity === In 1986, Charles B. 
Thaxton, a physical chemist and creationist, used the term "specified complexity" from information theory when claiming that messages transmitted by DNA in the cell were specified by intelligence, and must have originated with an intelligent agent. The intelligent design concept of "specified complexity" was developed in the 1990s by mathematician, philosopher, and theologian William A. Dembski. Dembski states that when something exhibits specified complexity (i.e., is both complex and "specified", simultaneously), one can infer that it was produced by an intelligent cause (i.e., that it was designed) rather than being the result of natural processes. He provides the following examples: "A single letter of the alphabet is specified without being complex. A long sentence of random letters is complex without being specified. A Shakespearean sonnet is both complex and specified." He states that details of living things can be similarly characterized, especially the "patterns" of molecular sequences in functional biological molecules such as DNA. Dembski defines complex specified information (CSI) as anything with a less than 1 in 10^150 chance of occurring by (natural) chance. Critics say that this renders the argument a tautology: complex specified information cannot occur naturally because Dembski has defined it thus, so the real question becomes whether or not CSI actually exists in nature. The conceptual soundness of Dembski's specified complexity/CSI argument has been discredited in the scientific and mathematical communities. Specified complexity has yet to be shown to have wide applications in other fields, as Dembski asserts. John Wilkins and Wesley R. Elsberry characterize Dembski's "explanatory filter" as eliminative because it eliminates explanations sequentially: first regularity, then chance, finally defaulting to design. They argue that this procedure is flawed as a model for scientific inference because the asymmetric way it treats the different possible explanations renders it prone to making false conclusions. Richard Dawkins, evolutionary biologist and religion critic, argues in The God Delusion (2006) that allowing for an intelligent designer to account for unlikely complexity only postpones the problem, as such a designer would need to be at least as complex. Other scientists have argued that evolution through selection is better able to explain the observed complexity, as is evident from the use of selective evolution to design certain electronic, aeronautic and automotive systems that are considered problems too complex for human "intelligent designers".
Thus, proponents argue, an intelligent designer of life was needed to ensure that the requisite features were present to achieve that particular outcome. Scientists have generally responded that these arguments are poorly supported by existing evidence. Victor J. Stenger and other critics say both intelligent design and the weak form of the anthropic principle are essentially a tautology; in his view, these arguments amount to the claim that life is able to exist because the Universe is able to support life. The claim of the improbability of a life-supporting universe has also been criticized as an argument by lack of imagination for assuming no other forms of life are possible: life as we know it might not exist if things were different, but a different sort of life might exist in its place. A number of critics also suggest that many of the stated variables appear to be interconnected and that calculations made by mathematicians and physicists suggest that the emergence of a universe similar to ours is quite probable. === Intelligent designer === The contemporary intelligent design movement formulates its arguments in secular terms and intentionally avoids identifying the intelligent agent (or agents) they posit. Although they do not state that God is the designer, the designer is often implicitly hypothesized to have intervened in a way that only a god could intervene. Dembski, in The Design Inference (1998), speculates that an alien culture could fulfill these requirements. Of Pandas and People proposes that SETI illustrates an appeal to intelligent design in science. In 2000, philosopher of science Robert T. Pennock suggested the Raëlian UFO religion as a real-life example of an extraterrestrial intelligent designer view that "make[s] many of the same bad arguments against evolutionary theory as creationists". The authoritative description of intelligent design, however, explicitly states that the Universe displays features of having been designed. Acknowledging the paradox, Dembski concludes that "no intelligent agent who is strictly physical could have presided over the origin of the universe or the origin of life." The leading proponents have made statements to their supporters that they believe the designer to be the Christian God, to the exclusion of all other religions. Beyond the debate over whether intelligent design is scientific, a number of critics argue that existing evidence makes the design hypothesis appear unlikely, irrespective of its status in the world of science. For example, Jerry Coyne asks why a designer would "give us a pathway for making vitamin C, but then destroy it by disabling one of its enzymes" (see pseudogene) and why a designer would not "stock oceanic islands with reptiles, mammals, amphibians, and freshwater fish, despite the suitability of such islands for these species". Coyne also points to the fact that "the flora and fauna on those islands resemble that of the nearest mainland, even when the environments are very different" as evidence that species were not placed there by a designer. Previously, in Darwin's Black Box, Behe had argued that we are simply incapable of understanding the designer's motives, so such questions cannot be answered definitively. Odd designs could, for example, "...have been placed there by the designer for a reason—for artistic reasons, for variety, to show off, for some as-yet-undetected practical purpose, or for some unguessable reason—or they might not." 
Coyne responds that in light of the evidence, "either life resulted not from intelligent design, but from evolution; or the intelligent designer is a cosmic prankster who designed everything to make it look as though it had evolved." Intelligent design proponents such as Paul Nelson avoid the problem of poor design in nature by insisting that we have simply failed to understand the perfection of the design. Behe cites Paley as his inspiration, but he differs from Paley's expectation of a perfect Creation and proposes that designers do not necessarily produce the best design they can. Behe suggests that, like a parent not wanting to spoil a child with extravagant toys, the designer can have multiple motives for not giving priority to excellence in engineering. He says that "Another problem with the argument from imperfection is that it critically depends on a psychoanalysis of the unidentified designer. Yet the reasons that a designer would or would not do anything are virtually impossible to know unless the designer tells you specifically what those reasons are." This reliance on inexplicable motives of the designer makes intelligent design scientifically untestable. Retired UC Berkeley law professor, author and intelligent design advocate Phillip E. Johnson puts forward a core definition that the designer creates for a purpose, giving the example that in his view AIDS was created to punish immorality and is not caused by HIV, but such motives cannot be tested by scientific methods. Asserting the need for a designer of complexity also raises the question "What designed the designer?" Intelligent design proponents say that the question is irrelevant to or outside the scope of intelligent design. Richard Wein counters that "...scientific explanations often create new unanswered questions. But, in assessing the value of an explanation, these questions are not irrelevant. They must be balanced against the improvements in our understanding which the explanation provides. Invoking an unexplained being to explain the origin of other beings (ourselves) is little more than question-begging. The new question raised by the explanation is as problematic as the question which the explanation purports to answer." Richard Dawkins sees the assertion that the designer does not need to be explained as a thought-terminating cliché. In the absence of observable, measurable evidence, the question "What designed the designer?" leads to an infinite regression from which intelligent design proponents can only escape by resorting to religious creationism or logical contradiction. == Movement == The intelligent design movement is a direct outgrowth of the creationism of the 1980s. The scientific and academic communities, along with a U.S. federal court, view intelligent design as either a form of creationism or as a direct descendant that is closely intertwined with traditional creationism; and several authors explicitly refer to it as "intelligent design creationism". The movement is headquartered in the Center for Science and Culture, established in 1996 as the creationist wing of the Discovery Institute to promote a religious agenda calling for broad social, academic and political changes. The Discovery Institute's intelligent design campaigns have been staged primarily in the United States, although efforts have been made in other countries to promote intelligent design. Leaders of the movement say intelligent design exposes the limitations of scientific orthodoxy and of the secular philosophy of naturalism. 
Intelligent design proponents allege that science should not be limited to naturalism and should not demand the adoption of a naturalistic philosophy that dismisses out-of-hand any explanation that includes a supernatural cause. The overall goal of the movement is to "reverse the stifling dominance of the materialist worldview" represented by the theory of evolution in favor of "a science consonant with Christian and theistic convictions". Phillip E. Johnson stated that the goal of intelligent design is to cast creationism as a scientific concept. All leading intelligent design proponents are fellows or staff of the Discovery Institute and its Center for Science and Culture. Nearly all intelligent design concepts and the associated movement are the products of the Discovery Institute, which guides the movement and follows its wedge strategy while conducting its "teach the controversy" campaign and their other related programs. Leading intelligent design proponents have made conflicting statements regarding intelligent design. In statements directed at the general public, they say intelligent design is not religious; when addressing conservative Christian supporters, they state that intelligent design has its foundation in the Bible. Recognizing the need for support, the Institute affirms its Christian, evangelistic orientation: Alongside a focus on influential opinion-makers, we also seek to build up a popular base of support among our natural constituency, namely, Christians. We will do this primarily through apologetics seminars. We intend these to encourage and equip believers with new scientific evidences that support the faith, as well as to "popularize" our ideas in the broader culture. Barbara Forrest, an expert who has written extensively on the movement, describes this as being due to the Discovery Institute's obfuscating its agenda as a matter of policy. She has written that the movement's "activities betray an aggressive, systematic agenda for promoting not only intelligent design creationism, but the religious worldview that undergirds it." === Religion and leading proponents === Although arguments for intelligent design by the intelligent design movement are formulated in secular terms and intentionally avoid positing the identity of the designer, the majority of principal intelligent design advocates are publicly religious Christians who have stated that, in their view, the designer proposed in intelligent design is the Christian conception of God. Stuart Burgess, Phillip E. Johnson, William A. Dembski, and Stephen C. Meyer are evangelical Protestants; Michael Behe is a Roman Catholic; Paul Nelson supports young Earth creationism; and Jonathan Wells is a member of the Unification Church. Non-Christian proponents include David Klinghoffer, who is Jewish, Michael Denton and David Berlinski, who are agnostic, and Muzaffar Iqbal, a Pakistani-Canadian Muslim. Phillip E. Johnson has stated that cultivating ambiguity by employing secular language in arguments that are carefully crafted to avoid overtones of theistic creationism is a necessary first step for ultimately reintroducing the Christian concept of God as the designer. Johnson explicitly calls for intelligent design proponents to obfuscate their religious motivations so as to avoid having intelligent design identified "as just another way of packaging the Christian evangelical message." Johnson emphasizes that "...the first thing that has to be done is to get the Bible out of the discussion. 
...This is not to say that the biblical issues are unimportant; the point is rather that the time to address them will be after we have separated materialist prejudice from scientific fact." The strategy of deliberately disguising the religious intent of intelligent design has been described by William A. Dembski in The Design Inference. In this work, Dembski lists a god or an "alien life force" as two possible options for the identity of the designer; however, in his book Intelligent Design: The Bridge Between Science and Theology (1999), Dembski states: Christ is indispensable to any scientific theory, even if its practitioners don't have a clue about him. The pragmatics of a scientific theory can, to be sure, be pursued without recourse to Christ. But the conceptual soundness of the theory can in the end only be located in Christ. Dembski also stated, "ID is part of God's general revelation ... Not only does intelligent design rid us of this ideology [materialism], which suffocates the human spirit, but, in my personal experience, I've found that it opens the path for people to come to Christ." Both Johnson and Dembski cite the Bible's Gospel of John as the foundation of intelligent design. Barbara Forrest contends such statements reveal that leading proponents see intelligent design as essentially religious in nature, not merely a scientific concept that has implications with which their personal religious beliefs happen to coincide. She writes that the leading proponents of intelligent design are closely allied with the ultra-conservative Christian Reconstructionism movement. She lists connections of (current and former) Discovery Institute Fellows Phillip E. Johnson, Charles B. Thaxton, Michael Behe, Richard Weikart, Jonathan Wells and Francis J. Beckwith to leading Christian Reconstructionist organizations, and the extent of the funding provided the Institute by Howard Ahmanson, Jr., a leading figure in the Reconstructionist movement. === Reaction from other creationist groups === Not all creationist organizations have embraced the intelligent design movement. According to Thomas Dixon, "Religious leaders have come out against ID too. An open letter affirming the compatibility of Christian faith and the teaching of evolution, first produced in response to controversies in Wisconsin in 2004, has now been signed by over ten thousand clergy from different Christian denominations across America." Hugh Ross of Reasons to Believe, a proponent of Old Earth creationism, believes that the efforts of intelligent design proponents to divorce the concept from Biblical Christianity make its hypothesis too vague. In 2002, he wrote: "Winning the argument for design without identifying the designer yields, at best, a sketchy origins model. Such a model makes little if any positive impact on the community of scientists and other scholars. ... the time is right for a direct approach, a single leap into the origins fray. Introducing a biblically based, scientifically verifiable creation model represents such a leap." Likewise, two of the most prominent YEC organizations in the world have attempted to distinguish their views from those of the intelligent design movement. Henry M. Morris of the Institute for Creation Research (ICR) wrote, in 1999, that ID, "even if well-meaning and effectively articulated, will not work! It has often been tried in the past and has failed, and it will fail today. The reason it won't work is because it is not the Biblical method." 
According to Morris: "The evidence of intelligent design ... must be either followed by or accompanied by a sound presentation of true Biblical creationism if it is to be meaningful and lasting." In 2002, Carl Wieland, then of Answers in Genesis (AiG), criticized design advocates who, though well-intentioned, "'left the Bible out of it'" and thereby unwittingly aided and abetted the modern rejection of the Bible. Wieland explained that "AiG's major 'strategy' is to boldly, but humbly, call the church back to its Biblical foundations ... [so] we neither count ourselves a part of this movement nor campaign against it." === Reaction from the scientific community === The unequivocal consensus in the scientific community is that intelligent design is not science and has no place in a science curriculum. The U.S. National Academy of Sciences has stated that "creationism, intelligent design, and other claims of supernatural intervention in the origin of life or of species are not science because they are not testable by the methods of science." The U.S. National Science Teachers Association and the American Association for the Advancement of Science have termed it pseudoscience. Others in the scientific community have denounced its tactics, accusing the ID movement of manufacturing false attacks against evolution, of engaging in misinformation and misrepresentation about science, and marginalizing those who teach it. More recently, in September 2012, Bill Nye warned that creationist views threaten science education and innovations in the United States. In 2001, the Discovery Institute published advertisements under the heading "A Scientific Dissent From Darwinism", with the claim that listed scientists had signed this statement expressing skepticism: We are skeptical of claims for the ability of random mutation and natural selection to account for the complexity of life. Careful examination of the evidence for Darwinian theory should be encouraged. The ambiguous statement did not exclude other known evolutionary mechanisms, and most signatories were not scientists in relevant fields, but starting in 2004 the Institute claimed the increasing number of signatures indicated mounting doubts about evolution among scientists. The statement formed a key component of Discovery Institute campaigns to present intelligent design as scientifically valid by claiming that evolution lacks broad scientific support, with Institute members continuing to cite the list through at least 2011. As part of a strategy to counter these claims, scientists organised Project Steve, which gained more signatories named Steve (or variants) than the Institute's petition, and a counter-petition, "A Scientific Support for Darwinism", which quickly gained similar numbers of signatories. === Polls === Several surveys were conducted prior to the December 2005 decision in Kitzmiller v. Dover School District, which sought to determine the level of support for intelligent design among certain groups. According to a 2005 Harris poll, 10% of adults in the United States viewed human beings as "so complex that they required a powerful force or intelligent being to help create them." Although Zogby polls commissioned by the Discovery Institute show more support, these polls suffer from considerable flaws, such as having a low response rate (248 out of 16,000), being conducted on behalf of an organization with an expressed interest in the outcome of the poll, and containing leading questions. 
A series of Gallup polls in the United States from 1982 through 2014 on "Evolution, Creationism, Intelligent Design" found that support for "human beings have developed over millions of years from less advanced forms of life, but God guided the process" ranged between 31% and 40%, support for "God created human beings in pretty much their present form at one time within the last 10,000 years or so" varied from 40% to 47%, and support for "human beings have developed over millions of years from less advanced forms of life, but God had no part in the process" varied from 9% to 19%. The polls also noted answers to a series of more detailed questions. The 2017 Gallup creationism survey found that 38% of adults in the United States hold the view that "God created humans in their present form at one time within the last 10,000 years" when asked for their views on the origin and development of human beings, which was noted as being at the lowest level in 35 years. === Allegations of discrimination against ID proponents === There have been allegations that ID proponents have met discrimination, such as being refused tenure or being harshly criticized on the Internet. In the documentary film Expelled: No Intelligence Allowed, released in 2008, host Ben Stein presents five such cases. The film contends that the mainstream science establishment, in a "scientific conspiracy to keep God out of the nation's laboratories and classrooms", suppresses academics who believe they see evidence of intelligent design in nature or criticize evidence of evolution. Investigation into these allegations turned up alternative explanations for perceived persecution. The film portrays intelligent design as motivated by science, rather than religion, though it does not give a detailed definition of the phrase or attempt to explain it on a scientific level. Other than briefly addressing issues of irreducible complexity, Expelled examines it as a political issue. The scientific theory of evolution is portrayed by the film as contributing to fascism, the Holocaust, communism, atheism, and eugenics. Expelled has been used in private screenings to legislators as part of the Discovery Institute intelligent design campaign for Academic Freedom bills. Review screenings were restricted to churches and Christian groups, and at a special pre-release showing, one of the interviewees, PZ Myers, was refused admission. The American Association for the Advancement of Science describes the film as dishonest and divisive propaganda aimed at introducing religious ideas into public school science classrooms, and the Anti-Defamation League has denounced the film's allegation that evolutionary theory influenced the Holocaust. The film includes interviews with scientists and academics who were misled into taking part by misrepresentation of the topic and title of the film. Skeptic Michael Shermer describes his experience of being repeatedly asked the same question without context as "surreal". == Criticism == === Scientific criticism === Advocates of intelligent design seek to keep God and the Bible out of the discussion, and present intelligent design in the language of science as though it were a scientific hypothesis.
For a theory to qualify as scientific, it is expected to be: Consistent Parsimonious (sparing in its proposed entities or explanations; see Occam's razor) Useful (describes and explains observed phenomena, and can be used in a predictive manner) Empirically testable and falsifiable (potentially confirmable or disprovable by experiment or observation) Based on multiple observations (often in the form of controlled, repeated experiments) Correctable and dynamic (modified in the light of observations that do not support it) Progressive (refines previous theories) Provisional or tentative (is open to experimental checking, and does not assert certainty) For any theory, hypothesis, or conjecture to be considered scientific, it must meet most, and ideally all, of these criteria. The fewer criteria are met, the less scientific it is; if it meets only a few or none at all, then it cannot be treated as scientific in any meaningful sense of the word. Typical objections to defining intelligent design as science are that it lacks consistency, violates the principle of parsimony, is not scientifically useful, is not falsifiable, is not empirically testable, and is not correctable, dynamic, progressive, or provisional. Intelligent design proponents seek to change this fundamental basis of science by eliminating "methodological naturalism" from science and replacing it with what the leader of the intelligent design movement, Phillip E. Johnson, calls "theistic realism". Intelligent design proponents argue that naturalistic explanations fail to explain certain phenomena and that supernatural explanations provide a simple and intuitive explanation for the origins of life and the universe. Many intelligent design followers believe that "scientism" is itself a religion that promotes secularism and materialism in an attempt to erase theism from public life, and they view their work in the promotion of intelligent design as a way to return religion to a central role in education and other public spheres. It has been argued that methodological naturalism is not an assumption of science, but a result of science well done: the God explanation is the least parsimonious, so according to Occam's razor, it cannot be a scientific explanation. The failure to follow the procedures of scientific discourse and the failure to submit work to the scientific community that withstands scrutiny have weighed against intelligent design being accepted as valid science. The intelligent design movement has not published a properly peer-reviewed article supporting ID in a scientific journal, and has failed to publish supporting peer-reviewed research or data. The only article published in a peer-reviewed scientific journal that made a case for intelligent design was quickly withdrawn by the publisher for having circumvented the journal's peer-review standards. The Discovery Institute says that a number of intelligent design articles have been published in peer-reviewed journals, but critics, largely members of the scientific community, reject this claim and state intelligent design proponents have set up their own journals with peer review that lack impartiality and rigor, consisting entirely of intelligent design supporters. Further criticism stems from the fact that the phrase intelligent design makes use of an assumption of the quality of an observable intelligence, a concept that has no scientific consensus definition. 
The characteristics of intelligence are assumed by intelligent design proponents to be observable without specifying what the criteria for the measurement of intelligence should be. Critics say that the design detection methods proposed by intelligent design proponents are radically different from conventional design detection, undermining the key elements that make it possible as legitimate science. Intelligent design proponents, they say, are proposing both searching for a designer without knowing anything about that designer's abilities, parameters, or intentions (which scientists do know when searching for the results of human intelligence), as well as denying the distinction between natural/artificial design that allows scientists to compare complex designed artifacts against the background of the sorts of complexity found in nature. Among a significant proportion of the general public in the United States, the major concern is whether conventional evolutionary biology is compatible with belief in God and in the Bible, and how this issue is taught in schools. The Discovery Institute's "teach the controversy" campaign promotes intelligent design while attempting to discredit evolution in United States public high school science courses. The scientific community and science education organizations have replied that there is no scientific controversy regarding the validity of evolution and that the controversy exists solely in terms of religion and politics. === Arguments from ignorance === Eugenie C. Scott, along with Glenn Branch and other critics, has argued that many points raised by intelligent design proponents are arguments from ignorance. In the argument from ignorance, a lack of evidence for one view is erroneously argued to constitute proof of the correctness of another view. Scott and Branch say that intelligent design is an argument from ignorance because it relies on a lack of knowledge for its conclusion: lacking a natural explanation for certain specific aspects of evolution, we assume intelligent cause. They contend most scientists would reply that the unexplained is not unexplainable, and that "we don't know yet" is a more appropriate response than invoking a cause outside science. Particularly, Michael Behe's demands for ever more detailed explanations of the historical evolution of molecular systems seem to assume a false dichotomy, where either evolution or design is the proper explanation, and any perceived failure of evolution becomes a victory for design. Scott and Branch also contend that the supposedly novel contributions proposed by intelligent design proponents have not served as the basis for any productive scientific research. In his conclusion to the Kitzmiller trial, Judge John E. Jones III wrote that "ID is at bottom premised upon a false dichotomy, namely, that to the extent evolutionary theory is discredited, ID is confirmed." This same argument had been put forward to support creation science at the McLean v. Arkansas (1982) trial, which found it was "contrived dualism", the false premise of a "two model approach". Behe's argument of irreducible complexity puts forward negative arguments against evolution but does not make any positive scientific case for intelligent design. It fails to allow for scientific explanations continuing to be found, as has been the case with several examples previously put forward as supposed cases of irreducible complexity. 
=== Possible theological implications === Intelligent design proponents often insist that their claims do not require a religious component. However, various philosophical and theological issues are naturally raised by the claims of intelligent design. Intelligent design proponents attempt to demonstrate scientifically that features such as irreducible complexity and specified complexity could not arise through natural processes, and therefore required repeated direct miraculous interventions by a Designer (often a Christian concept of God). They reject the possibility of a Designer who works merely through setting natural laws in motion at the outset, in contrast to theistic evolution (to which even Charles Darwin was open). Intelligent design is distinct because it asserts repeated miraculous interventions in addition to designed laws. This contrasts with other major religious traditions of a created world in which God's interactions and influences do not work in the same way as physical causes. The Roman Catholic tradition makes a careful distinction between ultimate metaphysical explanations and secondary, natural causes. The concept of direct miraculous intervention raises other potential theological implications. If such a Designer does not intervene to alleviate suffering even though capable of intervening for other reasons, some imply the designer is not omnibenevolent (see problem of evil and related theodicy). Further, repeated interventions imply that the original design was not perfect and final, and thus pose a problem for any who believe that the Creator's work had been both perfect and final. Intelligent design proponents seek to explain the problem of poor design in nature by insisting that we have simply failed to understand the perfection of the design (for example, proposing that vestigial organs have unknown purposes), or by proposing that designers do not necessarily produce the best design they can, and may have unknowable motives for their actions. In 2005, the director of the Vatican Observatory, the Jesuit astronomer George Coyne, set out theological reasons for accepting evolution in an August 2005 article in The Tablet, and said that "Intelligent design isn't science even though it pretends to be. If you want to teach it in schools, intelligent design should be taught when religion or cultural history is taught, not science." In 2006, he "condemned ID as a kind of 'crude creationism' which reduced God to a mere engineer." Critics state that the wedge strategy's "ultimate goal is to create a theocratic state". === God of the gaps === Intelligent design has also been characterized as a God-of-the-gaps argument, which has the following form: There is a gap in scientific knowledge. The gap is filled with acts of God (or intelligent designer) and therefore proves the existence of God (or intelligent designer). A God-of-the-gaps argument is the theological version of an argument from ignorance. A key feature of this type of argument is that it merely answers outstanding questions with explanations (often supernatural) that are unverifiable and ultimately themselves subject to unanswerable questions. Historians of science observe that the astronomy of the earliest civilizations, although astonishing and incorporating mathematical constructions far in excess of any practical value, proved to be misdirected and of little importance to the development of science because they failed to inquire more carefully into the mechanisms that drove the heavenly bodies across the sky. 
It was the Greek civilization that first practiced science, although not yet as a formally defined experimental science, but nevertheless an attempt to rationalize the world of natural experience without recourse to divine intervention. In this historically motivated definition of science any appeal to an intelligent creator is explicitly excluded for the paralysing effect it may have on scientific progress. == Legal challenges in the United States == === Kitzmiller trial === Kitzmiller v. Dover Area School District was the first direct challenge brought in the United States federal courts against a public school district that required the presentation of intelligent design as an alternative to evolution. The plaintiffs successfully argued that intelligent design is a form of creationism, and that the school board policy thus violated the Establishment Clause of the First Amendment to the United States Constitution. Eleven parents of students in Dover, Pennsylvania, sued the Dover Area School District over a statement that the school board required be read aloud in ninth-grade science classes when evolution was taught. The plaintiffs were represented by the American Civil Liberties Union (ACLU), Americans United for Separation of Church and State (AU) and Pepper Hamilton LLP. The National Center for Science Education acted as consultants for the plaintiffs. The defendants were represented by the Thomas More Law Center. The suit was tried in a bench trial from September 26 to November 4, 2005, before Judge John E. Jones III. Kenneth R. Miller, Kevin Padian, Brian Alters, Robert T. Pennock, Barbara Forrest and John F. Haught served as expert witnesses for the plaintiffs. Michael Behe, Steve Fuller and Scott Minnich served as expert witnesses for the defense. On December 20, 2005, Judge Jones issued his 139-page findings of fact and decision, ruling that the Dover mandate was unconstitutional, and barring intelligent design from being taught in Pennsylvania's Middle District public school science classrooms. On November 8, 2005, there had been an election in which the eight Dover school board members who voted for the intelligent design requirement were all defeated by challengers who opposed the teaching of intelligent design in a science class, and the current school board president stated that the board did not intend to appeal the ruling. In his finding of facts, Judge Jones made the following condemnation of the "Teach the Controversy" strategy: Moreover, ID's backers have sought to avoid the scientific scrutiny which we have now determined that it cannot withstand by advocating that the controversy, but not ID itself, should be taught in science class. This tactic is at best disingenuous, and at worst a canard. The goal of the IDM is not to encourage critical thought, but to foment a revolution which would supplant evolutionary theory with ID. === Reaction to Kitzmiller ruling === Judge Jones himself anticipated that his ruling would be criticized, saying in his decision that: Those who disagree with our holding will likely mark it as the product of an activist judge. If so, they will have erred as this is manifestly not an activist Court. Rather, this case came to us as the result of the activism of an ill-informed faction on a school board, aided by a national public interest law firm eager to find a constitutional test case on ID, who in combination drove the Board to adopt an imprudent and ultimately unconstitutional policy. 
The breathtaking inanity of the Board's decision is evident when considered against the factual backdrop which has now been fully revealed through this trial. The students, parents, and teachers of the Dover Area School District deserved better than to be dragged into this legal maelstrom, with its resulting utter waste of monetary and personal resources. As Jones had predicted, John G. West, Associate Director of the Center for Science and Culture, said: The Dover decision is an attempt by an activist federal judge to stop the spread of a scientific idea and even to prevent criticism of Darwinian evolution through government-imposed censorship rather than open debate, and it won't work. He has conflated Discovery Institute's position with that of the Dover school board, and he totally misrepresents intelligent design and the motivations of the scientists who research it. Newspapers have noted that the judge is "a Republican and a churchgoer". The decision has been examined in a search for flaws and conclusions, partly by intelligent design supporters aiming to avoid future defeats in court. In its Winter issue of 2007, the Montana Law Review published three articles. In the first, David K. DeWolf, John G. West and Casey Luskin, all of the Discovery Institute, argued that intelligent design is a valid scientific theory, the Jones court should not have addressed the question of whether it was a scientific theory, and that the Kitzmiller decision will have no effect at all on the development and adoption of intelligent design as an alternative to standard evolutionary theory. In the second Peter H. Irons responded, arguing that the decision was extremely well reasoned and spells the death knell for the intelligent design efforts to introduce creationism in public schools, while in the third, DeWolf, et al., answer the points made by Irons. However, fear of a similar lawsuit has resulted in other school boards abandoning intelligent design "teach the controversy" proposals. === Anti-evolution legislation === A number of anti-evolution bills have been introduced in the United States Congress and State legislatures since 2001, based largely upon language drafted by the Discovery Institute for the Santorum Amendment. Their aim has been to expose more students to articles and videos produced by advocates of intelligent design that criticise evolution. They have been presented as supporting "academic freedom", on the supposition that teachers, students, and college professors face intimidation and retaliation when discussing scientific criticisms of evolution, and therefore require protection. Critics of the legislation have pointed out that there are no credible scientific critiques of evolution, and an investigation in Florida of allegations of intimidation and retaliation found no evidence that it had occurred. The vast majority of the bills have been unsuccessful, with the one exception being Louisiana's Louisiana Science Education Act, which was enacted in 2008. In April 2010, the American Academy of Religion issued Guidelines for Teaching About Religion in K–12 Public Schools in the United States, which included guidance that creation science or intelligent design should not be taught in science classes, as "Creation science and intelligent design represent worldviews that fall outside of the realm of science that is defined as (and limited to) a method of inquiry based on gathering observable and measurable evidence subject to specific principles of reasoning." 
However, these worldviews as well as others "that focus on speculation regarding the origins of life represent another important and relevant form of human inquiry that is appropriately studied in literature or social sciences courses. Such study, however, must include a diversity of worldviews representing a variety of religious and philosophical perspectives and must avoid privileging one view as more legitimate than others." == Status outside the United States == === Europe === In June 2007, the Council of Europe's Committee on Culture, Science and Education issued a report, The dangers of creationism in education, which states "Creationism in any of its forms, such as 'intelligent design', is not based on facts, does not use any scientific reasoning and its contents are pathetically inadequate for science classes." In describing the dangers posed to education by teaching creationism, it described intelligent design as "anti-science" and involving "blatant scientific fraud" and "intellectual deception" that "blurs the nature, objectives and limits of science" and links it and other forms of creationism to denialism. On October 4, 2007, the Council of Europe's Parliamentary Assembly approved a resolution stating that schools should "resist presentation of creationist ideas in any discipline other than religion", including "intelligent design", which it described as "the latest, more refined version of creationism", "presented in a more subtle way". The resolution emphasises that the aim of the report is not to question or to fight a belief, but to "warn against certain tendencies to pass off a belief as science". In the United Kingdom, public education includes religious education, and there are many faith schools that teach the ethos of particular denominations. When it was revealed that a group called Truth in Science had distributed DVDs produced by Illustra Media featuring Discovery Institute fellows making the case for design in nature, and claimed they were being used by 59 schools, the Department for Education and Skills (DfES) stated that "Neither creationism nor intelligent design are taught as a subject in schools, and are not specified in the science curriculum" (part of the National Curriculum, which does not apply to private schools or to education in Scotland). The DfES subsequently stated that "Intelligent design is not a recognised scientific theory; therefore, it is not included in the science curriculum", but left the way open for it to be explored in religious education in relation to different beliefs, as part of a syllabus set by a local Standing Advisory Council on Religious Education. In 2006, the Qualifications and Curriculum Authority produced a "Religious Education" model unit in which pupils can learn about religious and nonreligious views about creationism, intelligent design and evolution by natural selection. On June 25, 2007, the UK Government responded to an e-petition by saying that creationism and intelligent design should not be taught as science, though teachers would be expected to answer pupils' questions within the standard framework of established scientific theories. Detailed government "Creationism teaching guidance" for schools in England was published on September 18, 2007. It states that "Intelligent design lies wholly outside of science", has no underpinning scientific principles, or explanations, and is not accepted by the science community as a whole. 
Though it should not be taught as science, "Any questions about creationism and intelligent design which arise in science lessons, for example as a result of media coverage, could provide the opportunity to explain or explore why they are not considered to be scientific theories and, in the right context, why evolution is considered to be a scientific theory." However, "Teachers of subjects such as RE, history or citizenship may deal with creationism and intelligent design in their lessons." The British Centre for Science Education lobbying group has the goal of "countering creationism within the UK" and has been involved in government lobbying in the UK in this regard. Northern Ireland's Department for Education says that the curriculum provides an opportunity for alternative theories to be taught. The Democratic Unionist Party (DUP) – which has links to fundamentalist Christianity – has been campaigning to have intelligent design taught in science classes. A former DUP Member of Parliament, David Simpson, has sought assurances from the education minister that pupils will not lose marks if they give creationist or intelligent design answers to science questions. In 2007, Lisburn city council voted in favor of a DUP recommendation to write to post-primary schools asking what their plans are to develop teaching material in relation to "creation, intelligent design and other theories of origin". Plans by Dutch Education Minister Maria van der Hoeven to "stimulate an academic debate" on the subject in 2005 caused a severe public backlash. After the 2006 elections, she was succeeded by Ronald Plasterk, described as a "molecular geneticist, staunch atheist and opponent of intelligent design". In reaction to this situation in the Netherlands, the Director General of the Flemish Secretariat of Catholic Education (VSKO) in Belgium, Mieke Van Hecke, declared: "Catholic scientists already accepted the theory of evolution for a long time and that intelligent design and creationism doesn't belong in Flemish Catholic schools. It's not the tasks of the politics to introduce new ideas, that's task and goal of science." === Australia === The status of intelligent design in Australia is somewhat similar to that in the UK (see Education in Australia). In 2005, the Australian Minister for Education, Science and Training, Brendan Nelson, raised the notion of intelligent design being taught in science classes. The public outcry caused the minister to quickly concede that the correct forum for intelligent design, if it were to be taught, is in religion or philosophy classes. The Australian chapter of Campus Crusade for Christ distributed a DVD of the Discovery Institute's documentary Unlocking the Mystery of Life (2002) to Australian secondary schools. Tim Hawkes, the head of The King's School, one of Australia's leading private schools, supported use of the DVD in the classroom at the discretion of teachers and principals. === Relation to Islam === Muzaffar Iqbal, a notable Pakistani-Canadian Muslim, signed "A Scientific Dissent From Darwinism", a petition from the Discovery Institute. Ideas similar to intelligent design have been considered respected intellectual options among Muslims, and in Turkey many intelligent design books have been translated. In Istanbul in 2007, public meetings promoting intelligent design were sponsored by the local government, and David Berlinski of the Discovery Institute was the keynote speaker at a meeting in May 2007.
=== Relation to ISKCON === In 2011, the International Society for Krishna Consciousness (ISKCON) Bhaktivedanta Book Trust published an intelligent design book titled Rethinking Darwin: A Vedic Study of Darwinism and Intelligent Design. The book included contributions from intelligent design advocates William A. Dembski, Jonathan Wells and Michael Behe as well as from Hindu creationists Leif A. Jensen and Michael Cremo. == See also == == Notes == == References == == Bibliography == Pigliucci, Massimo (2010). "Science in the Courtroom: The Case against Intelligent Design" (PDF). Nonsense on Stilts: How to Tell Science from Bunk. Chicago, IL: University of Chicago Press. pp. 160–186. ISBN 978-0-226-66786-7. LCCN 2009049778. OCLC 457149439. == Further reading ==
Wikipedia/Intelligent_design
Automotive design is the process of developing the appearance (and to some extent the ergonomics) of motor vehicles, including automobiles, motorcycles, trucks, buses, coaches, and vans. The functional design and development of a modern motor vehicle is typically done by a large team from many different disciplines also included within automotive engineering; however, design roles are not associated with requirements for professional- or chartered-engineer qualifications. Automotive design in this context focuses primarily on developing the visual appearance or aesthetics of vehicles, while also becoming involved in the creation of product concepts. Automotive design as a professional vocation is practiced by designers who may have an art background and a degree in industrial design or in transportation design. For the terminology used in the field, see the glossary of automotive design. == Design elements == The task of the design team is usually split into three main aspects: exterior design, interior design, and color and trim design. Graphic design is also an aspect of automotive design; this is generally shared amongst the design team as the lead designer sees fit. The design focuses not only on the isolated outer shape of automobile parts, but concentrates on the combination of form and function, starting from the vehicle package. The aesthetic value will need to correspond to ergonomic functionality and utility features as well. In particular, vehicular electronic components and parts pose additional challenges for automotive designers, who must keep up to date with the latest information and knowledge associated with emerging vehicular gadgetry, particularly dashtop mobile devices, like GPS navigation, satellite radio, HD radio, mobile TV, MP3 players, video playback, and smartphone interfaces. Though not all new vehicular gadgets become factory-standard items, some of them may be integral to determining the future course of specific vehicle models. === Exterior design === The design team(s) responsible for the exterior of the vehicle develops the proportions, shape, and surface details of the vehicle. Exterior design is first done by a series of manual sketches and digital drawings. Progressively, more detailed drawings are executed and approved by appropriate layers of management, followed by digital rendering to images. Consumer feedback is generally sought at this point to help iteratively refine vehicle concepts according to the targeted market and will continue throughout the rest of the design refinement process. After more progressive refinement, industrial plasticine and/or digital models are developed from and along with the drawings and images. The data from these models are then used to create quarter-scale and finally full-sized mock-ups of the final design. With three- and five-axis CNC milling machines, the clay model is first designed in a computer program and then "carved" using the machine and large amounts of clay. Even in times of photorealistic (three-dimensional) software and virtual models on power walls, the clay model is still the most important tool for a final evaluation of the exterior design of a vehicle and, therefore, is used throughout the industry. === Interior design === The designer responsible for the vehicle's interior develops the proportions, shape, placement, and surfaces for the instrument panel, seats, door trim panels, headliner, pillar trims, etc.
Here the emphasis is on ergonomics and the comfort of the passengers. The procedure here is the same as with exterior design (sketch, digital model, and clay model). === Color and trim design === The color and trim (or color and materials) designer is responsible for the research, design, and development of all interior and exterior colors and materials used on a vehicle. These include paints, plastics, fabric designs, leather, grains, carpet, headliner, wood trim, and so on. Color, contrast, texture, and pattern must be carefully combined to give the vehicle a unique interior environment experience. Color and trim designers work closely with the exterior and interior designers and draw inspiration from other design disciplines such as industrial design, fashion, home furnishing, architecture, and sometimes product design. Specific research is done into global trends to design for projects two to three model years in the future. Trend boards are created from this research to keep track of design influences as they relate to the automotive industry. The designer then uses this information to develop themes and concepts that are then further refined and tested on the vehicle models. === Graphic design === The design team also develops graphics for items such as badges, decals, dials, switches, kick or tread strips, or liveries. === Computer-Aided Design and Class-A development === In the initial stages, the sketches and renderings are transformed into 3D digital surface models and renderings with math data for real-time evaluation. In succeeding phases of the development process, the 3D model must be fully developed to meet the designer's aesthetic requirements as well as all engineering and manufacturing requirements. The fully developed CAS digital model is then re-developed for manufacturing to meet Class-A surface standards, which involve both technical and aesthetic requirements. A product engineering team further develops this data. These modelers usually have a background in industrial design or, in the case of some Class-A modelers, tooling engineering. Autodesk Alias and ICEM Surf are the two most widely used software tools for Class-A development. Uniform surface continuity is critical to meeting Class-A surface standards. Patch-to-patch continuity is graded from G0 through G3, with G3 guaranteeing the cleanest transitions (a sketch of these continuity grades follows the development-cycle overview below). == Development process == === Design development cycle === Several manufacturers have slightly varied development cycles for designing an automobile, but in practice, these are the following: design and consumer research, concept development sketching, CAS (computer-aided styling), clay modeling, interior buck model, vehicle ergonomics, Class-A surface development, color and trim, and vehicle graphics. The design process occurs concurrently with the work of product engineers, who engineer the styling data to meet performance, manufacturing, and safety regulations. From the mid-phase onward, back-and-forth interactions between the designers and product engineers culminate in a finished, manufacturing-ready product. Apart from this, the engineering team works in parallel in the following areas: product engineering (body-in-white sheet metal design and plastics engineering), NVH development, prototype development, powertrain engineering, physical vehicle validation, tool and die development, and manufacturing process design.
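The geometric continuity grades mentioned in the Class-A development section above have standard mathematical meanings. As a simplified sketch (these are textbook differential-geometry conditions, not taken from this article, and the exact tolerances used in production Class-A work vary by manufacturer), consider two surface patches S_1 and S_2 meeting along a shared edge parametrised by arc length s, with normal curvature \kappa_n measured across the edge:

\begin{align*}
G^{0} &: \; S_1(s) = S_2(s) && \text{(positions match: no gap or step)} \\
G^{1} &: \; G^{0} \text{ and a common tangent plane along the edge} && \text{(no visible crease)} \\
G^{2} &: \; G^{1} \text{ and } \kappa_{n,1}(s) = \kappa_{n,2}(s) && \text{(curvature matches: highlight lines flow smoothly)} \\
G^{3} &: \; G^{2} \text{ and } \tfrac{d\kappa_{n,1}}{ds} = \tfrac{d\kappa_{n,2}}{ds} && \text{(rate of change of curvature matches)}
\end{align*}

Higher grades therefore require that progressively higher derivatives of the surface agree across the joint, which is why G3 gives the cleanest patch-to-patch transitions in reflections.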
=== Development team === The design team for a specific model consists of a chief designer and an exterior as well as an interior designer. In some cases, all three roles are done by one designer. Several junior designers are involved in the development process as well and make specific contributions, all overseen by the chief designer. Apart from this, the color and trim designer works closely with other designers. The clay model team and digital model team work closely with the styling team, all located within the studio. Apart from this, there would be studio heads, studio managers, and prototype engineers who would work across all teams in the studio. The total team size for developing a full car usually ranges from 25 to 40 members, and the development time lasts more than 24 months before the design is signed off for tooling and production. After that, a smaller team would be working until the vehicle launch. == History == === United States === In the United States, automotive design reached a turning point in the 1920s when the American national automobile market began reaching saturation. To maintain unit sales, General Motors head Alfred P. Sloan Jr. suggested annual model-year design changes to convince car owners that they needed to buy a new replacement each year, an idea borrowed from the bicycle industry (though Sloan usually gets the credit or blame). Critics called his strategy planned obsolescence. Sloan preferred the term "dynamic obsolescence". This strategy had far-reaching effects on the auto business, the field of product design, and eventually the American economy. The smaller automakers could not maintain the pace and expense of yearly re-styling. Henry Ford did not like the model-year change because he clung to an engineer's notions of simplicity, economies of scale, and design integrity. GM surpassed Ford's sales in 1931 and became the dominant company in the industry thereafter. The frequent design changes also made it necessary to use a body-on-frame rather than the lighter but less adaptable monocoque design used by most European automakers. In the 1930s, Chrysler's innovations with aerodynamics helped launch the Chrysler Airflow in 1934, which was revolutionary and radical compared to contemporary vehicles. However, inadequate consumer acceptance of the advanced appearance of the cars forced a re-design of succeeding models of the Airflow. This marketing experience made the entire industry take note of the high risks involved in incorporating major design advancements into their production cars. A major influence on American auto styling and marketing was Harley Earl, who is credited with inventing the idea of a concept car, and who brought the tailfin and other aeronautical design references to auto design starting with the rear fenders of the 1948 Cadillac. Another notable designer was Chrysler group's designer Virgil Exner, who developed the Forward Look design in the mid-1950s. Exner is also credited with using wind tunnel testing to justify incorporating tailfins, thus moving the company away from boxy-looking cars into more aerodynamic and futuristic designs that were influenced by rockets after WWII. Other influential automotive designers include Raymond Loewy, who was responsible for a number of Studebaker vehicles such as the Avanti, and Gordon Buehrig, who was responsible for the Auburn 851, as well as the Cord 810 and 812.
Starting in the 1960s, Dick Teague, who spent most of his career with American Motors Corporation (AMC), originated the concept of using interchangeable body panels so as to create a wide array of different vehicles using the same stampings, starting with the AMC Cavalier. Teague was responsible for automotive designs such as the two-seat AMC AMX muscle car, the subcompact Gremlin, the Pacer, and Matador coupe, as well as the original, market-segment-creating Jeep Cherokee. Additionally, during the 1960s, Ford's first-generation Ford Mustang and Thunderbird marked another era, leading into new market segments from Detroit. The Ford Mustang achieved record sales in its first year of production and established the pony car segment. Personal injury litigation has affected the design and appearance of the car in the 20th century. === Europe === Until World War I, most automakers were concerned with mechanical reliability rather than the external appearance of their cars. Later, luxury and aesthetics became a demand, and also an effective marketing tool. Each nation's strong cultural identity was reflected in the exterior and interior designs of its cars. World War II slowed the progress, but after the early 1950s, Italian designers set the trend and remained the driving force until the early part of the 1980s. ==== France ==== In France, notable designs came from Bugatti and Avions Voisin. Of the mass-selling cars, Citroën launched their vehicles with innovative designs and engineering, aided by the styling of Flaminio Bertoni as evident from the Citroën DS. After World War II, with the decline of the coachbuilding industry, French automakers (except Citroën) followed British and other popular trends until they gained financial stability. During the 1980s, manufacturers like Renault cultivated their own strong design identities with designers like Patrick Le Quément. Peugeot, which had been dependent on Pininfarina since the early post-war period, established its own brand identity from the 1980s onwards. Its sister company, Citroën, still retains its distinctive French innovations in its designs. ==== Great Britain ==== Great Britain was Europe's leading manufacturer of automobiles until the late-1960s. During that era, there were more British-based automakers than in the rest of Europe combined. The British automobile industry catered to all segments, from compact, budget, sports, and utility cars to luxury models. Car design in Britain was markedly different from other European designs, largely because British designers were not influenced by other European art or design movements, and because British clay modelers used a different sweep set. British cars until World War II were sold in most of the British colonies. Innovations in vehicle packaging and chassis engineering combined with global familiarity with British designs meant vehicles were acceptable to public tastes at that time. British skilled resources such as panel beaters, die machinists, and clay modelers were also available partly due to their involvement with the motorsport industry. Still, during the 1960s, British manufacturers sought professional help from Italian designers and studios such as Giovanni Michelotti, Ercole Spada, and Pininfarina. Notable British contributions to automobile designs were the Morris Mini by Alec Issigonis, several Jaguar cars by Sir William Lyons and Malcolm Sayer, the Aston Martin DB Series, and several cars from Triumph and MG.
Ford Europe, based in Great Britain, is notable for the Ford Sierra line, a work of Uwe Bahnsen, Robert Lutz, and Patrick le Quément. Other notable British designers include William Towns for Aston Martin, David Bache for Land Rover and Range Rover, and Ian Callum for Jaguar. In the late 1980s, Royden Axe (previously Chrysler UK design director) and Gordon Sked, along with Gerry McGovern, produced most notably the MGF and Rover 800. During the complex changes in the British car industry in the 1990s and beyond, a mix of designers worked on all of the remaining mainstream brands under one design director and in one studio. The director most involved was Geoff Upex, with his team of Richard Woolley, Dave Saddington, George Thomson, Alan Mobberley, and Martin Peach (colour and trim). Together they created the Rover 75, 45, and 25 (previously 400 and 200), the L322 Range Rover, the T5 platform-based Discovery and Range Rover Sport, the Freelander 2, and the Mini, as requested by BMW before the company was sold. ==== Germany ==== Germany is often considered the birthplace of industrial design with the Bauhaus School of Design, before it was closed down by the Nazi regime. Ferdinand Porsche and his family played a significant role in German design. Mercedes-Benz passenger cars were also in the luxury segment and played an important role in German car design. After the 1980s, German design evolved into a distinctive Teutonic style, often complementing their highly engineered cars suited to Autobahns. The early German design cues of the present day owe some part to Italian designers like Giovanni Michelotti, Ercole Spada, Bruno Sacco, and Giorgetto Giugiaro. During the mid- and late-20th century, one of the most influential coach builders/designers in Germany was Karmann. German designs started gaining popularity after the 1980s, notably after the formation of Audi. Volkswagen, which was dependent on Marcello Gandini, Giorgetto Giugiaro, and Karmann, later developed its contemporary design language along with Audi. BMW entered automobile design with sporty-looking everyday sedans styled by Giovanni Michelotti. These models were later enhanced by Ercole Spada into the 1980s, and Klaus Luthe until the mid-1990s. The American-born designer Chris Bangle was hired by BMW in the late-1990s to redefine the brand. Bangle incorporated new single-press technology for compound curves to add controversial styling elements to his designs. The Porsche family's contribution was instrumental in the evolution of Porsche cars, while the Italian designer Bruno Sacco helped create various Mercedes models from the 1960s to the 1990s. ==== Italy ==== In Italy, Fiat and Alfa Romeo played a major role in car design. Many coachbuilders were dependent on these two major manufacturers. Italian manufacturers had a large presence in motorsports, leading to several sports car manufacturers like Ferrari, Lancia, Lamborghini, Maserati, etc. During the late-1950s, Italian automobile designs gained global popularity, coinciding with modern fashion and architecture around the world at that time. Various design and technical schools in Turin turned out designers on a large scale. By the late-1960s, almost all Italian coachbuilders transformed into design studios catering to automakers around the world. The trend continued in the 1990s when the Japanese and Korean manufacturers sourced designs from these styling studios. One example is Pininfarina.
Some Italian designers whose design services were sought globally include Aldo Brovarone, Giovanni Michelotti, Ercole Spada, Bruno Sacco, Marcello Gandini, Giorgetto Giugiaro, and Walter de Silva. ==== Scandinavia ==== Sweden had Volvo and Saab as domestic automakers, and the nation's northern location meant that cars needed to withstand Nordic climate conditions. The Scandinavian design elements are known for their minimalism and simplicity. One of the early original Scandinavian designs was the Saab 92001 by Sixten Sason and Gunnar Ljungström. Koenigsegg, founded in the 1990s, became Sweden's first domestic producer of high-end sports cars, with many of its models featuring in-house designs by Swedish designers. ==== Czechoslovakia ==== From before World War II until the early 1990s, Czechoslovakia had a strong presence in the automotive industry with manufacturers like Skoda, Jawa, Tatra, CZ, Praga, and Zetor. Czech automobiles were generally known for their originality in mechanical simplicity, and their designs were remarkably Bohemian, as evident from Tatra cars and Jawa motorcycles. During the Communist era, design began to fall behind, and ultimately the domestic automakers ended up as subsidiaries of EU-based companies. == See also == == References == == Further reading == Nikolaos Gkikas, ed. (2013). Automotive Ergonomics: Driver – Vehicle Interaction. Boca Raton, FL.: CRC Press. ISBN 9781439894255. Lamm, Michael; Hollis, Dave (1996). A century of automotive style - 100 years of American car design. Stockton, CA: Lamm-Morada. ISBN 9780932128072. == External links == Learning materials related to New car design at Wikiversity Media related to Automotive design at Wikimedia Commons
Wikipedia/Automotive_design
DesignSpark PCB is a free electronic design automation software package for printed circuit boards. Although there is no charge for the software, the user must register with DesignSpark.com to unlock the program, and it displays advertisements which must be acknowledged before the user can begin working. == Background == DesignSpark PCB resulted from a collaboration between electronics distributor RS Components and EDA software developer 'Number One Systems'; it is a fork of their long-established Easy-PC EDA package. Electronic design automation (EDA) software is a sub-class of computer-aided design (CAD) software. == Projects == Projects are used in DesignSpark PCB to organise design files. A project can have an unlimited number of schematic sheets and one PCB layout file. == Schematic capture == DesignSpark PCB has a schematic editor. Schematics are used to draw up circuit diagrams and connections. A given project can have multiple schematic sheets that together combine to form the complete design. There are some useful third-party libraries that can be added. == PCB layout == Schematics are translated to a PCB layout file with a PCB Wizard. A PCB layout editor is then used to refine the physical layout of the printed circuit board. A design may have several iterations before a finalised printed circuit board is passed for production. == Autorouter == DesignSpark PCB includes an auto-router which automatically places tracks between components on a PCB layout. Components can also be auto-placed. DesignSpark PCB produces Gerber and Excellon drill files. These standard files are accepted by PCB fabrication companies and are used to build a printed circuit board. == DesignSpark PCB Pro == DesignSpark PCB Pro was a paid upgrade from the free DesignSpark PCB software. It was aimed at professional electronic design engineers in SMEs, with an expanded feature set compared to the free DesignSpark PCB software. It was discontinued in September 2022 and its features were merged into DesignSpark PCB as part of paid subscription plans. === Background === DesignSpark PCB Pro also resulted from a collaboration between electronics distributor RS Components and EDA software developer 'WestDev Ltd'. === Features === DesignSpark PCB Pro includes specialised tools to aid professionals with customising and managing large projects involving multi-layer PCBs. Noteworthy features referred to on the developer's website include: Blind and Buried Vias - to connect between different layers of a complex printed circuit board Fully configurable design rules editor and checker - to verify a circuit design and pass quality checks for production Hierarchical Schematic Design - split multi-sheet schematics into building blocks to make them readable and manageable Panel design editor - visualise and manage assembly of finished PCB designs prior to manufacturing Variant Manager - to specify different design variants to meet global market requirements Automatic shape-based and gridless router Advanced routing modes – trunk routing for differential pairs and buses, pull tight, auto mitre, and auto neck tracks Dual screen support Cross-hatch copper pours, teardrops, bullet, and asymmetric pad shapes are supported. == See also == Comparison of EDA Software DesignSpark Mechanical == References == == Further reading == "Monthly subscriptions add to RS DesignSpark tools". electronicsweekly.com "Профессиональная работа в САПР DesignSpark PCB" (English title: Professional work in the DesignSpark PCB CAD.)
(in Russian) "DesignSpark PCB - Version 12 release highlights". rs-online.com == External links == Official DesignSpark PCB website Official DesignSpark PCB Forum
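To give a rough sense of what the Gerber output mentioned in the autorouter section above looks like, here is a minimal sketch in Python that writes a toy RS-274X file describing a single straight trace. It illustrates the general shape of the format only and is not DesignSpark PCB's own export code; the file name, aperture size, and coordinates are arbitrary assumptions, and a real board file would contain far more (apertures per net class, flashes for pads, regions, attributes).

# Minimal sketch: write a toy RS-274X (Gerber) file with one straight trace.
# Illustrative only; values are arbitrary assumptions, not DesignSpark PCB output.
def write_toy_gerber(path: str) -> None:
    lines = [
        "%FSLAX46Y46*%",         # coordinate format: 4 integer / 6 decimal digits
        "%MOMM*%",               # units: millimetres
        "%ADD10C,0.250*%",       # define aperture D10: circle, 0.25 mm diameter
        "G01*",                  # linear interpolation mode
        "D10*",                  # select aperture D10
        "X1000000Y1000000D02*",  # move (light off) to (1.0 mm, 1.0 mm)
        "X5000000Y1000000D01*",  # draw (light on) to (5.0 mm, 1.0 mm)
        "M02*",                  # end of file
    ]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

write_toy_gerber("toy_trace.gbr")  # hypothetical output file name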
Wikipedia/DesignSpark_PCB
Assembly modeling is a technology and method used by computer-aided design and product visualization computer software systems to handle multiple files that represent components within a product. The components within an assembly are represented as solid or surface models. == Overview == The designer generally has access to models that others are working on concurrently. For example, several people may be designing one machine that has many parts. New parts are added to an assembly model as they are created. Each designer has access to the assembly model while it is a work in progress, and while working on their own parts. The design evolution is visible to everyone involved. Depending on the system, it might be necessary for the users to acquire the latest saved versions of each individual component to update the assembly. The individual data files describing the 3D geometry of individual components are assembled together through a number of sub-assembly levels to create an assembly describing the whole product. All CAD and CPD systems support this form of bottom-up construction. Some systems, via associative copying of geometry between components, also allow a top-down method of design. Components can be positioned within the product assembly using absolute coordinate placement methods or by means of mating conditions. Mating conditions are definitions of the relative position of components with respect to each other; for example, alignment of the axes of two holes, or the distance between two faces. The final position of all components based on these relationships is calculated using a geometry constraint engine built into the CAD or visualization package. The importance of assembly modeling in achieving the full benefits of PLM has led to ongoing advances in this technology. These include the use of lightweight data structures such as JT that allow visualization of and interaction with large amounts of product data, direct interfaces between digital mock-ups and PDM systems, and active digital mock-up technology that unites the ability to visualize the assembly mock-up with the ability to measure, analyze, simulate, design and redesign. == See also == System testing (e2e: testing components as a whole) == References ==
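As a toy illustration of the mating conditions described above, the sketch below (Python, with invented names; it is not the API of any actual CAD, CPD, or PDM system) records components by the part files that describe them and resolves one very simple mate, a face-to-face distance along a single axis. A real geometry constraint engine solves all mates simultaneously in six degrees of freedom rather than placing components one mate at a time.

# Toy sketch of bottom-up assembly modelling: components referenced by their own
# geometry files, plus one kind of mating condition (an offset along the Z axis).
# All names are hypothetical; real constraint engines are far more general.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    geometry_file: str                                   # path to the part's own 3D data
    position: list = field(default_factory=lambda: [0.0, 0.0, 0.0])

@dataclass
class DistanceMate:
    """Keep `moving` a fixed offset above `fixed` along Z (e.g. two planar faces)."""
    fixed: Component
    moving: Component
    offset: float

    def apply(self) -> None:
        # Place the moving component directly relative to the fixed one.
        fx, fy, fz = self.fixed.position
        self.moving.position = [fx, fy, fz + self.offset]

base = Component("base_plate", "base_plate.step")
cover = Component("cover", "cover.step")
DistanceMate(fixed=base, moving=cover, offset=12.5).apply()
print(cover.position)  # [0.0, 0.0, 12.5]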
Wikipedia/Assembly_modelling
Electric guitar design is a type of industrial design where the looks and efficiency of the shape as well as the acoustical aspects of the guitar are important factors. In the past, many guitars have been designed with various odd shapes as well as very practical and convenient solutions to improve the usability of the instrument. == History == George Beauchamp is occasionally credited with inventing the electric guitar by designing a lap steel guitar with a pickup, though a lap steel does not have functional frets or a standard guitar-type neck. The earliest "electrified" fretted guitars were hollow-bodied archtop acoustic guitars to which some form of electromagnetic transducer had been attached. The first commercial electrified guitar was the Electro-Spanish Ken Roberts model produced from 1931 to 1936 by Rickenbacker, with one Beauchamp-designed pickup and an early "Vib-rola" hand vibrato created by Doc Kauffman. === Early years === Paul Tutmarc built and may have offered an electric solid-body guitar as early as 1932, under the brand "Audiovox". Tutmarc is also credited, along with Art Stimpson, as co-inventor of the magnetic pickup, and with creating the fretted electric bass guitar. Bob Wisner worked for Tutmarc, converting tube radio amplifiers into guitar amplifiers (eventually developing his own amplifier circuits) so Tutmarc's instruments could be sold matched up with amplifiers. Paul was unsuccessful in obtaining a patent for his magnetic pickup, as it was too similar to the coil sensor used in telephone microphones. Audiovox production was handed over to Paul's son, Bud Tutmarc, who continued building these instruments under the "Bud-Electro" brand until the early 1950s. Bud Tutmarc had been delegated by the senior Tutmarc the task of winding the pickup coils, and he continued producing them for his own guitars. He used horseshoe magnets in a single-coil and later a hum-cancelling dual-coil configuration. When Wisner was hired by Rickenbacher (later Rickenbacker), he may have passed along Tutmarc's magnetic pickup design, which strongly resembles the pickup on their cast aluminum lap steel guitar, nicknamed The Frying Pan or The Pancake Guitar, released in 1933. Another early solid-body electric guitar was built by musician and inventor Les Paul in the early 1940s, working after hours in the Epiphone Guitar factory. His log guitar (a wood post with a neck attached to it and two hollow body halves attached to the sides for appearance only) was patented, and is often considered to be the first of its kind, although it shares nothing of design or hardware in common with the solid-body "Les Paul" model later created by Gibson. === Fender === In 1950 and 1951, amplifier builder Leo Fender designed the first commercially successful solid-body electric guitar with a single magnetic pickup, which was initially named the "Esquire". The later two-pickup version of the Esquire was called the "Broadcaster". The bolt-on neck was consistent with Leo Fender's belief that the instrument design should be modular to allow cost-effective and consistent manufacture and assembly, as well as simplified repair or replacement. The Broadcaster name was changed to Telecaster because of a legal dispute over the name. In 1954, the Fender Electric Instrument Company introduced the Fender Stratocaster, or "Strat". It was positioned as a deluxe model and offered various product improvements and innovations over the Telecaster, often based upon responses from working musicians.
These innovations included an ash or alder double-cutaway body design, with an integrated vibrato mechanism (called a synchronized tremolo by Fender, thus beginning a confusion of the terms that still continues), three single-coil pickups, and "comfort contours" where the body edges are significantly contoured. Leo Fender is also credited with developing the first commercially successful electric bass, the Fender Precision Bass, introduced in 1951. === Gibson === The more traditionally designed and styled Gibson solid-body instruments were a contrast to Leo Fender's modular designs and heavily contoured "slab" bodies, with the most notable differentiator being the method of neck attachment and the scale of the neck (Gibson-24.75", Fender-25.5"). Gibson, like many guitar manufacturers, had long offered semi-acoustic guitars with pickups, and previously rejected Les Paul and his "log" electric in the 1940s. In apparent response to the Telecaster, Gibson introduced the first Gibson Les Paul solid body guitar in 1952 (Les Paul was brought in only towards the end of the design process for details of the design and for marketing endorsement [2]). Features of the Les Paul include a solid mahogany body with a carved maple top (much like a violin and earlier Gibson archtop hollow body electric guitars) and contrasting edge binding, two single-coil "soapbar" pickups, a 24¾" scale mahogany neck with a more traditional glued-in "set" neck joint, binding on the edges of the fretboard, and a tilt-back headstock with three machine heads (tuners) to a side. The earliest version had a combination bridge and trapeze-tailpiece design as specified by Les Paul himself, but was largely disliked and discontinued after the first year. Gibson then developed the Tune-o-matic bridge and separate stop tailpiece, an adjustable non-vibrato design still in wide use. By 1957, Gibson had made the final major change to the Les Paul of today - the humbucking pickup, or humbucker. The pickup, invented by Seth Lover, was a dual-coil pickup which featured two windings connected out-of-phase and reverse-wound, in order to cancel the 60-cycle mains hum that plagued single-coil pickups; as a byproduct, the two-coil design also produces a distinctive, more "mellow" tone which appeals to many guitarists. === Vox === In 1962 Vox introduced the pentagonal Phantom guitar, originally made in England but soon after made by EKO of Italy. It was followed a year later by the teardrop-shaped Mark VI, the prototype of which was used by Brian Jones of The Rolling Stones. Vox guitars also experimented with onboard effects and electronics. The Teardrop won a prize for its design. In the mid 1960s, as the sound of electric 12 string guitar became popular, Vox introduced the Phantom XII and Mark XII electric 12 string guitars. Vox produced many more traditional 6 and 12 string electric guitars in both England and Italy. It may be noted that the Phantom guitar shape was quite similar to that of first fretted electric bass guitar, the Audiovox "Electric Bass Fiddle" of 1934. In 1966 Vox introduced the revolutionary but problematic GuitarOrgan, a Phantom VI guitar with internal organ electronics. The instrument's trigger mechanism required a specially-wired plectrum that completed circuit connections to each fret, resulting in a very wide and unwieldy neck. John Lennon was given one in a bid to secure an endorsement, though this never panned out. 
According to Up-Tight: the Velvet Underground Story, Brian Jones of the Rolling Stones also tried one; when asked by the Velvets if it "worked", his answer was negative. The instrument never became popular, but it was a precursor to the modern guitar synthesizer. === Multiscale/Fanned-Fret Guitars === In recent years, guitars and basses with multi-scale or fanned-fret fingerboards have started to appear. These instruments are intended to offer an advantage over classical fixed-scale guitars and basses by providing more freedom in setting the tension of each string at the design and manufacturing stages. This may result in more uniform string tension, and may also produce timbre and tonal characteristics somewhat different from those of the usual fixed-scale instruments. == Variant designs == Materials other than wood have been used. Travis Bean and Kramer built guitars with aluminium necks. The Gittler guitar was a "skeleton" design from the late 1970s, largely stainless steel. In 1979, for the Chicago NAMM trade show, Ibanez built a 76-pound solid-brass guitar, primarily as an attention-getting gimmick but also to demonstrate that while such extreme mass would provide very long note sustain (a characteristic sought by many guitarists), the tonal qualities suffered. Various plastics and composites have been employed. Some hollow-body Danelectro guitars had Masonite body shells. The Ampeg guitars designed by Dan Armstrong pioneered acrylic as a body material. Fiberglass was used by Valco (called "Res-O-Glas") for some models of hollow-body "Airline" guitars sold through Montgomery Ward. Carbon fiber has been used for necks as well as bodies. 1991 saw the introduction of guitar designer Jol Dantzig's first truly workable acoustic-electric hybrid guitar design. The instrument, called the DuoTone, was conceived while Dantzig was at Hamer Guitars. (Dantzig was also the designer of the first 12 string bass.) Adopted by players such as Ty Tabor, Stone Gossard, Elvis Costello and Jeff Tweedy, the DuoTone was a full "duplex" instrument that could switch between acoustic and electric tones. Recently there have been many entries in the hybrid category (capable of both acoustic and electric tones) including the T5 by Taylor, Michael Kelly's "Hybrid," the Parker Fly and the Anderson Crowdster. In the 1990s the band Neptune began building unconventional metal guitars with third-bridge options incorporated. A predecessor of this type of guitar is the Pencilina. Linda Manzer designed the Pikasso guitar with multiple necks. == See also == == References ==
Wikipedia/Electric_guitar_design
In the United States, a design patent is a form of legal protection granted to the ornamental design of an article of manufacture. Design patents are a type of industrial design right. Ornamental designs of jewelry, furniture, beverage containers and computer icons are examples of objects that are covered by design patents. A similar intellectual property right, a registered design, can be obtained in other countries. In Kenya, Japan, South Korea and Hungary, industrial designs are registered after an official novelty search is performed. In the countries of the European Community, one need only pay an official fee and meet other formal requirements for registration (e.g. Community design at EUIPO, Germany, France, Spain). For the member states of WIPO, coverage is afforded by registration at WIPO and examination by the designated member states in accordance with the Geneva Act of the Hague Agreement. This allows for broad worldwide coverage of a design by filing a single application in a single language (e.g. English). == Protections == A US design patent covers the ornamental design of an article of manufacture. An object with a design that is substantially similar in appearance to the design claimed in a design patent cannot be made, used, sold or imported into the United States without the permission of the patent holder. The object does not have to be identical for the patent to be infringed; it only has to be substantially similar in overall appearance. Design patents with line drawings cover only the features shown as solid lines. Items shown in broken lines are not covered. This is one of the reasons Apple was awarded a jury verdict in the US case of Apple v. Samsung. Apple's patent showed much of their iPhone design as broken lines. It did not matter whether Samsung's design differed in those areas. The fact that the solid lines of the patent were the same as Samsung's design meant that Samsung infringed the Apple design patent. Design patents are subject to both the novelty and non-obviousness standards of the patent code. However, because design patents are not measured based on the utility of the designs to which they are directed, there is an open question as to how to measure the non-obviousness of an ornamental design. There is substantial case law, however, on how to evaluate design patent non-obviousness. Once a design patent has been granted, its term of protection begins. In the United States, for a design patent whose application was submitted on or after May 13, 2015, that patent has a term limit of 15 years from its date of grant. For a design patent whose application was submitted prior to that date, the term limit is 14 years from the date of grant. During this period the patent holder is entitled to bring a lawsuit against any entity that infringes on that patent; once the term expires, it may not be renewed and the design patent ceases to receive protection in US courts. == Computer images == Both novel fonts and computer icons can be covered by design patents. Icons are only covered, however, when they are displayed on a computer screen, thus making them part of an article of manufacture with practical utility. Screen layouts can also be protected with design patents. == Publication of application == In China, Canada, Japan, South Africa, and the United States, a design patent application is not published and is kept secret until granted. In Brazil, the applicant can request that the application be kept in secrecy for a period of 180 days from the filing date.
This will also delay the prosecution and granting of the application for 180 days. In Japan, an applicant can request that a design be kept secret for a period of up to 3 years after the registration has been granted. == Notable design patents == In 1842, George Bruce was awarded the first design patent, U.S. patent D1. The design patent was for a new font. In 1879, Frédéric Auguste Bartholdi was awarded design patent U.S. patent D11,023 for the Statue of Liberty. This patent covered the sale of small copies of the statue. Proceeds from the sale of the statues helped raise money to build the full statue in New York Harbor. In 1919, three design patents were granted for the badge of the American Legion, U.S. patent D54,296; the badge of the American Legion Women's Auxiliary, U.S. patent D55,398; and the badge of the Sons of the American Legion, U.S. patent D92,187. The original terms of these patents were to have expired in 1933, but Congress has continually extended their protection. The patents were extended for an additional fourteen-year term by an amendment to the National Defense Authorization Act in 2007 that passed the Senate on June 22, 2006. In 1936, Frank A. Redford was awarded U.S. patent D98,617 for the Wigwam Motel. Apple Inc. owns various patents regarding the design of the iPhone smartphone line and its related products. == Other forms of protection == === Utility patents === US utility patents protect the functionality of a given item, i.e., how a product works. Provided the maintenance fees are paid, utility patents are generally valid for up to 20 years from the date of filing (with some exceptions). Design patents cover the ornamental appearance of an item. Design patents can be invalidated if the design is dictated solely by function (e.g. the outline of a key blade blank). Design patents are valid for 14 years from the date of issue if filed prior to May 13, 2015, or 15 years from the date of issue if filed on or after May 13, 2015. There are no maintenance fees. "In general terms, a “utility patent” protects the way an article is used and works (35 U.S.C. 101), while a “design patent” protects the way an article looks (35 U.S.C. 171). The ornamental appearance for an article includes its shape/configuration or surface ornamentation applied to the article, or both. Both design and utility patents may be obtained on an article if its novelty resides both in its utility and ornamental appearance." (MPEP, "Distinction Between Design and Utility Patents") === Copyright === Copyright prevents nonfunctional items from being copied. To show copyright infringement, the plaintiff must show the infringing item was copied from the original. The copyrighted artistic expression must either have no substantial practical utility (e.g. a statue) or be separable from the useful substrate (e.g. picture on a coffee mug). Design patents, on the other hand, protect the ornamental aspects of an article of manufacture from being infringed. One does not have to show that the infringing item was copied from the original. Thus a design that was arrived at independently can still infringe a design patent. Many objects can be covered by both copyright and design patents. The Statue of Liberty is one such example. === Trademark and trade dress === Trademarks and trade dress are used to protect consumers from confusion as to the source of specified goods.
To get trademark protection, the trademark owner must show that the mark is non-functional, is distinctive, and is not likely to be confused with other trademarks for items in the same general class. The trademarks can last indefinitely as long as they are used in commerce. Design patents are only granted if the design is novel and not obvious over prior art designs, generally even those of different articles of manufacture than the patented object. An actual shield of a given shape, for example, might be cited as prior art against a design patent on a computer icon with a shield shape. However, recent case law has held that the shape of an art tool cannot be cited as anticipatory prior art against the substantially identical shape of a lip implant. The validity of design patents is not affected by whether or not the design is commercialized. Items can be covered by both trademarks and design patents. The contour bottle of Coca-Cola, for example, was covered by a now expired design patent, U.S. patent D48,160, but is still however protected by at least a US registered trademark. == See also == Geschmacksmuster Industrial design rights Intellectual property organizations Office for Harmonization in the Internal Market, Designs (OHIM) (European Union) Patent Japanese design law == References == == External links == The United States Design Patent Application Filing Guide The Canadian Intellectual Property Office The State Intellectual Property Office of China The Office for Harmonization in the Internal Market - European Community Design Taiwanese Intellectual Property Office Kenya Industrial Property Institute Korean Intellectual Property Office
Wikipedia/Design_patent
Design by contract (DbC), also known as contract programming, programming by contract and design-by-contract programming, is an approach for designing software. It prescribes that software designers should define formal, precise and verifiable interface specifications for software components, which extend the ordinary definition of abstract data types with preconditions, postconditions and invariants. These specifications are referred to as "contracts", in accordance with a conceptual metaphor with the conditions and obligations of business contracts. The DbC approach assumes all client components that invoke an operation on a server component will meet the preconditions specified as required for that operation. Where this assumption is considered too risky (as in multi-channel or distributed computing), the inverse approach is taken, meaning that the server component tests that all relevant preconditions hold true (before, or while, processing the client component's request) and replies with a suitable error message if not. == History == The term was coined by Bertrand Meyer in connection with his design of the Eiffel programming language and first described in various articles starting in 1986 and the two successive editions (1988, 1997) of his book Object-Oriented Software Construction. Eiffel Software applied for trademark registration for Design by Contract in December 2003, and it was granted in December 2004. The current owner of this trademark is Eiffel Software. Design by contract has its roots in work on formal verification, formal specification and Hoare logic. The original contributions include: A clear metaphor to guide the design process The application to inheritance, in particular a formalism for redefinition and dynamic binding The application to exception handling The connection with automatic software documentation == Description == The central idea of DbC is a metaphor on how elements of a software system collaborate with each other on the basis of mutual obligations and benefits. The metaphor comes from business life, where a "client" and a "supplier" agree on a "contract" that defines, for example, that: The supplier must provide a certain product (obligation) and is entitled to expect that the client has paid its fee (benefit). The client must pay the fee (obligation) and is entitled to get the product (benefit). Both parties must satisfy certain obligations, such as laws and regulations, applying to all contracts. Similarly, if the method of a class in object-oriented programming provides a certain functionality, it may: Expect a certain condition to be guaranteed on entry by any client module that calls it: the method's precondition—an obligation for the client, and a benefit for the supplier (the method itself), as it frees it from having to handle cases outside of the precondition. Guarantee a certain property on exit: the method's postcondition—an obligation for the supplier, and obviously a benefit (the main benefit of calling the method) for the client. Maintain a certain property, assumed on entry and guaranteed on exit: the class invariant. The contract is semantically equivalent to a Hoare triple which formalises the obligations. This can be summarised by the "three questions" that the designer must repeatedly answer in the contract: What does the contract expect? What does the contract guarantee? What does the contract maintain? Many programming languages have facilities to make assertions like these. 
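For illustration, a minimal Python sketch of such assertions might look as follows; the BoundedBuffer class, its method names and its checks are invented for this example and are not taken from Eiffel or any particular contract library.

```python
# Illustrative sketch only: expressing a precondition, postcondition and class
# invariant with plain assertions. The class and its names are hypothetical.

class BoundedBuffer:
    def __init__(self, capacity: int):
        assert capacity > 0                       # precondition on construction
        self._items = []
        self._capacity = capacity
        assert self._invariant()                  # invariant established

    def _invariant(self) -> bool:
        # Class invariant: the buffer never exceeds its capacity.
        return 0 <= len(self._items) <= self._capacity

    def put(self, item):
        assert len(self._items) < self._capacity  # precondition: buffer not full
        old_count = len(self._items)
        self._items.append(item)
        assert len(self._items) == old_count + 1  # postcondition: exactly one more item
        assert self._invariant()                  # invariant preserved

    def delete(self):
        assert self._items                        # precondition: data must be present
        item = self._items.pop()
        assert self._invariant()                  # invariant preserved
        return item

buf = BoundedBuffer(2)
buf.put("x")
buf.delete()
# buf.delete()  # would violate the precondition and fail hard with AssertionError
```

A client that calls delete on an empty buffer breaks the precondition and the program fails immediately, which is the "fail hard" behaviour discussed below.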
However, DbC considers these contracts to be so crucial to software correctness that they should be part of the design process. In effect, DbC advocates writing the assertions first. Contracts can be written as code comments, enforced by a test suite, or both, even if there is no special language support for contracts. The notion of a contract extends down to the method/procedure level; the contract for each method will normally contain the following pieces of information: Acceptable and unacceptable input values or types, and their meanings Return values or types, and their meanings Error and exception condition values or types that can occur, and their meanings Side effects Preconditions Postconditions Invariants (more rarely) Performance guarantees, e.g. for time or space used Subclasses in an inheritance hierarchy are allowed to weaken preconditions (but not strengthen them) and strengthen postconditions and invariants (but not weaken them). These rules approximate behavioural subtyping. All class relationships are between client classes and supplier classes. A client class is obliged to make calls to supplier features where the resulting state of the supplier is not violated by the client call. Subsequently, the supplier is obliged to provide a return state and data that does not violate the state requirements of the client. For instance, a supplier data buffer may require that data is present in the buffer when a delete feature is called. Subsequently, the supplier guarantees to the client that when a delete feature finishes its work, the data item will, indeed, be deleted from the buffer. Another design-by-contract concept is the class invariant. The class invariant guarantees (for the local class) that the state of the class will be maintained within specified tolerances at the end of each feature execution. When using contracts, a supplier should not try to verify that the contract conditions are satisfied—a practice known as offensive programming—the general idea being that code should "fail hard", with contract verification being the safety net. DbC's "fail hard" property simplifies the debugging of contract behavior, as the intended behaviour of each method is clearly specified. This approach differs substantially from that of defensive programming, where the supplier is responsible for figuring out what to do when a precondition is broken. More often than not, the supplier throws an exception to inform the client that the precondition has been broken, and in both cases—DbC and defensive programming alike—the client must figure out how to respond to that. In such cases, DbC makes the supplier's job easier. Design by contract also defines criteria for correctness for a software module: If the class invariant AND precondition are true before a supplier is called by a client, then the invariant AND the postcondition will be true after the service has been completed. When making calls to a supplier, a software module should not violate the supplier's preconditions. Design by contract can also facilitate code reuse, since the contract for each piece of code is fully documented. The contracts for a module can be regarded as a form of software documentation for the behavior of that module. == Performance implications == Contract conditions should never be violated during execution of a bug-free program. Contracts are therefore typically only checked in debug mode during software development. Later at release, the contract checks are disabled to maximize performance.
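As a concrete, if simplified, illustration of this development/release split, the sketch below uses Python's built-in assert statement; the withdraw function and the file name are invented for the example and do not come from any particular contract library.

```python
# contracts_demo.py -- illustrative only; function and file name are hypothetical.

def withdraw(balance: float, amount: float) -> float:
    assert amount > 0, "precondition: amount must be positive"
    assert amount <= balance, "precondition: insufficient funds"
    new_balance = balance - amount
    assert 0 <= new_balance < balance, "postcondition: balance decreased, not negative"
    return new_balance

if __name__ == "__main__":
    # Development run:    python contracts_demo.py     -> AssertionError (contract violated)
    # Release-style run:  python -O contracts_demo.py  -> asserts are stripped, prints -150.0
    print(withdraw(100.0, 250.0))
```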
In many programming languages, contracts are implemented with assert. Asserts are by default compiled away in release mode in C/C++, and similarly deactivated in C# and Java. Launching the Python interpreter with "-O" (for "optimize") as an argument will likewise cause the Python code generator to not emit any bytecode for asserts. This effectively eliminates the run-time costs of asserts in production code—irrespective of the number and computational expense of asserts used in development—as no such instructions will be included in production by the compiler. == Relationship to software testing == Design by contract does not replace regular testing strategies, such as unit testing, integration testing and system testing. Rather, it complements external testing with internal self-tests that can be activated both for isolated tests and in production code during a test-phase. The advantage of internal self-tests is that they can detect errors before they manifest themselves as invalid results observed by the client. This leads to earlier and more specific error detection. The use of assertions can be considered to be a form of test oracle, a way of testing the design by contract implementation. == Language support == === Languages with native support === Languages that implement most DbC features natively include: Ada 2012 Ciao Clojure Cobra C++ (since C++26) D Dafny Eiffel Fortress Kotlin Mercury Oxygene (formerly Chrome and Delphi Prism) Racket (including higher order contracts, and emphasizing that contract violations must blame the guilty party and must do so with an accurate explanation) Rust (experimental) Sather Scala SPARK (via static analysis of Ada programs) Vala Vienna Development Method (VDM) Additionally, the standard method combination in the Common Lisp Object System has the method qualifiers :before, :after and :around that allow writing contracts as auxiliary methods, among other uses. == See also == Component-based software engineering Correctness (computer science) Defensive programming Fail-fast system Formal methods Hoare logic Modular programming Program derivation Program refinement Strong typing Test-driven development Typestate analysis == Notes == == Bibliography == == External links == The Power of Design by Contract(TM) A top-level description of DbC, with links to additional resources. Building bug-free O-O software: An introduction to Design by Contract(TM) Older material on DbC. Benefits and drawbacks; implementation in RPS-Obix Using Code Contracts for Safer Code
Wikipedia/Design_by_contract
There is a large body of knowledge that designers call upon and use during the design process to match the ever-increasing complexity of design problems. Design knowledge can be classified into two categories: product knowledge and design process knowledge. == Product Knowledge == Product knowledge has been studied extensively, and a number of modeling techniques have been developed. Most of them are tailored to specific products or specific aspects of the design activities. For example, geometric modeling is used mainly for supporting detailed design, while knowledge modeling is used to support conceptual design. Based on these techniques, a design repository project at NIST attempts to model three fundamental facets of an artifact representation: the physical layout of the artifact (form), an indication of the overall effect that the artifact creates (function), and a causal account of the operation of the artifact (behavior). The recent NIST research effort towards the development of the basic foundations of the next generation of CAD systems suggested a core representation for design information called the NIST core product model (CPM) and a set of derived models defined as extensions of the CPM. The NIST core product model has been developed to unify and integrate product or assembly information. The CPM provides a base-level product model that is: not tied to any vendor software; open; non-proprietary; expandable; independent of any one product development process; capable of capturing the engineering context that is most commonly shared in product development activities. The core model focuses on artifact representation including function, form, behavior, material, physical and functional decompositions, and relationships among these concepts. The entity-relationship data model influences the model heavily; accordingly, it consists of two sets of classes, called object and relationship, equivalent to the UML class and association class, respectively. == Design Process Knowledge == Design process knowledge can be described at two levels: design activities and design rationale. The importance of representation for design rationale has been recognized, but it is a more complex issue that extends beyond artifact function. The design structure matrix (DSM) has been used for modeling the design process (activities), and some related research efforts have been conducted. For example, a web-based prototype system for modeling the product development process using a multi-tiered DSM has been developed at MIT. However, few research endeavors have been found on design rationale. == Representation Scenarios == In terms of representation scenarios, design knowledge can also be categorized into off-line and on-line knowledge, and it can further be represented using ontologies. === Off-line Knowledge === Off-line knowledge refers to existing knowledge representations, including design knowledge in handbooks, design "know-how", and so on; on-line knowledge refers to the new design knowledge created in the course of design activities by designers themselves. For off-line knowledge, there are two representation approaches. One is to abstract and categorize existing knowledge, including experience, into a series of design principles, rationales and constraints. TRIZ is a good instance of this approach. The other is to represent a collection of design knowledge as cases for description. Case-based design is an example of this approach.
The key issue is the computerization of design knowledge representation. For instance, researchers at the Engineering Design Centre at Lancaster University, UK, established a unique knowledge representation methodology and knowledge base vocabulary based on the theory of domains, design principles and computer modeling. They developed a software tool for engineering knowledge management. The tool provides an engineering system designer with the capability to search a knowledge base of past solutions, and other known technologies, to explore viable alternatives for product design. === On-line Knowledge === On-line knowledge representation captures dynamic design knowledge in a format suitable for design reuse and archiving. A few research efforts have been found in this area. Blessing proposes the process-based support system (PROSUS), based on a model of the design process rather than the product. It uses a design matrix to represent the design process as a structured set of issues and activities. Together with the common product data model (CPDM), PROSUS supports the capture of all outputs of the design activity. === Ontologies === Ontologies are being used for product representation. Research suggests that there is a need to provide computer support that will supply clear and complete design knowledge and also facilitate designer intervention and customization during the decision-making activities in the design process. For example, WebCADET is a design support system that uses distributed Web-based AI tools. It uses the "AI as text" approach, where knowledge-based systems (KBSs) can be seen as a medium to facilitate the communication of design knowledge between designers. The system can provide support for designers when searching for design knowledge. == References ==
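The design structure matrix mentioned above can be sketched compactly in code. The following Python fragment is a purely illustrative example: the task names and dependency data are invented, and this is not the MIT prototype referred to in the text. It shows how a DSM records information dependencies between design activities and how coupled activities, which force iteration, can be identified.

```python
# Minimal illustrative sketch of a design structure matrix (DSM).
# Task names and dependencies are hypothetical examples.

tasks = ["concept", "layout", "analysis", "detailing"]

# dsm[i][j] == 1 means task i needs information produced by task j.
dsm = [
    # concept  layout  analysis  detailing
    [0, 0, 0, 0],  # concept
    [1, 0, 1, 0],  # layout depends on concept and on analysis (feedback)
    [0, 1, 0, 0],  # analysis depends on layout
    [0, 1, 0, 0],  # detailing depends on layout
]

# Coupled pairs (mutual dependency) indicate iteration loops in the design process.
coupled = [
    (tasks[i], tasks[j])
    for i in range(len(tasks))
    for j in range(i + 1, len(tasks))
    if dsm[i][j] and dsm[j][i]
]
print(coupled)  # [('layout', 'analysis')] -> these activities must be iterated together
```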
Wikipedia/Design_knowledge
A designer is a person who plans the form or structure of something before it is made, by preparing drawings or plans. In practice, anyone who creates tangible or intangible objects, products, processes, laws, games, graphics, services, or experiences can be called a designer. == Overview == A designer is someone who conceptualizes and creates new concepts, ideas, or products for consumption by the general public. This differs from an artist, who creates art for a select few to understand or appreciate. However, both domains require some understanding of aesthetics. Historically, the design of clothing, furniture, and other common artifacts was left mostly to tradition or to artisans specializing in making them by hand. With the increasing complexity of today's industrial society, and the demands of mass production, in which more time usually means more cost, production methods have become more complex, and with them the ways in which designs and their production are created. The classical areas are now subdivided into smaller and more specialized domains of design (landscape design, urban design, interior design, industrial design, furniture design, fashion design, and many more) according to the product designed or its means of production. Despite the various specializations within the design industry, all of them share similarities in approach, skills, and methods of working. Using design methods and design thinking to solve problems and create new solutions is the most important aspect of being a designer. Part of a designer's job is to get to know the audience they intend to serve. In education, the methods of teaching and the programs and theories followed vary according to the school and field of study. In industry, a design team for large projects is usually composed of a number of different types of designers and specialists. The relationships between team members will vary according to the proposed product, the processes of production or the research followed during the idea development, but normally everyone in the team has an opportunity to take part in the creation process. == Design professions == Different types of designers include: Animation Architecture Communication design Costume design Engineering design Fashion design Floral design Furniture design Game design Graphic design Industrial design Instructional design Interaction design Interior design Jewelry design Landscape design Logo design Lighting design Packaging design Product design Scenic design Service design Software design Sound design Strategic design Systems design Textile design Urban design User experience design User interface design Web design == See also == Architect Design Design engineer Design firm Design thinking Visual arts == References ==
Wikipedia/Designer
Sensory design aims to establish an overall diagnosis of the sensory perceptions of a product, and define appropriate means to design or redesign it on that basis. It involves observation of the diverse and varying situations in which a given product or object is used, in order to measure the users' overall opinion of the product and its positive and negative aspects in terms of tactility, appearance, sound and so on. Sensory assessment aims to quantify and describe, in a systematic manner, all human perceptions when confronted with a product or object. In contrast to traditional laboratory analysis, a sensory analysis of a product is carried out either by a panel of trained testers or by specialized test equipment designed to mimic human perception. The result allows researchers to establish a list of specifications and to set out a precise and quantified requirement. These are applied to materials and objects using various criteria: touch (textures, compliance, friction); vision (color, luminosity, shape, pattern); sounds and movements made when a product is handled; smell; taste; and temperature and perceived thermal properties. == Use in Transportation == In the transportation sphere, sensory analyses are sometimes translated into minor enhancements to the design of a vehicle interior, information system, or station environment to smooth some of the rougher edges of the travel experience. For example, specialized air purifying equipment can be used to design a more pleasant odor in train compartments. == Use in Food and Beverage Industry == Sensory design plays a critical role in the modern food and beverage industry, which attempts to maintain specific sensory experiences. In addition to smell and flavor, the color (e.g. ripe fruits) and texture of food (e.g. potato chips) are also important. Even the environment is important, as "Color affects the appetite, in essence, the taste of food". Food is a multi-sensorial experience in which all five senses (vision, touch, sound, smell and taste) come together to create a strong memory. In food marketing, the goal is to design products so that the food and beverages stimulate as many of the customers' senses as possible. At restaurants, many sensory aspects such as the interior design (vision), the texture of the chairs and tables (touch), the background music and noise level (sound), the openness of the kitchen and cooking scene (smell and vision), and of course the food itself (taste) all come together before a customer decides whether he or she likes the experience and wants to return. While multi-sensory experience was acknowledged in only a few categories in the past, the spectrum has since expanded in recognition of the importance of sensory design. Food used to be considered strictly an experience of taste. Now that the multi-sensorial character of food is recognized, marketers of food products and restaurants focus more on providing services that extend beyond the sense of taste. In recent research, the role of the vestibular sense, a system that contributes to the sense of balance and spatial orientation, has been highlighted in relation to food. Often referred to as "the sixth sense", the vestibular sense, expressed through people's postures while eating, can shape their perception of food. In general, people tend to rate food as better-tasting when they consume it while sitting down, compared to standing up.
The researchers conclude that this effect of posture on the perception of food, via the vestibular system, results from the different stress levels associated with different postures. == Use in Architecture == Just as food used to be regarded merely as an experience of taste, architecture in the past was treated as appealing only to the sense of vision, which is why architectural work was largely communicated through visual media such as photographs or television. In contrast, architecture has become a multi-sensorial experience in which people visit the architectural sites and perceive various sensory aspects such as the texture of the building, the background noise and scent of the surrounding area, and the overall look of the building in relation to nature and its setting. Furthermore, there is a type of design in the architecture field called "responsive architecture", which is design that interacts with people. This kind of architecture could support the occupants' lifestyle if sensory design is properly applied. For instance, if a responsive building is helping an occupant who wants to exercise more, sensory design can arrange environmental stimuli in time along the occupant's path, so that the space feeds the occupant's senses to inspire and teach exercise at just the right time and in just the right way. When it comes to the experience of architecture, the visual sense plays only a small part. This is also why architects, when designing, need to think of the "after-the-moment" experience for the occupants instead of merely the "in-the-moment" experience. == Sensory Design Technologies == While sensory assessment was classically limited to the perception of trained sensory experts, advances in sensors and computation have allowed objective, quantified measurements of sensory information to be acquired and communicated, leading to improved design communication, translation from prototype to production, and quality assurance. Sensory areas that have been objectively quantified include vision, touch, and smell. === Vision === In vision, both light and color are considered in sensory design. Early light meters (called extinction meters) relied on the human eye to gauge and quantify the amount of light. Subsequently, analog and digital light meters have been popularized for photography. Work by Lawrence Herbert in the 1960s led to a systematic combination of lighting and color samples required to quantify colors by the human eye. This became the basis for the Pantone Matching System. Combining this with specialized light meters allowed digital color meters to be invented and popularized. === Touch === Touch plays an important role in a variety of products and is increasingly considered in product design and marketing efforts, which has led to a more scientific approach to tactile design and marketing. Classically, the field of tribology has developed various tests to evaluate interacting surfaces in relative motion, with a focus on measuring friction, lubrication, and wear. However, these measurements do not correlate with human perception. Alternative methods for evaluating how materials feel were first popularized through work initiated at Kyoto University. The Kawabata evaluation system developed six measurements of how fabrics feel. The SynTouch Haptics Profiles are produced by the SynTouch Toccare Haptics Measurement System, which incorporates a biomimetic tactile sensor to quantify 15 dimensions of touch, based on psychophysics research performed with over 3000 materials. === Smell === Measuring odors has remained difficult.
A variety of techniques have been attempted but “Most measures have had a subjective component that makes them anachronistic with modern methodology in experimental behavioral science, indeterminate regarding the extent of individual differences, unusable with infra humans and of unproved ability to discern small differences”. New methods for robotic exploration of smell are being proposed. == References == == Bibliography == Joy Monice Malnar and Frank Vodvarka, Sensory Design, (Minneapolis: University of Minnesota Press, 2004). ISBN 0-8166-3959-0 (in French) Louise Bonnamy, Jean-François Bassereau, Régine Charvet-Pello. Design sensoriel. Techniques de l'ingénieur, 2009 (in French) Jean-François Bassereau, Régine Charvet-Pello. Dictionnaire des mots du sensoriel. Paris, Tec & Doc - Editions Lavoisier, 2011, 544 p. ISBN 2-7430-1277-3 == See also == Interaction design Communication design Environmental design Experience design WikID
Wikipedia/Sensory_design
Secure by design, in software engineering, means that software products and capabilities have been designed to be foundationally secure. Alternate security strategies, tactics and patterns are considered at the beginning of a software design, and the best are selected and enforced by the architecture, and they are used as guiding principles for developers. It is also encouraged to use strategic design patterns that have beneficial effects on security, even though those design patterns were not originally devised with security in mind. Secure by Design is increasingly becoming the mainstream development approach to ensure security and privacy of software systems. In this approach, security is considered and built into the system at every layer and starts with a robust architecture design. Security architectural design decisions are based on well-known security strategies, tactics, and patterns defined as reusable techniques for achieving specific quality concerns. Security tactics/patterns provide solutions for enforcing the necessary authentication, authorization, confidentiality, data integrity, privacy, accountability, availability, safety and non-repudiation requirements, even when the system is under attack. In order to ensure the security of a software system, not only is it important to design a robust intended security architecture but it is also necessary to map updated security strategies, tactics and patterns to software development in order to maintain security persistence. == Expect attacks == Malicious attacks on software should be assumed to occur, and care is taken to minimize impact. Security vulnerabilities are anticipated, along with invalid user input. Closely related is the practice of using "good" software design, such as domain-driven design or cloud native, as a way to increase security by reducing risk of vulnerability-opening mistakes—even though the design principles used were not originally conceived for security purposes. == Avoid security through obscurity == Generally, designs that work well do not rely on being secret. Often, secrecy reduces the number of attackers by demotivating a subset of the threat population. The logic is that if there is an increase in complexity for the attacker, the increased attacker effort to compromise the target will discourage them. While this technique implies reduced inherent risks, a virtually infinite set of threat actors and techniques applied over time will cause most secrecy methods to fail. While not mandatory, proper security usually means that everyone is allowed to know and understand the design because it is secure. This has the advantage that many people are looking at the source code, which improves the odds that any flaws will be found sooner (see Linus's law). The disadvantage is that attackers can also obtain the code, which makes it easier for them to find vulnerabilities to exploit. It is generally believed, though, that the advantage of the open source code outweighs the disadvantage. == Fewest privileges == Also, it is important that everything works with the fewest privileges possible (see the principle of least privilege). For example, a web server that runs as the administrative user ("root" or "admin") can have the privilege to remove files and users. 
A flaw in such a program could therefore put the entire system at risk, whereas a web server that runs inside an isolated environment, and only has the privileges for required network and filesystem functions, cannot compromise the system it runs on unless the security around it is itself also flawed. == Methodologies == Secure design should be a consideration during the development lifecycle (whichever development methodology is chosen). Some pre-built Secure By Design development methodologies exist (e.g. Microsoft Security Development Lifecycle). == Standards and legislation == Standards and legislation exist to aid secure design by controlling the definition of "secure" and providing concrete steps for testing and integrating secure systems. Some examples of standards which cover or touch on Secure By Design principles: ETSI TS 103 645, which is included in part in the UK Government "Proposals for regulating consumer smart product cyber security" ISO/IEC 27000-series, which covers many aspects of secure design. == Server/client architectures == In server/client architectures, the program at the other side may not be an authorised client and the client's server may not be an authorised server. Even when they are, a man-in-the-middle attack could compromise communications. Often the easiest way to break the security of a client/server system is not to attack the security mechanisms head-on, but to go around them. A man-in-the-middle attack is a simple example of this, because it can be used to collect the details needed to impersonate a user. This is why it is important to consider encryption, hashing, and other security mechanisms in the design, to ensure that information collected by a potential attacker does not grant access. Another key aspect of client-server security design is good coding practice. For example, following a known software design structure, such as client and broker, can help in designing a well-built structure with a solid foundation. Furthermore, if the software is to be modified in the future, it is even more important that it follow a logical separation between the client and server. If a programmer cannot clearly understand the dynamics of the program, they may end up adding or changing something that introduces a security flaw. Even with the best design this is always a possibility, but the better standardized the design, the less chance there is of this occurring. == See also == Computer security Cyber security standards Hardening Multiple Independent Levels of Security Security through obscurity Software Security Assurance == References == == External links == Secure Programming for Linux and Unix HOWTO Secure UNIX Programming FAQ Top 10 Secure Coding Practices Security by Design Principles
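The least-privilege idea discussed above can be made concrete with a short sketch. The following Python fragment is an illustrative, POSIX-only example of a service that starts with root privileges only long enough to bind a privileged port and then drops to an unprivileged account before handling any untrusted input; the account name "www-data" is an assumption, and a real service would add error handling and further sandboxing on top of this.

```python
# Minimal sketch of privilege dropping (POSIX-only, illustrative; the user name
# "www-data" is an assumption about the host system).
import os
import pwd
import socket

def drop_privileges(username: str = "www-data") -> None:
    if os.getuid() != 0:
        return  # already unprivileged; nothing to drop
    user = pwd.getpwnam(username)
    os.setgroups([])        # shed supplementary groups first
    os.setgid(user.pw_gid)  # set group before user, or setgid would be blocked
    os.setuid(user.pw_uid)  # irreversible: the process can no longer regain root

if __name__ == "__main__":
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 80))  # privileged port: needs root (or CAP_NET_BIND_SERVICE)
    server.listen()
    drop_privileges()             # from here on, a flaw cannot act with root privileges
    print("serving as uid", os.getuid())
```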
Wikipedia/Secure_by_design
Critical design uses design fiction and speculative design proposals to challenge assumptions and conceptions about the role objects play in everyday life. Critical design plays a similar role to product design, but does not emphasize an object's commercial purpose or physical utility. It is mainly used to share a critical perspective or inspire debate, while increasing awareness of social, cultural, or ethical issues in the eyes of the public. Critical design was popularized by Anthony Dunne and Fiona Raby through their firm, Dunne & Raby. Critical design can make aspects of the future physically present to provoke a reaction. "Critical design is critical thought translated into materiality. It is about thinking through design rather than through words and using the language and structure of design to engage people." It may be conflated with critical theory or the Frankfurt School, but it is not related to them. == Definition == A critical design object challenges an audience's preconceptions, provoking new ways of thinking about the object, its use, and the surrounding culture. Its opposite is affirmative design: design that reinforces the status quo. For a project to succeed in critical design, the viewer must be mentally engaged and willing to think beyond the expected and ordinary. Humor is important, but satire is not the goal. Many practitioners of critical design have never heard of the term itself, and/or would describe their work differently. Referring to it as critical design simply draws more attention to it, and emphasizes that design has applications beyond problem solving. It is more of an attitude than a style or movement; a position rather than a method. Critical design builds on this attitude by creatively critiquing concepts and ideologies, using fabricated artifacts to embody commentaries on everything from consumer culture to the #MeToo Movement. Regardless of its processes, critical design is often discussed as a unique approach in Design Research, perhaps because of its focus on critiquing widely held social, cultural, and technical beliefs. The process of designing such an object, as well as the presentation and narrative around the object itself, allows for reflection on existing cultural values, morals, and practices. In making such an object, critical designers frequently employ classic design processes—research, user experience, iteration—while working to conceptualize scenarios intended to highlight social, cultural, or political paradigms. Design as societal critique is not a new idea. == History == Italian Radical Design of the 1960s and 70s was highly critical of prevailing social values and design ideologies. The term "critical design" was first used in Anthony Dunne's book Hertzian Tales (1999) and further developed in Design Noir: The Secret Life of Electronic Objects (2001). According to Sanders, critical design probes the "ambiguous stimuli that designers send to people who then respond to them, providing insights for the design process." Uta Brandes identifies critical design as a discrete Design Research method and Bowen integrates it into human-centered design activities as a useful tool for stakeholders to critically think about possible futures. FABRICA, a communication research center owned by Italian fashion giant Benetton Group, has been actively involved in producing provocative imagery and critical design projects.
FABRICA's Visual Communication department, led by Omar Vulpinari, actively participates in critiquing social, political and environmental issues through global awareness campaigns for international magazines and organizations like UN-WHO. Several young artists who have produced critical design projects at FABRICA in recent years are Erik Ravello (Cuba), Yianni Hill (Australia), Marian Grabmayer (Austria), Priya Khatri (India), Andy Rementer (United States), and An Namyoung (South Korea). == Function == As a contribution to design practice, critical design broadens the vision of design beyond traditional practice. It is no longer limited to highlighting physical function in product design, though this causes some ambiguity in discussions of critical design's function within the design field. In his article, Matt Malpass addresses Larry Ligo's classification of five different types of function (structural articulation, physical function, psychological function, social function, and cultural-existential function), with a further discussion of how Modernism left a narrower understanding of function as physical utility, which leads to the ambiguity about critical design's function. As critical design focuses on the present social, cultural, and ethical implications of design objects and practice, its function is mostly a matter of social and cultural impact. In addition, critical design objects have considerable potential to contribute to the testing of ideas during the development of new technology. Dunne and Raby express concern that communication between specialists and the general public rarely forms a two-way discussion of new technology; it is usually limited to a one-way flow from specialists to the public. Critical design provides a stage for presenting scenarios, completing the dialogue between specialists and the general public, and helping to collect feedback from the public for further refinement before an idea has gone too far to be changed. == Critical play == Researcher Mary Flanagan wrote Critical Play: Radical Game Design in 2009, the same year that Lindsay Grace started the Critical Gameplay project. Grace's Critical Gameplay project is an internationally exhibited collection of video games that apply critical design. The games provoke questions about the way games are designed and played. The Critical Gameplay game Wait was awarded the Games for Change hall of fame award as one of the five most important games for social impact since 2003. The work has been shown at the Electronic Language International Festival, the Games, Learning & Society Conference, and the Conference on Human Factors in Computing Systems, among other notable events. == Critiques == As critical design has gained mainstream exposure, the discipline has itself been criticized by some for dramatizing so-called 'dystopian scenarios,' which may, in fact, be reflective of real-life conditions in some places in the world. Some see critical design as rooted in the fears of a wealthy, urban, western population and failing to engage with existing social problems. As an example, a project titled Republic of Salivation, by designers Michael Burton and Michiko Nitta, featured as part of MoMA's Design and Violence series, portrays a society plagued by overpopulation and food scarcity which is reliant on heavily modified, government-provided nutrient blocks.
Certain media responses to the work point to the "presumed naivety of the project," which presents a scenario that "might be dystopian to some, but in some other parts of the world it has been the reality for decades." == Critical acclaim == In recognition of their formalization of the field, Anthony Dunne and Fiona Raby were presented with the inaugural MIT Media Lab Award in June 2015, with director Joichi Ito pointing out that "[Dunne and Raby's] pioneering approach to critical design and its intersection with science, technology, art, and the humanities has changed the landscape of design education and practice worldwide." == Distinctions with Conceptual art == Conceptual art practice plays a role very similar to that of critical design, since both share critical perspectives with the public and act as commentators on issues, so the public may find it difficult to distinguish the two fields. However, Matt Malpass points out that the critical designer still applies the skills acquired through training and practice as a designer, but re-orientates these skills from a focus on practical ends to a focus on design work that functions symbolically, culturally, existentially, and discursively. Critical design objects are made precisely according to design principles and carefully follow the design and design research process. Also, critical design objects stay close to people's everyday lives; they tend to be tested on real people, with feedback gathered for further development. Conceptual art, by contrast, is usually associated with gallery spaces and mostly tends to use artistic media in the process. == See also == Social fiction Design fiction Critical making Critical technical practice Science fiction prototyping Speculative design Talk to Me (exhibition) (MoMA), 2011 == References ==
Wikipedia/Critical_design
Industrial design is a process of design applied to physical products that are to be manufactured by mass production. It is the creative act of determining and defining a product's form and features, which takes place in advance of the manufacture or production of the product. Industrial manufacture consists of predetermined, standardized and repeated, often automated, acts of replication, while craft-based design is a process or approach in which the form of the product is determined personally by the product's creator, largely concurrent with the act of its production. All manufactured products are the result of a design process, but the nature of this process can vary. It can be conducted by an individual or a team, and such a team could include people with varied expertise (e.g. designers, engineers, business experts, etc.). It can emphasize intuitive creativity or calculated scientific decision-making, and often emphasizes a mix of both. It can be influenced by factors as varied as materials, production processes, business strategy, and prevailing social, commercial, or aesthetic attitudes. Industrial design, as an applied art, most often focuses on a combination of aesthetics and user-focused considerations, but also often provides solutions for problems of form, function, physical ergonomics, marketing, brand development, sustainability, and sales. == History == === Precursors === For several millennia before the onset of industrialization, design, technical expertise, and manufacturing were often carried out by individual craftspeople, who determined the form of a product at the point of its creation, according to their own manual skill, the requirements of their clients, experience accumulated through their own experimentation, and knowledge passed on to them through training or apprenticeship. The division of labour that underlies the practice of industrial design did have precedents in the pre-industrial era. The growth of trade in the medieval period led to the emergence of large workshops in cities such as Florence, Venice, Nuremberg, and Bruges, where groups of more specialized craftsmen made objects with common forms through the repetitive duplication of models defined by their shared training and technique. Competitive pressures in the early 16th century led to the emergence in Italy and Germany of pattern books: collections of engravings illustrating decorative forms and motifs which could be applied to a wide range of products, and whose creation took place in advance of their application. The use of drawing to specify how something was to be constructed later was first developed by architects and shipwrights during the Italian Renaissance. In the 17th century, the growth of artistic patronage in centralized monarchical states such as France led to large government-operated manufacturing operations epitomized by the Gobelins Manufactory, opened in Paris in 1667 by Louis XIV. Here teams of hundreds of craftsmen, including specialist artists, decorators and engravers, produced sumptuously decorated products ranging from tapestries and furniture to metalwork and coaches, all under the creative supervision of the King's leading artist Charles Le Brun.
This pattern of large-scale royal patronage was repeated in the court porcelain factories of the early 18th century, such as the Meissen porcelain workshops established in 1709 by the Grand Duke of Saxony, where patterns from a range of sources, including court goldsmiths, sculptors, and engravers, were used as models for the vessels and figurines for which it became famous. As long as reproduction remained craft-based, however, the form and artistic quality of the product remained in the hands of the individual craftsman, and tended to decline as the scale of production increased. === Birth of industrial design === The emergence of industrial design is specifically linked to the growth of industrialization and mechanization that began with the Industrial Revolution in Great Britain in the mid 18th century. The rise of industrial manufacture changed the way objects were made, urbanization changed patterns of consumption, the growth of empires broadened tastes and diversified markets, and the emergence of a wider middle class created demand for fashionable styles from a much larger and more heterogeneous population. The first use of the term "industrial design" is often attributed to the industrial designer Joseph Claude Sinel in 1919 (although he himself denied this in interviews), but the discipline predates 1919 by at least a decade. Christopher Dresser is considered among the first independent industrial designers. Industrial design's origins lie in the industrialization of consumer products. For instance, the Deutscher Werkbund (a precursor to the Bauhaus founded in 1907 by Peter Behrens and others) was a state-sponsored effort to integrate traditional crafts and industrial mass-production techniques, to put Germany on a competitive footing with Great Britain and the United States. The earliest published use of the term may have been in The Art-Union, 15 September 1840. Dyce's Report to the Board of Trade, on Foreign Schools of Design for Manufactures. Mr. Dyce's official visit to France, Prussia, and Bavaria, for the purpose of examining the state of schools of design in those countries, will be fresh in the recollection of our readers. His report on this subject was ordered to be printed some few months since, on the motion of Mr. Hume; and it is the sum and substance of this Report that we are now about to lay before our own especial portion of the reading public. The school of St. Peter, at Lyons, was founded about 1750, for the instruction of draftsmen employed in preparing patterns for the silk manufacture. It has been much more successful than the Paris school; and having been disorganized by the revolution, was restored by Napoleon and differently constituted, being then erected into an Academy of Fine Art: to which the study of design for silk manufacture was merely attached as a subordinate branch. It appears that all the students who entered the school commence as if they were intended for artists in the higher sense of the word and are not expected to decide as to whether they will devote themselves to the Fine Arts or to Industrial Design, until they have completed their exercises in drawing and painting of the figure from the antique and from the living model. It is for this reason, and from the fact that artists for industrial purposes are both well-paid and highly considered (as being well-instructed men), that so many individuals in France engage themselves in both pursuits. 
The Practical Draughtsman's Book of Industrial Design by Jacques-Eugène Armengaud was printed in 1853. The subtitle of the (translated) work explains that it offers a "complete course of mechanical, engineering, and architectural drawing." The study of those types of technical drawing, according to Armengaud, belongs to the field of industrial design. This work paved the way for a major expansion of drawing education in France, the United Kingdom, and the United States. Robert Lepper helped to establish one of the USA's first industrial design degree programs in 1934 at Carnegie Institute of Technology. == Education == Product design and industrial design overlap in the fields of user interface design, information design, and interaction design. Various schools of industrial design specialize in one of these aspects, ranging from pure art colleges and design schools (product styling), to mixed programs of engineering and design, to related disciplines such as exhibit design and interior design, to schools that almost completely subordinate aesthetic design to concerns of usage and ergonomics, the so-called functionalist school. Except for certain functional areas of overlap between industrial design and engineering design, the former is considered an applied art while the latter is an applied science. Educational programs in the U.S. for engineering require accreditation by the Accreditation Board for Engineering and Technology (ABET), in contrast to programs for industrial design, which are accredited by the National Association of Schools of Art and Design (NASAD). Engineering education requires extensive training in mathematics and the physical sciences, which is not typically required in industrial design education. === Institutions === Most industrial designers complete a design or related program at a vocational school or university. Relevant programs include graphic design, interior design, industrial design, architectural technology, and drafting. Diplomas and degrees in industrial design are offered at vocational schools and universities worldwide and take two to four years of study. The study results in a Bachelor of Industrial Design (B.I.D.), Bachelor of Science (B.Sc.) or Bachelor of Fine Arts (B.F.A.). Afterwards, the bachelor's programme can be extended to postgraduate degrees such as a Master of Design, Master of Fine Arts, Master of Arts, or Master of Science. == Definition == Industrial design studies function and form—and the connection between product, user, and environment. Generally, industrial design professionals work in small-scale design rather than the overall design of complex systems such as buildings or ships. Industrial designers do not usually design the motors, electrical circuits, or gearing that make machines move, but they may affect technical aspects through usability design and form relationships. Usually, they work with other professionals such as engineers, who focus on the mechanical and other functional aspects of the product, assuring functionality and manufacturability, and with marketers to identify and fulfill customer needs and expectations. Design itself is often difficult to describe to non-designers because the meaning accepted by the design community is not made of words. Instead, the definition is created as a result of acquiring a critical framework for the analysis and creation of artifacts.
One of the many accepted (but intentionally unspecific) definitions of design originates from Carnegie Mellon's School of Design: "Everyone designs who devises courses of action aimed at changing existing situations into preferred ones." This applies to new artifacts, whose existing state is undefined, and previously created artifacts, whose state stands to be improved. Industrial design can overlap significantly with engineering design, and in different countries the boundaries of the two concepts can vary, but in general engineering focuses principally on functionality or utility of products, whereas industrial design focuses principally on aesthetic and user-interface aspects of products. In many jurisdictions this distinction is effectively defined by credentials and/or licensure required to engage in the practice of engineering. "Industrial design" as such does not overlap much with the engineering sub-discipline of industrial engineering, except for the latter's sub-specialty of ergonomics. At the 29th General Assembly in Gwangju, South Korea, 2015, the Professional Practise Committee unveiled a renewed definition of industrial design as follows: "Industrial Design is a strategic problem-solving process that drives innovation, builds business success and leads to a better quality of life through innovative products, systems, services and experiences." An extended version of this definition is as follows: "Industrial Design is a strategic problem-solving process that drives innovation, builds business success and leads to a better quality of life through innovative products, systems, services and experiences. Industrial Design bridges the gap between what is and what's possible. It is a trans-disciplinary profession that harnesses creativity to resolve problems and co-create solutions with the intent of making a product, system, service, experience or a business, better. At its heart, Industrial Design provides a more optimistic way of looking at the future by reframing problems as opportunities. It links innovation, technology, research, business and customers to provide new value and competitive advantage across economic, social and environmental spheres. Industrial Designers place the human in the centre of the process. They acquire a deep understanding of user needs through empathy and apply a pragmatic, user centric problem solving process to design products, systems, services and experiences. They are strategic stakeholders in the innovation process and are uniquely positioned to bridge varied professional disciplines and business interests. They value the economic, social and environmental impact of their work and their contribution towards co-creating a better quality of life. " == Design process == Although the process of design may be considered 'creative,' many analytical processes also take place. In fact, many industrial designers often use various design methodologies in their creative process. Some of the processes that are commonly used are user research, sketching, comparative product research, model making, prototyping and testing. The design process is iterative, involving dozens or even hundreds of ideas being considered until the final design is reached. Industrial designers often utilize 3D software, computer-aided industrial design and CAD programs to move from concept to production. They may also build a prototype or scaled down sketch models through a 3D printing process or using other materials such as paper, balsa wood, various foams, or clay for modeling. 
They may then use industrial CT scanning to test for interior defects and generate a CAD model. From this the manufacturing process may be modified to improve the product. Product characteristics specified by industrial designers may include the overall form of the object, the location of details with respect to one another, colors, texture, form, and aspects concerning the use of the product. Additionally, they may specify aspects concerning the production process, choice of materials and the way the product is presented to the consumer at the point of sale. The inclusion of industrial designers in a product development process may lead to added value by improving usability, lowering production costs, and developing more appealing products. Industrial design may also focus on technical concepts, products, and processes. In addition to aesthetics, human factors, ergonomics and anthropometrics, it can also encompass engineering, usefulness, market placement, and other concerns—such as psychology, and the emotional attachment of the user. These values and accompanying aspects that form the basis of industrial design can vary—between different schools of thought, and among practicing designers. == Industrial design rights == Industrial design rights are intellectual property rights that make exclusive the visual design of objects that are not purely utilitarian. A design patent would also be considered under this category. An industrial design consists of the creation of a shape, configuration or composition of pattern or color, or combination of pattern and color in three-dimensional form containing aesthetic value. An industrial design can be a two- or three-dimensional pattern used to produce a product, industrial commodity or handicraft. Under the Hague Agreement Concerning the International Deposit of Industrial Designs, a WIPO-administered treaty, a procedure for an international registration exists. An applicant can file for a single international deposit with WIPO or with the national office in a country party to the treaty. The design will then be protected in as many member countries of the treaty as desired. In 2022, about 1.1 million industrial design applications were filed worldwide. This represents a decrease of 3% on 2021, marking a first drop in filings since 2014. In 2023, the number of applications rose again, with about 1.19 million design applications filed. === Hague top applicants === The Hague System for the International Registration of Industrial Designs provides an international mechanism that secures protection of up to 100 designs in multiple countries or regions, through a single international application. International design applications are filed directly through WIPO using the WIPO Hague System. The domestic legal framework of each designated contracting party governs the design protection provided by the resulting international registrations. The Hague System does not require the applicant to file a national or regional design application. == Examples of industrial design == A number of industrial designers have made such a significant impact on culture and daily life that their work is documented by historians of social science. Alvar Aalto, renowned as an architect, also designed a significant number of household items, such as chairs, stools, lamps, a tea-cart, and vases. 
Raymond Loewy was a prolific American designer who is responsible for the Royal Dutch Shell corporate logo, the original BP logo (in use until 2000), the PRR S1 steam locomotive, the Studebaker Starlight (including the later bulletnose), as well as Schick electric razors, Electrolux refrigerators, short-wave radios, Le Creuset French ovens, and a complete line of modern furniture, among many other items. Richard Teague, who spent most of his career with the American Motors Corporation, originated the concept of using interchangeable body panels so as to create a wide array of different vehicles using the same stampings. He was responsible for such unique automotive designs as the Pacer, Gremlin, Matador coupe, Jeep Cherokee, and the complete interior of the Eagle Premier. Milwaukee's Brooks Stevens was best known for his Milwaukee Road Skytop Lounge car and Oscar Mayer Wienermobile designs, among others. Viktor Schreckengost designed bicycles manufactured by Murray for both Murray and Sears, Roebuck and Company. With engineer Ray Spiller, he designed the first truck with a cab-over-engine configuration, a design in use to this day. Schreckengost also founded The Cleveland Institute of Art's school of industrial design. Oskar Barnack was a German optical engineer, precision mechanic, industrial designer, and the father of 35mm photography. He developed the Leica, which became the hallmark for photography for 50 years, and remains a high-water mark for mechanical and optical design. Charles and Ray Eames were most famous for their pioneering furniture designs, such as the Eames Lounge Chair Wood and Eames Lounge Chair. Other influential designers included Henry Dreyfuss, Eliot Noyes, John Vassos, and Russel Wright. Dieter Rams is a German industrial designer closely associated with the consumer products company Braun and the Functionalist school of industrial design. German industrial designer Luigi Colani, who designed cars for automobile manufacturers including Fiat, Alfa Romeo, Lancia, Volkswagen, and BMW, was also known to the general public for his unconventional approach to industrial design. His work expanded into numerous areas, ranging from mundane household items, instruments and furniture to trucks, uniforms and entire rooms. A grand piano created by Colani, the Pegasus, is manufactured and sold by the Schimmel piano company. Many of Apple's recent products were designed by Sir Jonathan Ive. == See also == == References == === Sources === Barnwell, Maurice. Design, Creativity and Culture, Black Dog, 2011, ISBN 978 1 907317 408 Barnwell, Maurice. Design Evolution: Big Bang to Big Data, Toronto, 2014. ISBN 978-0-9937396-0-6 Baynes, Ken (1991). "Forms of Representation". In Pirovano, Carlo (ed.). History of Industrial Design. Vol. 1. Milan: Electa. pp. 108–127. OCLC 32885051. Benton, Charlotte (2000). "Design and Industry". In Kemp, Martin (ed.). The Oxford History of Western Art. Oxford: Oxford University Press. pp. 380–383. ISBN 0198600127. Forty, Adrian. Objects of Desire: Design and Society Since 1750. Thames & Hudson, May 1992. ISBN 978-0-500-27412-5 Heskett, John (1980). Industrial Design. Thames & Hudson. ISBN 0500201811. Kirkham, Pat (1999). Industrial design. Grove Art Online. Oxford University Press. Mayall, WH, Industrial Design for Engineers, London: Iliffe Books, 1967, ISBN 978-0592042053 Mayall, WH, Machines and Perception in Industrial Design, London: Studio Vista, 1968, ISBN 978-0289279168 Meikle, Jeffrey.
Twentieth Century Limited: Industrial Design engineering in America, 1925 - 1939, Philadelphia: Temple University Press, 1979 ISBN 978-0877222460 Noblet, Jocelyn de (1993). "Design in Progress". In Noblet, Jocelyn de (ed.). Industrial design: reflection of a century. Paris: Flammarion/APCI. pp. 21–25. ISBN 2080135392. Pulos, Arthur (1983). American Design Ethic: A History of Industrial Design. MIT Press. ISBN 9780262368100. == External links == Doodles, Drafts and Designs: Industrial Drawings from the Smithsonian (2004) Smithsonian Institution Libraries Hague Yearly Review related to industrial design Hague applications
Wikipedia/Industrial_design
Quality by design (QbD) is a concept first outlined by quality expert Joseph M. Juran in publications, most notably Juran on Quality by Design. Designing for quality and innovation is one of the three universal processes of the Juran Trilogy, in which Juran describes what is required to achieve breakthroughs in new products, services, and processes. Juran believed that quality could be planned, and that most quality crises and problems relate to the way in which quality was planned. While quality by design principles have been used to advance product and process quality in industry, and particularly the automotive industry, they have also been adopted by the U.S. Food and Drug Administration (FDA) for the discovery, development, and manufacture of drugs. == Juran on quality by design == The Juran Trilogy defines the word "quality" as having two meanings: first, the presence of features that create customer satisfaction; second, the reliability of those features. Failures in features create dissatisfactions, so removing failures is the purpose of quality improvement, while creating features is the purpose of quality by design. Juran's process seeks to create features in response to understanding customer needs. These are customer-driven features. The sum of all features is the new product, service, or process. The quality by design model consists of the following steps: Establish the project design targets and goals. Define the market and customers that will be targeted. Discover the market, customers, and societal needs. Develop the features of the new design that will meet the needs. Develop or redevelop the processes to produce the features. Develop process controls to be able to transfer the new designs to operations. It is not a statistical design method like Design for Six Sigma. === Integrated planning === Integrated planning requires a team with a leader whose sole accountability is for the total success of the new product from defining the opportunity through customer purchase, use, service, and recommendation to others. This team leader reports directly to a senior executive, or the team leader can be a senior executive. Each team member's job is to ensure the success of the new product. In addition to organizational integration, a successful team must begin with clearly articulated common goals for the product that are measurable and authorized by the enterprise. These goals must, at a minimum, cover such elements as: The customers or customer segments to be served by the new product The relative and absolute quality goals The volume of sales or revenue to be generated in an initial time period and for the long run Market share, penetration, or sales relative to key competitors The release date The team will follow a structured process. The structure is the common framework for all participants in launching the new product and helps ensure success. === Customer-focused optimization === Quality by design starts and ends with the customer. Every new product introduction has some amount of trade-off involved. If there are multiple customers, they may have conflicting needs. Even the same customer may have needs that compete with each other. Capacity and speed compete with cost of operation. Capacity can compete with speed. Flexibility and feature-rich offerings may have reduced ease of use, and so on. Quality by design offers a range of tools and methods intended to make these tradeoffs explicit and optimal for the customer. 
Some tools are highly mathematical, and others relate more to customer behavior. Quality by design sets strong expectations for creative approaches to functional design, product features and goals, and production design. === Control over variation and transfer to operations === Quality by design incorporates modern tools to preemptively control variation. These tools and methods begin by measuring and understanding existing variation, using historical data, testing, and modeling to forecast, analyze, and eliminate the deleterious effects of variation with standard statistical techniques. Process control consists of three basic activities: Evaluate the actual performance of the process Compare actual performance with goals Take action on the difference The final activity of the quality by design process is to implement the plan and validate that the transfer has occurred. == Pharmaceutical quality by design == The FDA imperative is outlined in its report "Pharmaceutical Quality for the 21st Century: A Risk-Based Approach." In the past few years, the agency has implemented the concepts of QbD into its pre-market processes. The focus of this concept is that quality should be built into a product with an understanding of the product and the process by which it is developed and manufactured, along with a knowledge of the risks involved in manufacturing the product and how best to mitigate those risks. This is a successor to the "quality by QC" (or "quality after design") approach that companies had taken up until the 1990s. The QbD initiative, which originated from the Office of Biotechnology Products (OBP), attempts to provide guidance on pharmaceutical development to facilitate the design of products and processes that maximize the product's efficacy and safety profile while enhancing product manufacturability. === QbD activities within FDA === The following activities are guiding the implementation of QbD: In FDA's Office of New Drug Quality Assessment (ONDQA), a new risk-based pharmaceutical quality assessment system (PQAS) was established based on the application of product and process understanding. Implementation of a pilot program to allow manufacturers in the pharmaceutical industry to submit information for a new drug application demonstrating use of QbD principles, product knowledge, and process understanding. In 2006, Merck & Co.'s Januvia became the first product approved based upon such an application. Implementation of a Question-based Review (QbR) Process has occurred in CDER's Office of Generic Drugs. CDER's Office of Compliance has played a role in complementing the QbD initiative by optimizing pre-approval inspection processes to evaluate commercial process feasibility and to determine whether a state of process control is maintained throughout the lifecycle, in accord with the ICH Q10 lifecycle Quality System. The first QbD approval, including a design space, for a Biologic License Application (BLA) was Gazyva (Roche). While QbD will provide better design predictions, there is also a recognition that industrial scale-up and commercial manufacturing experience provide knowledge about the process and the raw materials used therein. FDA's release of the Process Validation guidance in January 2011 notes the need for companies to continue benefiting from knowledge gained, and to continually improve throughout the process lifecycle by making adaptations to assure that root causes of manufacturing problems are corrected.
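As a minimal sketch of the "evaluate, compare, act" control activities described above, the following Python snippet monitors hypothetical assay measurements against three-sigma control limits derived from historical (baseline) data; the data, limit rule, and function names are illustrative assumptions, not values prescribed by Juran, the FDA, or the ICH guidelines.

    # Hypothetical illustration of the evaluate/compare/act control loop.
    from statistics import mean, stdev

    def control_limits(baseline):
        """Derive a centre line and illustrative three-sigma limits from historical data."""
        centre = mean(baseline)
        sigma = stdev(baseline)
        return centre - 3 * sigma, centre, centre + 3 * sigma

    def evaluate_batch(measurements, baseline):
        """Evaluate actual performance, compare it with the limits, list points needing action."""
        lower, _centre, upper = control_limits(baseline)
        return [(i, value) for i, value in enumerate(measurements)
                if value < lower or value > upper]

    # Invented historical assay results and a new production run.
    baseline = [99.8, 100.1, 100.3, 99.9, 100.0, 100.2, 99.7, 100.1]
    new_run = [100.0, 99.9, 101.9, 100.2]
    print(evaluate_batch(new_run, baseline))  # flags the out-of-limit third point: [(2, 101.9)]

In a real pharmaceutical setting the "take action" step would feed a documented deviation and root-cause process rather than a simple printout.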
=== ICH activities === Working with regulators in the European Union (the European Medicines Agency) and Japan, the FDA has furthered quality by design objectives through the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use. The ICH Guidelines Q8 through Q11 encapsulate these unified recommendations and provide some assistance for manufacturers to implement quality by design into their own operations. ICH Guideline Q8 describes QbD-based drug formulation development and was first published in 2004, being subsequently revised in 2008 (Q8(R2)). The ICH Guideline Q9 describes Quality Risk Management plans, Q10 explains Pharmaceutical Quality Systems, and Q11 refer to the development of active pharmacological substances including biologicals. In November 2017, the ICH issued Guideline Q12 for public consultation to extend the recommendations for the Product Lifecycle Management Plan that were initially defined in the Guideline Q10. According to the ICH, Guideline Q13 will extend the previous guidelines to accommodate continuous pharmaceutical manufacturing and Q2 (Analytical Validation) will be revised and extended into the guideline Q2(R2)/Q14 to include Analytical quality by design or AQbD. The ICH Steering Committee meets twice a year to discuss the progress of its efforts. This practical input should help ensure that quality risk management and knowledge management are used to make lifecycle adaptations that maintain process control and product quality. == See also == Laboratory quality control Quality control == Further reading == Godfrey, A. Blanton; Kenett, Ron S. (2007). "Joseph M. Juran, a perspective on past contributions and future impact". Quality and Reliability Engineering International. 23 (6): 653–663. doi:10.1002/qre.861. S2CID 23806604. Kenett, Ron S.; Kenett, Dan A. (2008). "Quality by Design applications in biosimilar pharmaceutical products". Accreditation and Quality Assurance. 13 (12): 681–690. doi:10.1007/s00769-008-0459-6. S2CID 110606284. == References == == External links == Implementing Quality by Design, by Helen Winkle, FDA Implementation of QbD Principles in CMC Review, by Chi-Wan Chen, PhD, Deputy Director, Office of New Drug Chemistry Quality-by-Design Case Studies in Pharmaceuticals and Biologics Juran.com
Wikipedia/Quality_by_design
Computer Aided Industrial Design (CAID) is a subset of computer-aided design (CAD) software that can assist in creating the look-and-feel or industrial design aspects of a product in development. CAID programs tend to provide designers with improved freedom of creativity compared to typical CAD tools. However, a typical workflow may follow a simple design methodology as follows: Creating sketches, using a stylus Generating curves directly from the sketch Generating surfaces directly from the curves The end result is generally a 3D model that represents the intent the designer had in mind for the physical product. Such models can then be saved in formats for more convenient exchange with others (such as OBJ for virtual viewing in 3D graphics programs) or manufacturing (such as STL to create a real-life model via a rapid prototyping machine). CAID helps the designer focus on the technical aspect of the design methodology rather than the sketching and modelling aspects, contributing to the selection of a better product proposal in less time. When product pre-requisites and parameters have been more completely defined, output from the CAID software can be imported into a CAD program for pre-production testing, adjustment, and generation of technical drawings and manufacturing data such as CNC tool-paths. CAID is far more conceptual and less technically focused than CAD. CAID programs tend to offer more tools that allow a designer to freely express themselves with more organic shapes and complex curves, whilst CAD software tends to be more focused on tools for the simple curves and straight lines more suitable for easy manufacturing. CAD implementations have evolved dramatically since initial 3D offerings in the 1970s, which were typically limited to producing drawings similar to hand-drafted output. Advances in programming and computer hardware, notably solid modelling in the 1980s, have allowed more versatile applications of computers in design activities. == See also == Industrial Design Freeform surface modelling Class A Surfaces Production Injection molding Automobile design AliasStudio, ICEM Surf, NX Shape Studio, CATIA Shape Design and Styling, SolidThinking, Rhinoceros (examples of CAID software) == References == == External links == ASIS proceedings on CAID ACM Siggraph abstract on CAID
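As a rough illustration of the "curves to surfaces to exchange format" steps of the workflow above, the sketch below samples two hypothetical profile curves, lofts a simple triangulated surface between them, and writes the result as an ASCII STL file suitable for a rapid prototyping machine; the geometry and file handling are deliberately simplified and do not reflect the internals of any particular CAID package.

    # Simplified sketch: profile curves -> lofted surface -> ASCII STL export.
    import math

    def circle(radius, z, samples=24):
        """Sample a closed profile curve as (x, y, z) points (a stand-in for sketched curves)."""
        return [(radius * math.cos(2 * math.pi * i / samples),
                 radius * math.sin(2 * math.pi * i / samples), z)
                for i in range(samples)]

    def loft(curve_a, curve_b):
        """Join two equally sampled curves into a strip of triangles (a crude lofted surface)."""
        triangles = []
        n = len(curve_a)
        for i in range(n):
            j = (i + 1) % n
            triangles.append((curve_a[i], curve_a[j], curve_b[i]))
            triangles.append((curve_b[i], curve_a[j], curve_b[j]))
        return triangles

    def write_ascii_stl(path, triangles, name="caid_sketch"):
        """Write triangles in the ASCII STL layout; facet normals are left at zero for simplicity."""
        with open(path, "w") as f:
            f.write(f"solid {name}\n")
            for a, b, c in triangles:
                f.write("  facet normal 0 0 0\n    outer loop\n")
                for x, y, z in (a, b, c):
                    f.write(f"      vertex {x:.6f} {y:.6f} {z:.6f}\n")
                f.write("    endloop\n  endfacet\n")
            f.write(f"endsolid {name}\n")

    # Loft a tapered shell between two invented profile circles and export it for printing.
    write_ascii_stl("shell.stl", loft(circle(40.0, 0.0), circle(25.0, 60.0)))

Real CAID tools work with NURBS curves and surfaces rather than sampled polylines, but the overall pipeline of deriving surfaces from curves and exporting a neutral format is the same.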
Wikipedia/Computer-aided_industrial_design
Design–build (or design/build, and abbreviated D–B or D/B accordingly), also known as alternative delivery, is a project delivery system used in the construction industry. It is a method to deliver a project in which the design and construction services are contracted by a single entity known as the design–builder or design–build contractor. It can be subdivided into architect-led design–build (ALDB, sometimes known as designer-led design–build) and contractor-led design–build. In contrast to "design–bid–build" (or "design–tender"), design–build relies on a single point of responsibility contract and is used to minimize risks for the project owner and to reduce the delivery schedule by overlapping the design phase and construction phase of a project. Design–build also has a single point responsibility. The design-build contractor is responsible for all work on the project, so the client can seek legal remedies for any fault from one party. The traditional approach for construction projects consists of the appointment of a designer on one side, and the appointment of a contractor on the other side. The design–build procurement route changes the traditional sequence of work. It answers the client's wishes for a single point of responsibility in an attempt to reduce risks and overall costs. Although the use of subcontractors to complete more specialized work is common, the design-build contractor remains the primary contact and primary force behind the work. It is now commonly used in many countries and forms of contracts are widely available. Design–build is sometimes compared to the "master builder" approach, one of the oldest forms of construction procedure. Comparing design–build to the traditional method of procurement, the authors of Design-build Contracting Handbook noted that: "from a historical perspective the so-called traditional approach is actually a very recent concept, only being in use approximately 150 years. In contrast, the design–build concept—also known as the "master builder" concept—has been reported as being in use for over four millennia." Although the Design-Build Institute of America (DBIA) takes the position that design–build can be led by a contractor, a designer, a developer or a joint venture, as long as a design–build entity holds a single contract for both design and construction, some architects have suggested that architect-led design–build is a specific approach to design–build. Design-build plays an important role in pedagogy, both at universities and in independently organised events such as Rural Studio or ArchiCamp. == Design–build contractor == The "design–builder" is often a general contractor, but in many cases a project is led by a design professional (architect, engineer, architectural technologist or other professional designers). Some design–build firms employ professionals from both the design and construction sector. Where the design–builder is a general contractor, the designers are typically retained directly by the contractor. Partnership or a joint venture between a design firm and a construction firm may be created on a long-term basis or for one project only. Until 1979, the AIA American Institute of Architects' code of ethics and professional conduct prohibited their members from providing construction services. However today many architects in the United States and elsewhere aspire to provide integrated design and construction services, and one approach towards this goal is design–build. 
The AIA has acknowledged that design–build is becoming one of the main approaches to construction. In 2003, the AIA endorsed "The architect's guide to design–build services", which was written to help its members act as design–build contractors. This publication gives guidance through the different phases of the process: design services, contracts, management, insurance, and finances. == Contractor-led design–build projects: the architect's role == On contractor-led design–build projects, management is structured so that the owner works directly with a contractor who, in turn, coordinates subcontractors. Architects contribute to contractor-led design–build projects in one of several ways, with varying degrees of responsibility (where "A/E" represents the architect/engineer): Architect as employee of contractor: The architect works for the contractor as an in-house employee. The architect still bears professional risk and is likely to have less control than in other contractor-led design–build approaches. Architect as a subcontractor: Here, the architect is one of the many subcontractors on the team led by the contractor. The architect bears similar professional risk but still with little control. Architect as second party in contractor-led integrated project delivery (IPD): The architect and contractor work together in a joint venture, both coordinating the subcontractors to get the project built. The building owner has a single contract with this joint venture. The contractor leads the joint venture, so in supervising the subcontractors the architect might defer to the contractor. The architect bears the same risk as they do in the traditional approach but has more control in IPD, even if they were to defer to the contractor. == Architect-led design–build projects == Architect-led design–build projects are those in which interdisciplinary teams of architects and building trades professionals collaborate in an agile management process, where design strategy and construction expertise are seamlessly integrated, and the architect, as owner-advocate, project-steward and team-leader, ensures high fidelity between project aims and outcomes. In architect-led design–build projects, the architect works directly with the owner (the client) and acts as the designer and builder, coordinating a team of consultants, subcontractors and materials suppliers throughout the project lifecycle. Architects lead design–build projects in several ways, with varying degrees of responsibility (where "A/E" represents the architect/engineer): Architect as provider of extended services: Contracted to the owner, the architect extends his or her services beyond the design phase, taking responsibility for managing the subcontractors on behalf of the owner. The architect bears similar risk but has more control over the project than in the traditional approach or on contractor-led design–build projects. Architect as primary party in architect-led integrated project delivery (IPD): Again, the architect and contractor work together in a joint venture, both coordinating the subcontractors to get the project built. Again, the building owner has a single contract with this joint venture. This time, the architect leads the joint venture, so in supervising the subcontractors the contractor might defer to the architect. The architect might bear more risk than they do in the traditional approach, but risk is shared with the owner and the contractor, as outlined in their agreement.
An alternative approach to effectuating this delivery structure is for the architect to contract directly with the owner to design and build the project, and then to subcontract the procurement and construction responsibilities to its allied general contractor, who enters into further subcontracts with the trades. This is a difference in form, rather than in substance, because the business and legal terms of the agreement between the architect and the general contractor may be the same regardless of whether they are characterized as a joint venture or as a subcontract. It is the "flip side of the coin" of the contractor-led approach described above in which the general contractor subcontracts the design to the architect. Architect as full service leader of design build process: Contracted to the owner, the architect offers full service to the owner, taking responsibility for managing the subcontractors, consultants and vendors, and involving them throughout the project, start to finish, from design through construction. The architect's role shifts during the project, from designer to site supervisor (effectively taking the role of a general contractor), but monitors the project vision, and is able to call upon subcontractors' construction expertise throughout. The architect bears the greatest risk but also has more control over the project than in either the traditional approach, or in the contractor-led and other architect-led design–build projects. == Contracts == A single set of integrated contracts combining design and construction responsibilities, rather than two discrete contracts for each, acknowledges the interdependence of the architects' and construction trades' project responsibilities, and reduces the likelihood of disputes. == Design–build institutes == In 1993, the Design-Build Institute of America (DBIA) was formed. Its membership is composed of design and construction industry professionals as well as project owners. DBIA promotes the value of design–build project delivery and teaches the effective integration of design and construction services to ensure success for owners and design and construction practitioners. The Design-Build Institute of America is an organization that defines, teaches and promotes best practices in design–build. The Canadian Design-Build Institute (CDBI) describes itself as "The recognized voice of Design-Build practitioners in Canada, promoting and enhancing the proper use of Design-Build method of procurement and contracting". === Advocacy === Not all design–build projects are alike. Here, there is a distinction between design–build projects led by contractors and those led by architects. Architect-led Design Build is a form of 'design–build' that, according to the DBIA, has been rapidly gaining market share in the United States over the past 15 years. The Design Build Institute of America describes the design–build process as follows: Taking singular responsibility, the design–build team is accountable for cost, schedule and performance, under a single contract and with reduced administrative paperwork, clients can focus on the project rather than managing disparate contracts. And, by closing warranty gaps, building owners also virtually eliminate litigation claims. The DBIA's 2005 chart shows the uptake of design–build methods in non-residential design and construction in the United States. Architect-led design–build is sometimes known by the more generic name "designer-led design–build". 
Although employed primarily by architects, architectural technologists and other architectural professions, the design–build structure works similarly for interior design projects led by an interior designer who is not an architect, and also for engineering projects where the design–build team is led by a professional structural, civil, mechanical or other engineers. In addition, it is common for the design professional who leads the design–build team to create a separate corporation or similar business entity through which the professional performs the construction and other related non-professional services. In 2011, design–build continued to gain ground as a significant trend in design and construction. In March 2011, industry consultants ZweigWhite published "Design-Bid-Build meets the opposition". In it, they suggest that while Design-Bid-Build "still rules", the traditional approach is losing favor as "alternative project delivery methods threaten [the] design-bid-build model." While not referencing the architect-led design–build approach specifically, the article states that D/B already accounts for 27% of projects, according to their 2010 Project Management Survey and goes on to argue that, The emerging trends in delivery seem to point to a return to the primordial concept of the masterbuilder, as exemplified by D/B and IPD [Integrated Project Delivery]. According to the DBIA, the design–build approach offers advantages to owners, including: "One team, one contract, one unified flow of work from initial concept through completion." == Debate on the merits of design–build vs. design–bid–build == The rise of design–build project delivery has threatened the traditional hierarchies and silos of the design and construction industry. As a result, a debate has emerged over the value of design–build as a method of project delivery. Critics of the design–build approach claim that design–build limits the clients' involvement in the design and allege that contractors often make design decisions outside their area of expertise. They also suggest that a designer—rather than a construction professional—is a better advocate for the client or project owner and/or that by representing different perspectives and remaining in their separate spheres, designers and builders ultimately create better buildings. Proponents of design–build counter that design–build saves time and money for the owner, while providing the opportunity to achieve innovation in the delivered facility. They note that value is added because design-build brings value engineering into the design process at the onset of a project. Design–build allows the contractor, engineers and specialty trade contractors (subcontractors) to propose best-value solutions for various construction elements before the design is complete. Design–build brings all members of a project team together early in the process to identify and address issues of cost, schedule and constructability. Proponents suggest that as a result, design-build alleviates conflict between architects and contractors and reduces owner risk for design errors. They argue that once design is finalized and construction begins, the greatest opportunity to achieve cost savings has already been lost, and the potential for design errors is greater, leading to change orders that create cost growth and schedule delays. Proponents note that design–build allows owners to avoid being placed directly between the architect/engineer and the contractor. 
Under design–bid–build, the owner takes on significant risks because of that position. Design–build places the responsibility for design errors and omissions on the design–builder, relieving the owner of major legal and managerial responsibilities. The burden for these costs and associated risks is transferred to the design–build team. The cost and schedule reduction and decreased litigation associated with design–build project delivery have been demonstrated repeatedly. Research on selecting project delivery systems by Victor Sanvido and Mark Konchar of Pennsylvania State University found that design–build projects are delivered 33.5% faster than projects that are designed and built under separate contracts (design-bid-build). Sanvido and Konchar also showed that design–build projects are constructed 12% faster and have a unit cost that is 6.1% lower than design-bid-build projects. Similar cost and time savings were found in a comparison study of design–build and design-bid-build for the water/wastewater construction industry, a peer-reviewed paper authored by Smith Culp Consulting that will be published in July 2011 by the American Society of Civil Engineers. A benchmarking and claims study by Victor O. Schinnerer, one of the world's largest firms underwriting professional liability and specialty insurance programs, found that, from 1995 to 2004, only 1.3% of claims against A/E firms were made by design–build contractors. Advantages have been summarized as: Efficiency: Typically led by contractors, 'design–build' has evolved as an efficient way to deliver projects primarily where the building project goals are straightforward, either constrained by budget, or where the outcome is prescribed by functional requirements (for example, a highway, sports facility, or brewery). Construction industry commentators have described design–build as a high-performance 'construction project delivery system', a dynamic approach to making buildings that presents an alternative to the traditional design-bid-build approach. Single-source: Design–build is growing because of the advantages of single-source management: unlike traditional design-bid-build, it allows the owner to contract with just one party, who acts as a single point of contact, is responsible for delivering the project, and coordinates the rest of the team. Depending on the phasing of the project, there may be multiple sequential contracts between the owner and the design–builder. The owner benefits because, if something turns out to be wrong with the project, there is a single entity responsible for fixing the problem, rather than a separate designer and constructor each blaming the other.
Rather, the less prescriptive the project, the more the client needs an architect to steward an emergent design from vision to completion. So it follows that, for the broadest range of building projects, the rigors of architect-led design–build are compelling, and preferable where design is of paramount importance to the client. === Recursive knowledge === The process and the knowledge it produces are recursive: subcontractors are engaged early and often in an architect-led design–build project to assess efficiencies, opportunity costs, payback rates and quality options, and their input informs overall design decisions from the outset. Cost-benefit is also a constant consideration that informs design decisions from the outset. Building performance is measured early too, so that trade-offs between budget, schedule, functionality and usability can inform specification and continuous refinement of the design. Architects engaged in this dynamic process understand and keep up to date with the potential of contemporary technology and materials available to building professionals, and translate what they learn into their design work. This knowledge is fed back not just to the specific project; it can be shared with other project teams, throughout a studio, or more broadly across the profession, and can become an active source of insight in and of itself. == Growth of design–build method == A 2011 study analyzing the design–build project delivery method in the United States shows design–build was used on about 40 percent of non-residential construction projects in 2010, a ten percent increase since 2005. The study was commissioned by the Design-Build Institute of America (DBIA) and was completed by RSMeans Reed Construction Data Market Intelligence. A study from the US Department of Transportation claims that: "Design-build delivery has been steadily increasing in the U.S. public building sector for more than 10 years, but it is still termed experimental in transportation. To date, under Special Experimental Project 14 (SEP-14) the FHWA has approved the use of design–build in more than 150 projects, representing just over half of the States. The European countries visited have used design–build delivery for a longer time than the United States and provided the scan team with many valuable insights. The primary lessons learned on this scan tour relate to the types of projects using design–build, the use of best-value selection, percentage of design in the solicitation, design and construction administration, third-party risks, the use of warranties, and the addition of maintenance and operation to design–build contracts."
In Los Angeles, District Attorney Steve Cooley, who investigated the Los Angeles Unified School District's Belmont project, produced a final investigative report, released March 2003. This report concluded that the design–build process caused a number of issues relating to the Belmont scandal: Design–build does not make use of competitive bidding where prospective builders bid on the same design. Criteria to select contractor are subjective and difficult to evaluate and to justify later. The design and price selected arouses public suspicion, true or not. This can lead to loss of public confidence. The design brief is subject to different interpretations from both the client and contractor, creating a conflict of interest. It concluded the "design–build" approach and "mixed-use concept" together caused controversy, uncertainty, and complexity of the Belmont project which helped increase the potential for project failure. While the Belmont investigation cleared the Los Angeles Unified School District of any criminal wrongdoing, the task force recommends strict oversight, including written protocols, a vigorous Office of the Inspector General, and other recommendations if it decides to continue to use the design–build approach. During the period in question, the ex-Superintendent of LAUSD, Ramon C. Cortines, working with the LAUSD Board of Education, whose president is Monica Garcia, actively tried to cut the Office of Inspector General by 75% (compromising on 25%) and subsequently removed the Inspector General Jerry Thornton after he produced critical audits that showed misuse of construction funds. Others have argued that architect-led design–build still does: Typical project management issues (establishing liability, writing contracts, scoping estimates and schedule) or Variation across different states' licensing laws or Conflict of interest and ethical issues It also imposes: Greater business and financial risks associated with architect taking on general contractor responsibilities Changes to the way architects do business, so they Establish a construction company as a separate corporation that signs a separate construction contract, so they are able to insure and simplify liability insurance coverage Either they have, or are able to acquire, the skills of a design–builder Recognize the parties' different incentives Modify how they prepare Contract Documents, relying more on performance specifications than they do currently, to facilitate substitutions for the benefit of the constructor. == Project examples == Examples of contractor-led design–build projects include: Dena'ina Civic & Convention Center, Anchorage, AK, Neeser Construction, Inc.: In 2010, it won the 2010 DBIA Design Build Merit Award for a public sector project over $50 million. Walter Cronkite School of Journalism and Mass Communication: Phoenix, AZ, Ehrlich Architects. In 2009, it won the 2009 DBIA National Design Build Award for a public sector project over $25 million. Federal Law Enforcement Training Center Dormitory, North Charleston, SC, The Korte Company. In 2012, it won the 2012 DBIA Design Build Merit Award for a public sector project over $15 million. 
== See also == Architectural management Design–bid–build == References == == Further reading == "A New Solution For Public Construction Projects: Sequential Designer-Led Design-Build", by Mark Friedlander "Design-Build and Integrated Project Delivery: Narrowing the Gap" American Institute of Architects (AIA), issue 21, August 21, 2009 When Is Hiring Professionals Worth It? The Bottom Line: It Depends.; Architects vs. Contractors vs. Design-Build Firms . . . There Are Several Options and No Easy Answers, by Denise DiFulco, The Washington Post, July 17, 2008 'Design-Build' Trend Sweeps Redo Market; 2-Step Approach Unites Architects, Contractors by Ann Marie Moriarty, The Washington Post, March 27, 2002 == External links == The Design/Build Institute of America The Design/Build Institute of Canada Design-Build Definition
Wikipedia/Design–build
Low-level design (LLD) is a component-level design process that follows a step-by-step refinement process. This process can be used for designing data structures, the required software architecture, source code and, ultimately, performance algorithms. Overall, the data organization may be defined during requirement analysis and then refined during data design work. Following this, each component is specified in detail. The LLD phase is the stage where the actual software components are designed: the logical and functional design is done during the detailed (low-level) design phase, whereas the design of the application structure is developed during the high-level design phase. == Design phase == A design is the ordering of a system that connects individual components; often, the system must also interact with other systems. Design is important to achieve high reliability, low cost, and good maintainability. We can distinguish two types of program design phases: Architectural or high-level design Detailed or low-level design Structured flowcharts and HIPO diagrams typify this class of software design tools, and these provide a high-level overview of a program. The advantages of such a design tool are that it yields a design specification understandable to non-programmers and provides a good pictorial display of the module dependencies. A disadvantage is that it may be difficult for software developers to go from a graphic-oriented representation of software design to implementation. Therefore, it is necessary to provide insight into the algorithmic structure describing procedural steps to facilitate the early stages of software development, generally using Program Design Languages (PDLs). == Purpose == The goal of LLD or a low-level design document (LLDD) is to give the internal logical design of the actual program code. Low-level design is created based on the high-level design. LLD describes the class diagrams, with the methods and relations between classes, and the program specifications. It describes the modules so that the programmer can directly code the program from the document. A good low-level design document, created with proper analysis, makes the program easy to develop. The code can then be developed directly from the low-level design document with minimal debugging and testing. Other advantages include lower cost and easier maintenance.
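To illustrate how a programmer can code directly from a low-level design document, the sketch below assumes a hypothetical LLD entry specifying an "AccountService" module, its methods, parameter and return types, and error behaviour; all names and rules are invented for illustration and are not taken from any real specification.

    # Skeleton coded directly from an invented LLD entry for an account module.
    class InsufficientFunds(Exception):
        """Raised when a withdrawal exceeds the available balance (hypothetical LLD error rule)."""

    class AccountService:
        """Module interface as the hypothetical LLD specifies it."""

        def __init__(self, opening_balance: float = 0.0):
            self._balance = opening_balance  # LLD: private attribute, float

        def deposit(self, amount: float) -> float:
            """LLD: accept a positive amount and return the new balance."""
            if amount <= 0:
                raise ValueError("deposit amount must be positive")
            self._balance += amount
            return self._balance

        def withdraw(self, amount: float) -> float:
            """LLD: reject overdrafts by raising InsufficientFunds."""
            if amount > self._balance:
                raise InsufficientFunds(f"balance is only {self._balance}")
            self._balance -= amount
            return self._balance

    # Because the document fixes the interface, tests can be written against it straight away.
    account = AccountService(100.0)
    assert account.deposit(50.0) == 150.0
    assert account.withdraw(30.0) == 120.0

Because the document pins down signatures and error behaviour in advance, implementation and testing can proceed with minimal debugging, as the article notes.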
Wikipedia/Low-level_design
Design–bid–build (or design/bid/build, and abbreviated D–B–B or D/B/B accordingly), also known as Design–tender (or "design/tender"), traditional method, or hardbid, is a project delivery method in which the agency or owner contracts with separate entities for the design and construction of a project. Design–bid–build is the traditional method for project delivery and differs in several substantial aspects from design–build. There are three main sequential phases to the design–bid–build delivery method: The design phase The bidding (or tender) phase The construction phase == Design phase == In this phase, the owner retains an architect (or consulting engineer for infrastructure works) to design and produce bid documents, including construction drawings and technical specifications, on which various general contractors will in turn bid to construct the project. For building projects, the architect will work with the owner to identify the owner's needs, develop a written program documenting those needs and then produce a conceptual and/or schematic design. This early design is then developed, and the architect will usually bring in other design professionals including a structural engineer, sometimes a civil engineer, mechanical, electrical, and plumbing engineers ("MEP engineers"), a fire protection engineer and often a landscape architect to help complete the construction drawings and technical specifications. The finished bid documents are coordinated by the architect and owner for issuance to general contractors during the bid phase. Design fees are typically between 5-10% of the total project cost. == Bid (or tender) phase == Bidding can be "open", in which any qualified bidder may participate, or "select", in which a limited number of pre-selected contractors are invited to bid. The various general contractors bidding on the project obtain copies of the bid (or tender) documents, and then put them out to multiple subcontractors for bids on sub-components of the project. Sub-components include items such as the concrete work, structural steel frame, electrical systems, HVAC, and landscaping. Questions may arise during the bid (or tender) period, and the architect will typically issue clarifications or corrections to the bid documents in the form of addenda. From these elements, the contractor compiles a complete bid (or "tender price") for submission by the established closing date and time (i.e., bid date). Bids can be based on the quantities of materials in the completed construction (e.g., as in the UK with bills of quantities), the operations needed to build it (e.g., as in operational bills), or simply as a lump sum cost; however, these bid requirements are elucidated within the bid documents. Once bids are received, the architect typically reviews the bids, seeks any clarifications required of the bidders, investigates contractor qualifications, ensures all documentation is in order (including bonding if required), and advises the owner as to the ranking of the bids. If the bids fall in a range acceptable to the owner, the owner and architect discuss the suitability of various bidders and their proposals. The owner is not obligated to accept the lowest bid, and it is customary for other factors including past performance and quality of other work to influence the selection process. However, the project is typically awarded to the general contractor with the lowest bid. 
In the event that all of the bids do not satisfy the needs of the owner, whether for financial reasons or otherwise, the owner may choose to reject all bids. The following options become available to the owner: Re-bid (or re-tender) the construction of the project on a future date when the owner's needs are met, such as when money becomes available and/or construction costs go down. Abandon the project entirely. Issue a work order to have the architect revise the design (sometimes at no cost to the Owner, if previously negotiated), so as to make the project smaller or more efficient, or reduce features or elements of the project to bring the cost down. The revised bid documents can then be issued again for bid (or re-tendered). Select a general contractor, such as the lowest bidder, or an experienced cost estimator to assist the architect with design changes aimed at cost reduction. This process is often referred to as value engineering. The revised bid documents can then be issued again for bid (or re-tendered). == Construction phase == Once the construction of the project has been awarded to the contractor, the bid documents (e.g., approved construction drawings and technical specifications) may not be altered. The necessary permits (for example, a building permit) must be achieved from all jurisdictional authorities in order for the construction process to begin. Should design changes be necessary during construction, whether initiated by the contractor, owner, or as discovered by the architect, the architect may issue sketches or written clarifications. The contractor may be required to document "as built" conditions to the owner. In most instances, nearly every component of a project is supplied and installed by sub-contractors. The general contractor may provide work with its own forces, but it is common for a general contractor to limit its role primarily to managing the construction process and daily activity on a construction site (see also construction management). During the construction phase the architect also acts as the owner's agent to review the progress of the work as it relates to pay requests from the Contractor, and to issue site instructions, change orders (or field orders), or other documentation necessary to facilitate the construction process and certify that the project is built to the approved construction drawings. == Potential problems of design–bid–build == Failure of the design team to retain current familiarity with construction costs, and any potential cost increases during the design phase could cause project delays if the construction documents must be redone to reduce costs. Redesign expense can be disputed should the architect's contract not specifically address the issue of revisions required to reduce costs. Development of a "cheaper is better" mentality amongst the general contractors bidding the project so there is the tendency to seek out the lowest cost sub-contractors in a given market. In strong markets, general contractors will be able to be selective about which projects to bid, but in lean times, the desire for work usually forces the low bidder of each trade to be selected. This usually results in increased risk (for the general contractor) and can also compromise the quality of construction. In the extreme, it can lead to serious disputes involving quality of the final product, or bankruptcy of a sub-contractor who was on the brink of insolvency desperate for work. 
As the general contractor is brought to the team post-design, there is little opportunity for input on effective alternates being presented. Pressures may be exerted on the design and construction teams due to competing interests (e.g., economy versus acceptable quality), which may lead to disputes between the architect and the general contractor, and associated delays in construction. == Benefits of design–bid–build == The design team looks out for the interests of the owner. The design team prepares documents on which all general contractors place bids. With this in mind, the "cheaper is better" argument is rendered invalid since the bids are based on complete documents. Incomplete, incorrect or missed items are usually discovered and addressed during the bid process in the form of addenda. Ensures fairness to potential bidders and improves decision making by the owner by providing a range of potential options. It also identifies new potential contractors. Assists the owner in establishing reasonable prices for the project. Uses competition both in the selection of the architect and the contractor to improve the efficiency and quality for owners. == See also == Architectural services Architectural engineering Civil engineering Construction engineering Construction software Construction management Joint Contracts Tribunal Submittals (construction) Shop drawing Public–private partnership == References ==
Wikipedia/Design–bid–build
Experiential interior design (EID) is the practice of employing experiential or phenomenological values in interior experience design. EID is a human-centered design approach to interior architecture, based on modern environmental psychology, that emphasizes human experiential needs. The notion of EID emphasizes the influence of designed environments on total human experience, including the sensorial, cognitive, emotional, social, and behavioral experiences triggered by environmental cues. One of the key promises of EID is to offer values beyond the functional or mechanical experiences afforded by the environment. == Definition == Cognitive scholars claim that the human mind has a modular structure (as opposed to a single central processor) by which humans evaluate and respond to environmental triggers. This evaluation of the built environment leads to a multifaceted perception of that environment that renders sensorial, emotional, intellectual, pragmatic, and social experiences. These five categories collectively define the experiential values of the environment. Accordingly, EID can be defined as the process of understanding and embedding experiential values in interior design to engage users in a higher level of sensing, thinking, feeling, interacting, and/or doing. == Outcomes == Experiential design can help to improve the user's evaluation and perception of an environment in different settings, such as a retail store. For example, three central feelings that can be targeted by EID are pleasure, arousal, and dominance. These feelings are the result of an environment that has been well designed through the practice of EID. Pleasure refers to the degree of happiness, arousal to the degree of excitement, and dominance to the sense of control. These emotions lead to behavioral responses such as approach (versus avoidance). Approach behavior is a positive attitude toward a place, which results in an intention to stay, explore, affiliate, or interact. == In business literature == Business literature emphasizes the relationship between interior design and customer experience. For example, Schmitt's experiential marketing framework suggests that commercial environments should consider customers' experiential needs (functional, emotional, behavioral, social, and symbolic/lifestyle) in addition to sensorial experiences. EID does not recommend a specific style of design; rather, it emphasizes a design thinking process in which customers' experiential needs are prioritized. Marketing literature has demonstrated that experiential values can differentiate a firm's offerings. EID helps firms provide symbolic meanings, differentiate their brand, and communicate values through a unique (branded) environmental experience. The values associated with this positive experience enhance loyalty and fervent advocacy. == References ==
Wikipedia/Experiential_interior_design
Design for All in the context of information and communications technology (ICT) is the conscious and systematic effort to proactively apply principles, methods and tools to promote universal design in computer-related technologies, including Internet-based technologies, thus avoiding the need for a posteriori adaptations or specialised design. Design for All is design for human diversity (such as that described in relation to diversity in the workplace or business), social inclusion and equality. It should not be conceived of as an effort to advance a single solution for everybody, but as a user-centred approach to providing products that can automatically address the possible range of human abilities, skills, requirements, and preferences. Consequently, the outcome of the design process is not intended to be a singular design, but a design space populated with appropriate alternatives, together with the rationale underlying each alternative, that is, the specific user and usage context characteristics for which each alternative has been designed. Traditionally, accessibility problems have been solved with adaptations, and the use of assistive technology products has been a technical approach to obtaining such adaptations. Universal Access implies the accessibility and usability of information and telecommunications technologies by anyone, at any place and at any time, and their inclusion in any living context. It aims to enable equitable access and active participation of potentially all people in existing and emerging computer-mediated human activities, by developing universally accessible and usable products and services and suitable support functionalities in the environment. These products and services must be capable of accommodating individual user requirements in different contexts of use, independent of location, target machine, or runtime environment. The approach of granting the use of equipment or services is therefore generalized, seeking to give access to the Information Society as such. Citizens are expected to live in environments populated with intelligent objects, where the tasks to be performed and the way of performing them are completely redefined, involving a combination of activities of access to information, interpersonal communication, and environmental control. Citizens must be given the possibility of carrying out these activities easily and pleasantly. For a thorough discussion of the challenges and benefits of Design for All in the context of ICT, see also the EDeAN White Paper (2005) and the "Report on the impact of technological developments on eAccessibility" of the DfA@eInclusion project. == Benefits and challenges == The European Commission Communication on e-Accessibility identified a core of practical challenges, as well as market, legal and policy issues, towards improving eAccessibility and e-Inclusion in Europe, and elaborated a three-fold approach based on: accessibility requirements in public procurement, accessibility certification, and better use of existing legislation. In that respect, the challenges that need to be addressed include: the introduction of specific legislative measures to complement and enhance existing legislation, addressing and motivating the industry, effective benchmarking, providing harmonised standardisation, the creation of a curriculum for DfA, and addressing future research activities. == Legislative and regulative background == The present policy context of accessibility in the Information Society in Europe is the i2010 initiative. 
The "i2010 – A European Information Society for growth and employment" initiative was launched by the European Commission as a framework for addressing the main challenges and developments in the information society and media sectors up to 2010. It promotes an open and competitive digital economy and emphasises ICT as a driver of inclusion and quality of life. The initiative contains a range of EU policy instruments to encourage the development of the digital economy, such as regulatory instruments, research and partnerships with stakeholders. === Equality and non-discrimination === The goal of the European Union Disability Strategy is a society that is open and accessible to all. The barriers need to be identified and removed. The European Union Disability Strategy has three main focuses: co-operation between the Commission and the Member States, full participation of people with disabilities, and mainstreaming disability in policy formulation. Non-discrimination is also one of the general principles of the "Convention on the Rights of Persons with Disabilities", adopted by the United Nations General Assembly on 13 December 2006 and was opened for signatures on 30 March 2007. === Telecommunications and information society === There is a long tradition of European legislation with regard to telecommunications. In 2002, the European Union adopted a new regulatory framework for electronic communications networks and services, covering all forms of fixed and wireless telecoms, data transmission and broadcasting. From a Design for All perspective, the most important Directives are the Directive on a common regulatory framework and the Directive on universal service and users' rights relating to electronic communications networks and services (Universal Service Directive). === Public procurement === Public procurement is an important economic force, and therefore it is an important tool to promote accessibility. The legislative package of public procurement Directives, approved in 2004 by the European Parliament and the EU's Council of Ministers, will help simplify and modernize procurement procedures. The new directives make it possible to take accessibility needs into account at several stages of a procurement process. It is most convenient to refer to standards when making technical specifications. There are already many CEN, ETSI and ITU standards which can be used for this purpose and many sources which can be useful in practice. Likewise, guidelines like the WAI guidelines, for example, or national guidelines have been used. In the future it will be easier to find suitable standards. Mandate M/376 has been given by the European Commission to the European Standardisation Organisations CEN, CENELEC and ETSI, to come up with a solution for common requirements and conformance assessment. === Copyright === Not all products are accessible for persons with disabilities. When producing audio books, or certain other accessible works, an additional copy is created, and copyright can be a problem in this situation. On the other hand, copyright is an essential part of the sustainability of a creative society. This conflict of interests must be solved somehow in order to ensure the Information Society is a Society for All. There is international and European legislation in this field. 
The objectives of the Directive on the harmonisation of certain aspects of copyright and related rights in the information society are to adapt legislation on copyright and related rights to reflect technological developments and to transpose into Community law the main international obligations arising from the two treaties on copyright and related rights adopted within the framework of the World Intellectual Property Organisation (WIPO) in December 1996. === Protection of privacy === The relationship between design and privacy is not necessarily obvious. Modern technology, which is a result of design, is able to collect significant amounts of personal information. The user has an interest in that information being correct and in it being used appropriately. The person may want to keep something confidential and have access to the information that has been collected. In other words, privacy is desired. In 1995 the European Union adopted a Directive on the processing of personal data. This directive established the basic principles for the collection, storage and use of personal data which should be respected by governments, businesses and any other organizations or individuals engaged in handling personal data. Within the context of Design for All (in ICT), privacy protection is called Privacy by Design. == Relevant guidelines and standards == In the US, Australia, Japan and in the European Union more and more legislative actions are put in place to require public bodies and companies to make sure that their products and services are accessible and usable not only by "standard" users but also by others such as elderly persons or people with an impairment. As it would be unwise to write down technical – and therefore time-bound – requirements into a law, legislative texts preferably refer to (international) standards. === Standardisation: general overview === Standardisation, i.e., in very general terms, producing a "standard" (French: norme, standard; German: Norm; Spanish: norma) is a voluntary action set up in the past, almost uniquely, by commercial partners who believe that the standardisation will permit easier exchanges of products and goods. This implied very often that the acceptance of the standards is also voluntary and triggered by expected commercial benefits. Only to a very limited extent consumer representatives did participate in standardisation. On the other hand, laws in many countries are referring more and more to the required acceptance of several standards (e.g. on safety or on ecological aspects). The net result of this need for standards is that nowadays many standardisation initiatives are stimulated (= subsidised) by public bodies or, in Europe, directly and indirectly by the European Commission. Also many guidelines have been created by stakeholder groups. === Recent developments in DfA related standardisation (formal standards) === As DfA standardisation was explicitly mentioned in the eEurope2002 and i2010 Action Plans of the European Union, several new actions were established since then. Four major recent strategies can be distinguished: the set up of coordinating working groups and organisations; the democratisation of the standardisation processes themselves; the increasing impact of non-formal standardisation bodies and; the establishment of standardisation related discussion fora open for non-specialists. 
=== DfA in ICT related standards === ETSI EG 202 116 V1.2.2 (2009-03) ETSI Guide Human Factors (HF); Guidelines for ICT products and services; "Design for All". Web Content Accessibility Guidelines 2.0 The Web Content Accessibility Guidelines (WCAG) 2.0 is a technical standard that covers a wide range of recommendations for making Web content more accessible. Following these guidelines will make content accessible to a wider range of people with disabilities, including blindness and low vision, deafness and hearing loss, learning disabilities, cognitive limitations, limited movement, speech disabilities, photosensitivity and combinations of these. Following these guidelines will also often make your Web content more usable to users in general. BS 8878:2010 Web accessibility – Code of Practice BS 8878:2010 Web accessibility – Code of Practice provides guidance on how to embed accessibility concerns into organisation's policies and digital production processes. The Standard provides non-technical website owners a better understanding of the value of inclusive design, and a framework for how to use guidelines like WCAG 2.0 to help them create products which are Designed for All. The Standard's lead-author, Jonathan Hassell, has created a summary of BS 8878 to help organisations better understand how the standard can help them. == Application domains == The application domains of Design for All in the context of ICT, practically include every field involving Information and Communication Technologies. The significance of the application domains reflects their role in establishing a coherent and socially acceptable Information Society, but also the diverse range of human activities affected. The critical application domains for Design for All, can be summarised as follows: Life-long learning Public information systems, terminals and information appliances (e.g. kiosks, smart home environments) Transaction services (e.g., banking) Electronic commerce applications and services Social services for the citizens (e.g., administration, elderly, transport, health care, awareness) Tools to allow for added-value information services (e.g., creation, storage, retrieval and exchange of user experiences, traces and views) Security The White Paper "Toward an Information Society for All: An International R&D Agenda" (1998) published by the International Scientific Forum "Towards an Information Society for All" (ISF-IS4ALL), has discussed the significance of these application domains: "Life-long learning is a critical area where emphasis should be placed, in the "knowledge" society of the future. It entails a continuous engagement in the acquisition of knowledge and skills to facilitate and sustain equitable participation in the Information Society. New technologies may play a catalytic role in providing new educational mechanisms and structures, thus allowing learning to become an inseparable part of life-long human activities in the context of knowledge-intensive learning communities, and social interaction amongst groups of people. Another important application area and a critical short-term target is the development of general purpose public information systems, terminals and information appliances, (e.g., information kiosks for access to community-wide information services). These are expected to be used in increasingly different contexts, including public places, homes, classrooms, etc., and provide the means for ubiquitous and nomadic access. 
Environmental control will also become increasingly important. Smart environments will progressively penetrate a wide range of human activities in hospitals, hotels, public administration buildings, etc. Teleoperation of such environments will also gain increasing attention to facilitate responsiveness to unforeseen events, enhanced mobility and security. Finally, a broad range of transaction services (e.g., banking, advertising, entertainment), social services for the citizens (e.g., administration, health care, education, transport), and electronic commerce applications, will become increasingly important in reshaping business and residential human activities (...) security, privacy and control are central themes in the evolution of a socially acceptable Information Society and should receive immediate attention. At the same time, they will increasingly constitute more complex targets to accomplish, as they span across different levels of the telecommunications infrastructure, from network services to application services (such as business transactions and entertainment), terminals and information appliances." == Education and training == One major lever to improve awareness and practice in Design for All is the development of education and training programs. Professionals are needed who have acquired comprehensive specialist knowledge and skills in Design for All; in addition those professionals who currently work in ICT industry need to acquire additional knowledge and skills concerning Design for All. Little evidence can be found of university degree programmes that specialize in Design for All (or Universal Design) or that explicitly includes a module about this. This lack was tackled in the project DfA@eInclusion, which devised curricula: A bachelor level introductory course which aims to enable students to have an understanding of the ethical and social issues of Design for All, and the role of Design for All as an enabler of accessibility and participation in the information society A masters level programme which aims to enable students to have the relevant knowledge, personal and professional skills & competencies to design, develop, implement, evaluate and manage a wide range of ICT systems products and services that adhere to the principles and practices of Design for All. The implementation of such programmes is already under way in a few places, for example at Oslo and Akershus University College of Applied Sciences, the Middlesex University, UK, University of Linz, Austria and the University of Trás-os-Montes e Alto Douro, Portugal. Core topics include an understanding of the principles of human rights, the development of standards, regulations and legislation, the design and development of assistive technologies as well as improved access of mainstream products and services. Web accessibility is an important component of accessing the information society and information and guidance is offered by the World Wide Web Consortium's Web Accessibility Initiative (WAI) as well as online tutorials (for example, Opera's Web Standards Curriculum). The complementary approach of training for professionals in ICT industry has also been tackled by the DfA@eInclusion project. A comprehensive curriculum for such trainings has been recommended and is currently subject to a CEN workshop negotiation. The CEN workshop "Curriculum for training professionals in Universal Design (UD-Prof)" has been implemented in May 2009. 
Following the general rules for CEN workshops, it offers all interested stakeholders an opportunity to discuss and improve this DfA curriculum for ICT professionals. == Examples of good practice == Opera (web browser) was designed with the commitment to be used by as many people as possible thus following a Design for All approach. Audiobooks are good examples for Design for All because they enable people to read a book. Virtually anyone who does not have a hearing disability can use audiobooks for leisure, learning, and information. e-Government uses information and communication (ICT) technology to provide and improve government services, transactions and interactions with citizens, businesses, and other arms of government. Elevators provide an alternative way to reach different floor levels. Modern accessible elevators use information and communication technology to adapt themselves to any user imaginable. The closing speed of the doors is adjustable so people can safely enter quickly or slowly as required. Controls of the elevator provide visual and audible feedback to the user so that people with different sensory abilities can operate the elevator without assistance. Blind people profit from tactile keys. Braille labeling is located besides the keys so that they are not accidentally pushed while reading them. The emergency intercom system operates aurally and visually. Wireless tagging (e.g. RFID), facial recognition, remote controls further enhance the capabilities of a modern elevator which can be used by almost anyone. The Inclusive Design Toolkit presents examples of how Design for All principles can be implemented. Other examples of Design for All in ICT are presented in EDeAN's Education and Training Resource. == Related networks and projects == === European Design for all eAccessibility Network === The European Design for All e-Accessibility Network – EDeAN is a network of 160 organisations in European Union member states. The goal of the network is to support all citizens' access to the Information Society. EDeAN provides: a European forum for Design for All issues, supporting EU's e-inclusion goals awareness raising in the public and private sectors online resources on Design for All The network is coordinated by the EDeAN Secretariat, which rotates annually and the corresponding National Contact Centres which are the contact points for EDeAN in each EU member state. === Design for All Europe === EIDD – Design for All Europe is a 100% self-financed European organisation that covers the entire area of theory and practice of Design for All, from the built environment and tangible products to communication, service and system design. Originally set up in 1993 as the European Institute for Design and Disability (EIDD),to enhance the quality of life through Design for All, it changed its name in 2006 to bring it into line with its core business. EIDD – Design for All Europe disseminates the application of Design for All to business and administration communities previously unaware of its benefits and currently (2009) has active member organisations in 22 European countries. The aim of EIDD is to encourage active interaction and communication between professionals interested in the theory and practice of Design for All and to build bridges between, on the one hand, these and other members of the design community and, on the other hand, all those other communities where Design for All can make a real difference to the quality of life for everyone. 
=== Examples of EU-funded research projects addressing ICT and inclusion === Design for all for e-Inclusion This is a support project to EDeAN. The project aims to develop an exemplary training course for Design for all targeted to the Industry, course structures and curricula for studying Design for All in undergraduate and postgraduate levels as well as an online knowledge base on Design for All. DIADEM: Delivering Inclusive Access for Disabled or Elderly Members of the Community The project aims to develop an adaptable web browser interface for people with reduced cognitive skills, which can be used at home and at work. I2Home: Intuitive interaction for everyone with home appliances based on industry standards The project seeks to develop a universal remote console that will allow networked access to everyday appliances in the home. SHARE-IT: Supported Human Autonomy for Recovery and Enhancement of cognitive and motor abilities using Information Technologies This project is developing scalable and adaptive 'add-ons' which will allow assistive technologies to be integrated into intelligent ICTs for the home. HaH: Hearing at Home This project is looking at the next generation of assistive devices which will help hearing-impaired people to participate fully in the Information Society. CogKnow: Helping people with mild dementia navigate their day CogKnow aims to develop and prototype a cognitive prosthetic device to help those struggling with dementia to perform their daily activities. MonAmi: Mainstreaming Ambient Intelligence The project seeks to mainstream the accessibility of consumer goods and services. The aim is to develop technology platforms that allow elderly and disabled people to continue living in their own homes and stay in their communities. USEM: User Empowerment in Standardisation The project aims to train end-users in standardisation related issues and to enable them to participate in standardisation activities in the area of ICT. VAALID: Accessibility and Usability Validation Framework for AAL Interaction Design Process The project aims at creating modeling and simulation supporting tools to optimize user interaction design and accessibility and usability validation process when developing Ambient Assisted Living solutions. PERSONA: Perceptive Spaces promoting Independent Aging The project aims at further develop Ambient Assisted Living products and services that are affordable, easy to use and commercially viable. The project develops an integrated technological platform that seamlessly links up the different products and services for social inclusion, for support in daily life activities, for early risk detection, for personal protection from health and environmental risks, for support in mobility and displacements within his neighbourhood/town, all of which make a life of freedom worth living within their families and within the society. == See also == Design for All (design philosophy) Universal Design Computer accessibility Accessibility Knowbility == References == == External links == Website of the EU-funded Project "DfA@eInclusion" Website of the European Design for All e-Accessibility Network (EDeAN) Website of EIDD – Design for All Europe European Commission – Information Society Portal, Design for All
Wikipedia/Design_for_All_(in_ICT)
Domain-driven design (DDD) is a major software design approach, focusing on modeling software to match a domain according to input from that domain's experts. DDD is against the idea of having a single unified model; instead it divides a large system into bounded contexts, each of which have their own model. Under domain-driven design, the structure and language of software code (class names, class methods, class variables) should match the business domain. For example: if software processes loan applications, it might have classes like "loan application", "customers", and methods such as "accept offer" and "withdraw". Domain-driven design is predicated on the following goals: placing the project's primary focus on the core domain and domain logic layer; basing complex designs on a model of the domain; initiating a creative collaboration between technical and domain experts to iteratively refine a conceptual model that addresses particular domain problems. Critics of domain-driven design argue that developers must typically implement a great deal of isolation and encapsulation to maintain the model as a pure and helpful construct. While domain-driven design provides benefits such as maintainability, Microsoft recommends it only for complex domains where the model provides clear benefits in formulating a common understanding of the domain. The term was coined by Eric Evans in his book of the same name published in 2003. == Overview == Domain-driven design articulates a number of high-level concepts and practices. Of primary importance is a domain of the software, the subject area to which the user applies a program. Software's developers build a domain model: a system of abstractions that describes selected aspects of a domain and can be used to solve problems related to that domain. These aspects of domain-driven design aim to foster a common language shared by domain experts, users, and developers—the ubiquitous language. The ubiquitous language is used in the domain model and for describing system requirements. Ubiquitous language is one of the pillars of DDD together with strategic design and tactical design. In domain-driven design, the domain layer is one of the common layers in an object-oriented multilayered architecture. === Kinds of models === Domain-driven design recognizes multiple kinds of models. For example, an entity is an object defined not by its attributes, but its identity. As an example, most airlines assign a unique number to seats on every flight: this is the seat's identity. In contrast, a value object is an immutable object that contains attributes but has no conceptual identity. When people exchange business cards, for instance, they only care about the information on the card (its attributes) rather than trying to distinguish between each unique card. Models can also define events (something that happened in the past). A domain event is an event that domain experts care about. Models can be bound together by a root entity to become an aggregate. Objects outside the aggregate are allowed to hold references to the root but not to any other object of the aggregate. The aggregate root checks the consistency of changes in the aggregate. Drivers do not have to individually control each wheel of a car, for instance: they simply drive the car. In this context, a car is an aggregate of several other objects (the engine, the brakes, the headlights, etc.). === Working with models === In domain-driven design, an object's creation is often separated from the object itself. 
A repository, for instance, is an object with methods for retrieving domain objects from a data store (e.g. a database). Similarly, a factory is an object with methods for directly creating domain objects. When part of a program's functionality does not conceptually belong to any object, it is typically expressed as a service. == Event types == There are different types of events in DDD, and opinions on their classification may vary. According to Yan Cui, there are two key categories of events: === Domain Events === Domain events signify important occurrences within a specific business domain. These events are restricted to a bounded context and are vital for preserving business logic. Typically, domain events have lighter payloads, containing only the necessary information for processing. This is because event listeners are generally within the same service, where their requirements are more clearly understood. === Integration Events === On the other hand, integration events serve to communicate changes across different bounded contexts. They are crucial for ensuring data consistency throughout the entire system. Integration events tend to have more complex payloads with additional attributes, as the needs of potential listeners can differ significantly. This often leads to a more thorough approach to communication, resulting in overcommunication to ensure that all relevant information is effectively shared. == Context Mapping patterns == Context Mapping identifies and defines the boundaries of different domains or subdomains within a larger system. It helps visualize how these contexts interact and relate to each other. Below are some patterns, according to Eric Evans: Partnership: "forge a partnership between the teams in charge of the two contexts. Institute a process for coordinated planning of development and joint management of integration", when "teams in two contexts will succeed or fail together" Shared Kernel: "Designate with an explicit boundary some subset of the domain model that the teams agree to share. Keep this kernel small." Customer/Supplier Development: "Establish a clear customer/supplier relationship between the two teams", when "two teams are in [a] upstream-downstream relationship" Conformist: "Eliminate the complexity of translation [...] choosing conformity enormously simplifies integration", when a custom interface for a downstream subsystem isn't likely to happen Anticorruption Layer: "create an isolating layer to provide your system with functionality of the upstream system in terms of your own domain model" Open-host Service: "a protocol that gives access to your subsystem as a set of services", in case it's necessary to integrate one subsystem with many others, making custom translations between subsystems infeasible Published Language: "a well-­‐documented shared language that can express the necessary domain information as a common medium of communication", e.g. data interchange standards in various industries Separate Ways": "a bounded context [with] no connection to the others at all, allowing developers to find simple, specialized solutions within this small scope" Big Ball of Mud: "a boundary around the entire mess" when there's no real boundaries to be found when surveying an existing system == Relationship to other ideas == Although domain-driven design is not inherently tied to object-oriented approaches, in practice, it exploits the advantages of such techniques. 
These include entities/aggregate roots as receivers of commands/method invocations, the encapsulation of state within foremost aggregate roots, and on a higher architectural level, bounded contexts. As a result, domain-driven design is often associated with Plain Old Java Objects and Plain Old CLR Objects, which are technical implementation details, specific to Java and the .NET Framework respectively. These terms reflect a growing view that domain objects should be defined purely by the business behavior of the domain, rather than by a more specific technology framework. Similarly, the naked objects pattern holds that the user interface can simply be a reflection of a good enough domain model. Requiring the user interface to be a direct reflection of the domain model will force the design of a better domain model. Domain-driven design has influenced other approaches to software development. Domain-specific modeling, for instance, is domain-driven design applied with domain-specific languages. Domain-driven design does not specifically require the use of a domain-specific language, though it could be used to help define a domain-specific language and support domain-specific multimodeling. In turn, aspect-oriented programming makes it easy to factor out technical concerns (such as security, transaction management, logging) from a domain model, letting them focus purely on the business logic. === Model-driven engineering and architecture === While domain-driven design is compatible with model-driven engineering and model-driven architecture, the intent behind the two concepts is different. Model-driven architecture is more concerned with translating a model into code for different technology platforms than defining better domain models. However, the techniques provided by model-driven engineering (to model domains, to create domain-specific languages to facilitate the communication between domain experts and developers,...) facilitate domain-driven design in practice and help practitioners get more out of their models. Thanks to model-driven engineering's model transformation and code generation techniques, the domain model can be used to generate the actual software system that will manage it. === Command Query Responsibility Segregation === Command Query Responsibility Segregation (CQRS) is an architectural pattern for separating reading data (a 'query') from writing to data (a 'command'). CQRS derives from Command and Query Separation (CQS), coined by Bertrand Meyer. Commands mutate state and are approximately equivalent to method invocation on aggregate roots or entities. Queries read state but do not mutate it. While CQRS does not require domain-driven design, it makes the distinction between commands and queries explicit with the concept of an aggregate root. The idea is that a given aggregate root has a method that corresponds to a command and a command handler invokes the method on the aggregate root. The aggregate root is responsible for performing the logic of the operation and either yielding a failure response or just mutating its own state that can be written to a data store. The command handler pulls in infrastructure concerns related to saving the aggregate root's state and creating needed contexts (e.g., transactions). === Event storming === Event storming is a collaborative, workshop-based modeling technique which can be used as a precursor in the context of Domain-Driven Design (DDD) to identify and understand domain events. 
This interactive discovery process involves stakeholders, domain experts, and developers working together to visualize the flow of domain events, their causes, and their effects, fostering a shared understanding of the domain. The technique often uses color-coded sticky notes to represent different elements, such as domain events, aggregates, and external systems, facilitating a clear and structured exploration of the domain. Event storming can aid in discovering subdomains, bounded contexts, and aggregate boundaries, which are key constructs in DDD. By focusing on 'what happens' in the domain, the technique can help uncover business processes, dependencies, and interactions, providing a foundation for implementing DDD principles and aligning system design with business goals. === Event sourcing === Event sourcing is an architectural pattern in which entities track their internal state not by means of direct serialization or object-relational mapping, but by reading and committing events to an event store. When event sourcing is combined with CQRS and domain-driven design, aggregate roots are responsible for validating and applying commands (often by having their instance methods invoked from a Command Handler), and then publishing events. This is also the foundation upon which the aggregate roots base their logic for dealing with method invocations. Hence, the input is a command and the output is one or many events which are saved to an event store, and then often published on a message broker for those interested (such as an application's view). Modeling aggregate roots to output events can isolate internal state even further than when projecting read-data from entities, as in standard n-tier data-passing architectures. One significant benefit is that axiomatic theorem provers (e.g. Microsoft Contracts and CHESS) are easier to apply, as the aggregate root comprehensively hides its internal state. Events are often persisted based on the version of the aggregate root instance, which yields a domain model that synchronizes in distributed systems through optimistic concurrency. == Mapping Bounded Contexts to Microservices == A bounded context, a fundamental concept in Domain-Driven Design (DDD), defines a specific area within which a domain model is consistent and valid, ensuring clarity and separation of concerns. In microservices architecture, a bounded context often maps to a microservice, but this relationship can vary depending on the design approach. A one-to-one relationship, where each bounded context is implemented as a single microservice, is typically ideal as it maintains clear boundaries, reduces coupling, and enables independent deployment and scaling. However, other mappings may also be appropriate: a one-to-many relationship can arise when a bounded context is divided into multiple microservices to address varying scalability or other operational needs, while a many-to-one relationship may consolidate multiple bounded contexts into a single microservice for simplicity or to minimize operational overhead. The choice of relationship should balance the principles of DDD with the system's business goals, technical constraints, and operational requirements. == Notable tools == Although domain-driven design does not depend on any particular tool or framework, notable examples include: Actifsource, a plug-in for Eclipse which enables software development combining DDD with model-driven engineering and code generation. 
Context Mapper, a Domain-specific language and tools for strategic and tactic DDD. CubicWeb, an open source semantic web framework entirely driven by a data model. High-level directives allow to refine the data model iteratively, release after release. Defining the data model is enough to get a functioning web application. Further work is required to define how the data is displayed when the default views are not sufficient. OpenMDX, an open-source, Java-based, MDA Framework supporting Java SE, Java EE, and .NET. OpenMDX differs from typical MDA frameworks in that "use models to directly drive the runtime behavior of operational systems". Restful Objects, a standard for mapping a Restful API onto a domain object model (where the domain objects may represent entities, view models, or services). Two open source frameworks (one for Java, one for .NET) can create a Restful Objects API from a domain model automatically, using reflection. == See also == Data mesh, a domain-oriented data architecture Event storming Knowledge representation Ontology (information science) Semantic analysis (knowledge representation) Semantic networks Semantics C4 model Strongly typed identifier Integrated design Systems science == References == == External links == Domain Driven Design, Definitions and Pattern Summaries (PDF), Eric Evans, 2015 DDD Crew on GitHub: Bounded Context Canvas, Aggregate Canvas, Modeling Process and more repositories An Introduction to Domain Driven Design, Methods & tools Implementing Aggregate root in C# language Context Mapper: A Modeling Framework for Strategic Domain-driven Design (Tool, Tutorials, DDD Modeling Examples) Strategic DDC Activity in Design Practice Repository (DPR) and Tactic DDC Activity in Design Practice Repository (DPR)
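The tactical patterns described above (value objects, entities, aggregate roots, domain events, and repositories) are usually expressed directly in code. The sketch below is a minimal, illustrative example only, assuming a hypothetical ordering domain; the names (Money, Order, OrderPlaced, OrderRepository) are invented for this illustration and are not taken from Evans' book or from any particular framework listed above.

```python
from dataclasses import dataclass
from typing import Optional, Protocol
from uuid import UUID, uuid4


# Value object: defined only by its attributes, immutable, no identity.
@dataclass(frozen=True)
class Money:
    amount_cents: int
    currency: str = "USD"


# Domain event: something that happened in the domain that experts care about.
@dataclass(frozen=True)
class OrderPlaced:
    order_id: UUID
    total: Money


# Entity acting as an aggregate root: identified by order_id, it guards its
# own invariants and is the only object that outside code may reference.
class Order:
    def __init__(self, order_id: Optional[UUID] = None) -> None:
        self.order_id = order_id or uuid4()
        self._lines: list[tuple[str, Money]] = []   # internal state, hidden
        self.placed = False
        self.events: list[OrderPlaced] = []          # recorded domain events

    def add_line(self, sku: str, price: Money) -> None:
        # Invariant enforced by the aggregate root, not by its callers.
        if self.placed:
            raise ValueError("cannot modify an order that has been placed")
        self._lines.append((sku, price))

    def place(self) -> None:
        # A command: it either fails or mutates state and records an event.
        if not self._lines:
            raise ValueError("cannot place an empty order")
        self.placed = True
        total = Money(sum(price.amount_cents for _, price in self._lines))
        self.events.append(OrderPlaced(self.order_id, total))


# Repository: the interface for retrieving and storing aggregates lives in
# the domain layer; concrete implementations belong to the infrastructure.
class OrderRepository(Protocol):
    def get(self, order_id: UUID) -> Order: ...
    def save(self, order: Order) -> None: ...


class InMemoryOrderRepository:
    def __init__(self) -> None:
        self._store: dict[UUID, Order] = {}

    def get(self, order_id: UUID) -> Order:
        return self._store[order_id]

    def save(self, order: Order) -> None:
        self._store[order.order_id] = order


if __name__ == "__main__":
    repo = InMemoryOrderRepository()
    order = Order()
    order.add_line("BOOK-001", Money(1999))
    order.place()
    repo.save(order)
    print(repo.get(order.order_id).events)
```

Note how the aggregate root (Order) hides its internal line items and records an OrderPlaced event when its invariant-checked place command succeeds; this command-in, event-out shape is the same flow described under CQRS and event sourcing above, here shown without any persistence or messaging machinery.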
Wikipedia/Domain-driven_design
The Design and Industries Association is a United Kingdom charity whose object is "to engage with all those who share a common interest in the contribution that design can make to the delivery of goods and services that are sustainable and enhance the quality of life for communities and the individual." == 20th century == Shortly before the Great War there was a growing awareness, among British designers, of the extent to which German industrial design had taken the ideals of the Arts and Crafts movement (which had originated with William Morris and others in Britain in the late 19th century) and had successfully moved these into the age of mass, mechanised production. The German Deutscher Werkbund organisation's Cologne exhibition, held before the outbreak of war in 1914, had been visited by many of those designers, architects, retailers and industrialists who were later to found the Design and Industries Association. In March 1915 an exhibition of German manufactures was held at Goldsmiths' Hall in London. Shortly afterwards a meeting under the chairmanship of Lord Aberconway led to the foundation of the Design and Industries Association (DIA), with the express intention of raising the standard of British industrial design, under the slogan of "Fitness for Purpose". DIA promoted its ideals through lectures, journals and exhibitions. Exhibitions included: 1920: Household Things - Whitechapel Gallery, London 1942 - 1945: Design Round The Clock - travelling 1953: Register your Choice - Charing Cross Underground Station The journals published varied through the period and included: 1932: Design In Industry 1933 - 1935: Design for Today 1936: Trends in Everyday Life In its early years there was considerable tension between the attachment of some members to the principles of the Arts and Crafts movement and the desire to promote the clearly 20th-century outlook of the Modern Movement. Having been heavily involved with the British government's Utility Scheme in the Second World War, DIA had campaigned for the greater involvement of government in the promotion of good design. Ironically, DIA itself was to be somewhat eclipsed by the foundation of the government-funded Council for Industrial Design, now the Design Council, in 1944. == DIA Today == Despite the predominance of the Design Council in the latter half of the 20th century, DIA continues its work today as an independent body, organising competitions and events and offering bursaries. In 1978 DIA, together with The Royal College of Art, The Faculty of Royal Designers for Industry and The Royal Academy of Engineering, established the Sir Misha Black Awards to recognise excellence and innovation in design education. == Membership == DIA office bearers and members have included some of the most notable 20th-century British designers and manufacturers: Lord Aberconway Wenman Joseph Bassett-Lowke Sir Misha Black Cecil Brewer Noel Carrington Serge Ivan Chermayeff Harold Curwen Nanna Ditzel Ambrose Heal Charles Holden Minnie McLeish Harry Peach Nikolaus Pevsner Frank Pick Jack Pritchard Sir (Sydney) Gordon Russell George Wilson-Crowe Sir Lawrence Weaver Hamilton T Smith [first director of Heals, designer] == How to Choose the Right Association == Community – If the individual engages in a community that is considered active, this can enhance the value of membership. Relevance – Consider the association's relevance to oneself; this ensures that membership is aligned with one's personal interest in the design industry. 
Cost vs Value – Weigh the fees against the benefits to make sure that the investment is worthwhile. Research – Researching the association's reputation can influence one's decision. == References == "Design and Industries Association." A Dictionary of Modern Design. Oxford University Press, 2004, 2005. Answers.com, 13 Oct. 2008. http://www.answers.com/topic/design-and-industries-association "Nothing Need Be Ugly": The First 70 Years of the Design & Industries Association. Plumber, Raymond. DIA, London, 1985. == External links == The Design and Industries Association Archived 2021-09-10 at the Wayback Machine
Wikipedia/Design_and_Industries_Association
Building design, also called architectural design, refers to the broadly based architectural, engineering and technical applications to the design of buildings. All building projects require the services of a building designer, typically a licensed architect. Smaller, less complicated projects often do not require a licensed professional, and the design of such projects is often undertaken by building designers, draftspersons, interior designers (for interior fit-outs or renovations), or contractors. Larger, more complex building projects require the services of many professionals trained in specialist disciplines, usually coordinated by an architect. == Occupations == === Architect === An architect is a person trained in the planning, design and supervision of the construction of buildings. Professionally, an architect's decisions affect public safety, and thus an architect must undergo specialized training consisting of advanced education and a practicum (or internship) for practical experience to earn a license to practice architecture. In most of the world's jurisdictions, the professional and commercial use of the term "architect" is legally protected. === Building engineer === Building engineering typically includes the services of electrical, mechanical and structural engineers. === Draftsperson === A draftsperson or documenter has attained a certificate or diploma in architectural drafting (or equivalent training), and provides services relating to preparing construction documents rather than building design. Some draftspersons are employed by architectural design firms and building contractors, while others are self-employed. === Building designer === In many places, building codes and legislation of professions allow persons to design single family residential buildings and, in some cases, light commercial buildings without an architectural license. As such, "Building designer" is a common designation in the United States, Canada, Australia and elsewhere for someone who offers building design services but is not a licensed architect or engineer. Anyone may use the title of "building designer" in the broadest sense. In many places, a building designer may achieve certification demonstrating a higher level of training. In the U.S., the National Council of Building Designer Certification (NCBDC), an offshoot of the American Institute of Building Design, administers a program leading to the title of Certified Professional Building Designer (CPBD). Usually, building designers are trained as architectural technologists or draftspersons; they may also be architecture school graduates that have not completed licensing requirements. Many building designers are known as "residential" or "home designers", since they focus mainly on residential design and remodeling. In the U.S. state of Nevada, "Residential Designer" is a regulated term for those who are registered as such under Nevada State Board of Architecture, Interior Design and Residential Design, and one may not legally represent oneself in a professional capacity without being currently registered. In Australia where use of the term architect and some derivatives is highly restricted but the architectural design of buildings has very few restrictions in place, the term building designer is used extensively by people or design practices who are not registered by the relevant State Board of Architects. 
In Queensland the term building design is used in legislation which licenses practitioners as part of a broader building industry licensing system. In Victoria there is a registration process for building designers and in other States there is currently no regulation of the profession. A Building Designers Association operates in each state to represent the interests of building designers. === Building surveyor === Building surveyors are technically minded general practitioners in the United Kingdom, Australia and elsewhere, trained much like architectural technologists. In the UK, the knowledge and expertise of the building surveyor is applied to various tasks in the property and construction markets, including building design for smaller residential and light commercial projects. This aspect of the practice is similar to other European occupations, most notably the geometra in Italy, but also the géomètre in France, Belgium and Switzerland. Building surveyors are also able to prepare bills of quantities for new works as well as renovation, maintenance or rehabilitation works. The profession of Building Surveyor does not exist in the US. The title Surveyor refers almost exclusively to Land surveyors. Architects, Building Designers, Residential Designers, Construction Managers, and Home Inspectors perform some or all of the work of the U.K. Building Surveyor. == See also == Architectural designer Architectural design values - intentions which influence design decisions of architects Facility management Landscape architect Urban design == References ==
Wikipedia/Architectural_design
In film and television, a production designer is the individual responsible for the overall aesthetic of the story. The production design gives the viewers a sense of the time period, the plot location, and character actions and feelings. Working directly with the director, cinematographer, and producer, production designers have a key creative role in the creation of motion pictures and television. The term production designer was coined by William Cameron Menzies while he was working on the film Gone with the Wind. Production designers are commonly confused with art directors as the roles have similar responsibilities. Production designers decide the visual concept and deal with the many and varied logistics of filmmaking, including schedules, budgets, and staffing. Art directors manage the process of making the visuals, which is done by concept artists, graphic designers, set designers, costume designers, lighting designers, etc. The production designer and the art director lead a team of individuals to assist with the visual component of the film. Depending on the size of the production the rest of the team can include runners, graphic designers, draftspeople, props makers, and set builders. Production designers create a framework for the visual aesthetic of a project and work in partnership with the Set Decorator & Set Decorating department to execute the desired look. == Process == The production designer will read the script and allocate categories based on the required visual components such as interior, exterior, location, graphic, vehicles, etc. Discussion with the director is essential at the beginning of the production design process. In this discussion, the production designer will clarify the approach and focus required for the visual design of each scene. The production designer will then move on to research, which is important in every design process. They will use a mood board, which consists of images, sketches, inspiration, color swatches, photos, textiles, and other material that helps with ideation. Learning about the time period, the place and the culture also assists with coming up with an idea. Moreover, the production designer has to plan a convincing space within a budget; it is therefore important that the space speaks about the character or enhances the flow of the story, rather than being filled with unnecessary decoration. These considerations also affect the location of filming, whether in a studio or at a specific location. The production designer will ensure that all visual components of the film are complete in all stages of the production process. == The importance of production design == Production design plays an essential role in storytelling. For instance, in the movie Titanic, when the characters Jack and Rose are in the cold water after the ship sank, we know that they are cold because of the setting: it is nighttime and there is ice on their hair. A more specific example is The Wizard of Oz, in which we know the story takes place on a farm because of the bale of hay Dorothy leans on and the animals around, as well as the typical wooden fence. In the scene in which Dorothy's dog is taken away, we know that it happens in her aunt and uncle's house, which adds more tension because her beloved friend, Toto, is not killed, lost or kidnapped on the street, but is forced to leave by an outsider, Ms. Gulch, who enters Dorothy's private and safe zone (her home).
Jane Barnwell states that the place the characters exist in gives information about them and enhances the fluency of the narrative (175). If Dorothy's home had been dirty and everyone in her house dressed untidily, the viewer might have supported the outsider instead, perhaps thinking that the outsider had, in a way, rescued the dog from an unhealthy environment. Additionally, the characters' clothing, especially Ms. Gulch's, lends credibility to the description that she "owns half the county", and also helps explain why Dorothy cannot rebel against her and keep the dog. However, this does not mean that the setting or costumes should be extremely detailed and cluttered with information. The goal is for the viewer not to consciously notice these elements; that unobtrusiveness is how production design works. Jon Boorstin states in his book Making Movies Work: Thinking Like a Filmmaker that the background, the camera motion, or even a sound effect is considered well done if the viewer does not notice it. == Societies and trade organizations == In the United States and British Columbia, production designers are represented by several local unions of the International Alliance of Theatrical Stage Employees (IATSE). Local 800, the Art Directors Guild, represents production designers in the U.S., with the exception of New York City and its vicinity. Those members are represented by Local 829, the United Scenic Artists. In the rest of Canada, production designers are represented by the Directors Guild of Canada. In the United Kingdom, members of the art department are represented by the non-union British Film Designers Guild. The production design credit must be requested by a film's producer, prior to completion of photography, and submitted to the Art Directors Guild Board of Directors for credit approval. == See also == Academy Award for Best Production Design Art Directors Guild Hall of Fame List of British production designers Category:Production designers Category:Women production designers Scenic design == References == === Footnotes === === Bibliography === == Further reading == == External links == Art Directors Guild, IATSE Local 800 Union local representing Art Directors and Production Designers ADG Art Direction Wiki Online community and knowledge base relating to film design British Film Designers Guild The Austrian Filmdesigners Association (VÖF - Verband Österreichischer FilmausstatterInnen). Production Design Training Toi Whakaari: NZ Drama School Production Design training in Auckland NZ at Unitec Performing and Screen Arts
Wikipedia/Production_designer
The argument from poor design, also known as the dysteleological argument, is an argument against the assumption of the existence of a creator God, based on the reasoning that any omnipotent and omnibenevolent deity or deities would not create organisms with the perceived suboptimal designs that occur in nature. The argument is structured as a basic modus ponens: if "creation" contains many defects, then design appears an implausible theory for the origin of earthly existence. Proponents most commonly use the argument in a weaker way, however: not with the aim of disproving the existence of God, but rather as a reductio ad absurdum of the well-known argument from design (which suggests that living things appear too well-designed to have originated by chance, and so an intelligent God or gods must have deliberately created them). Although the phrase "argument from poor design" has seen little use, this type of argument has been advanced many times using words and phrases such as "poor design", "suboptimal design", "unintelligent design" or "dysteleology/dysteleological". The nineteenth-century biologist Ernst Haeckel applied the term "dysteleology" to the implications of organs so rudimentary as to be useless to the life of an organism. In his 1868 book Natürliche Schöpfungsgeschichte (The History of Creation), Haeckel devoted most of a chapter to the argument, ending with the proposition (perhaps with tongue slightly in cheek) of "a theory of the unsuitability of parts in organisms, as a counter-hypothesis to the old popular doctrine of the suitability of parts". In 2005, Donald Wise of the University of Massachusetts Amherst popularised the term "incompetent design" (a play on "intelligent design"), to describe aspects of nature seen as flawed in design. Traditional Christian theological responses generally posit that God constructed a perfect universe but that humanity's misuse of its free will to rebel against God has resulted in the corruption of divine good design. == Overview == The argument runs that: An omnipotent, omniscient, omnibenevolent creator God would create organisms that have optimal design. Organisms have features that are suboptimal. Therefore, God either did not create these organisms or is not omnipotent, omniscient and omnibenevolent. It is sometimes used as a reductio ad absurdum of the well-known argument from design, which runs as follows: Living things are too well-designed to have originated by chance. Therefore, life must have been created by an intelligent creator. This creator is God. "Poor design" is consistent with the predictions of the scientific theory of evolution by means of natural selection. This predicts that features that were evolved for certain uses are then reused or co-opted for different uses, or abandoned altogether; and that suboptimal state is due to the inability of the hereditary mechanism to eliminate the particular vestiges of the evolutionary process. In fitness landscape terms, natural selection will always push "up the hill", but a species cannot normally get from a lower peak to a higher peak without first going through a valley. The argument from poor design is one of the arguments that was used by Charles Darwin; modern proponents have included Stephen Jay Gould, Richard Dawkins, and Nathan H. Lents. They argue that such features can be explained as a consequence of the gradual, cumulative nature of the evolutionary process. 
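The fitness-landscape point above can be illustrated with a toy simulation. The short Python sketch below uses an invented one-dimensional "fitness" function with two peaks and shows a purely uphill walk stalling on the lower peak because it cannot cross the intervening valley; the function and step rule are illustrative assumptions, not a model taken from the authors cited here.

# Toy fitness landscape with a local (lower) peak and a global (higher) peak.
def fitness(x):
    # Invented double-peaked landscape: a low peak near x=2, a high peak near x=8.
    return max(0.0, 3 - (x - 2) ** 2) + max(0.0, 6 - 0.5 * (x - 8) ** 2)

def hill_climb(x, step=0.1, iterations=200):
    # Selection-like rule: accept a move only if fitness strictly increases.
    for _ in range(iterations):
        best = max([x - step, x, x + step], key=fitness)
        if fitness(best) <= fitness(x):
            break
        x = best
    return x

start = 1.0                                      # population begins near the lower peak
peak = hill_climb(start)
print(round(peak, 1), round(fitness(peak), 2))   # stalls near x=2.0, never reaching the higher peak at x=8

Because each move must go "up the hill", the walk never crosses the flat valley between the peaks, which is the sense in which suboptimal features can persist under natural selection.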
Theistic evolutionists generally reject the argument from design, but do still maintain belief in the existence of God. == Examples == === In humans === ==== Fatal flaws ==== American scientist Nathan H. Lents published his book on poor design in the human body and genome in 2018 titled Human Errors. The book ignited a firestorm of criticism from the creationist community but was well received by the scientific community and received unanimously favorable reviews in the dozens of non-creationist media outlets that covered it. Several defects in human anatomy can result in death, especially without modern medical care: In the human female, a fertilized egg can implant into the fallopian tube, cervix or ovary rather than the uterus causing an ectopic pregnancy. The existence of a cavity between the ovary and the fallopian tube could indicate a flawed design in the female reproductive system. Prior to modern surgery, ectopic pregnancy invariably caused the deaths of both mother and baby. Even in modern times, in almost all cases the pregnancy must be aborted to save the life of the mother. In the human female, the birth canal passes through the pelvis. The prenatal skull will deform to a surprising extent. However, if the baby's head is significantly larger than the pelvic opening, the baby cannot be born naturally. Prior to the development of modern surgery (caesarean section), such a complication would lead to the death of the mother, the baby, or both. Other birthing complications such as breech birth are worsened by this position of the birth canal. In the human male, testes develop initially within the abdomen. Later during gestation, they migrate through the abdominal wall into the scrotum. This causes two weak points in the abdominal wall where hernias can later form. Prior to modern surgical techniques, complications from hernias, such as intestinal blockage and gangrene, usually resulted in death. The existence of the pharynx, a passage used for both ingestion and respiration, with the consequent drastic increase in the risk of choking. The breathing reflex is stimulated not directly by the absence of oxygen but indirectly by the presence of carbon dioxide. This means that high concentrations of inert gases, such as nitrogen and helium, can cause suffocation without any biological warning. Furthermore, at high altitudes, oxygen deprivation can occur in unadapted individuals who do not consciously increase their breathing rate. The human appendix is a vestigial organ thought to serve no purpose. Appendicitis, an infection of this organ, is a certain death without medical intervention. "During the past few years, however, several studies have suggested its immunological importance for the development and preservation of the intestinal immune system." Tinnitus, a phantom auditory sensation, is a maladaptation resulting from hearing loss most often caused by exposure to loud noise. Tinnitus serves no practical purpose, reduces quality of life, may cause depression, and when severe can lead to suicide. ==== Other flaws ==== Barely used nerves and muscles, such as the plantaris muscle of the foot, that are missing in part of the human population and are routinely harvested as spare parts if needed during operations. Another example is the muscles that move the ears, which some people can learn to control to a degree, but serve no purpose in any case. The common malformation of the human spinal column, leading to scoliosis, sciatica and congenital misalignment of the vertebrae. 
The spinal cord cannot ever properly heal if it is damaged, because neurons have become so specialized that they are no longer able to regrow once they reach their mature state. The spinal cord, if broken, will never repair itself and will result in permanent paralysis. The route of the recurrent laryngeal nerve is such that it travels from the brain to the larynx by looping around the aortic arch. This same configuration holds true for many animals; in the case of the giraffe, this results in about twenty feet of extra nerve. Almost all animals and plants synthesize their own vitamin C, but humans cannot because the gene for this enzyme is defective (Pseudogene ΨGULO). Lack of vitamin C results in scurvy and eventually death. The gene is also non-functional in other primates and in guinea pigs, but is functional in most other animals. The prevalence of congenital diseases and genetic disorders such as Huntington's disease. The male urethra passes directly through the prostate, which can produce urinary difficulties if the prostate becomes swollen. Crowded teeth and poor sinus drainage, as human faces are significantly flatter than those of other primates although humans share the same tooth set. This results in a number of problems, most notably with wisdom teeth, which can damage neighboring teeth or cause serious infections of the mouth. The structure of human eyes (as well as those of all vertebrates). The retina is 'inside out'. The nerves and blood vessels lie on the surface of the retina instead of behind it as is the case in many invertebrate species. This arrangement forces a number of complex adaptations and gives mammals a blind spot. Having the optic nerve connected to the side of the retina that does not receive the light, as is the case in cephalopods, would avoid these problems. Lents and colleagues have proposed that the tapetum lucidum, the reflective surface behind vertebrate retinas, has evolved to overcome the limitations of the inverted retina, as cephalopods have never evolved this structure. However, an 'inverted' retina actually improves image quality through müller cells by reducing distortion. The effects of the blind spots resulting from the inverted retina are cancelled by binocular vision, as the blind spots in both eyes are oppositely angled. Additionally, as cephalopod eyes lack cone cells and might be able to judge color by bringing specific wavelengths to a focus on the retina, an inverted retina might interfere with this mechanism. Humans are attracted to junk food's non-nutritious ingredients, and even wholly non-nutritious psychoactive drugs, and can experience physiological adaptations to prefer them to nutrients. === Other life === In the African locust, nerve cells start in the abdomen but connect to the wing. This leads to unnecessary use of materials. Intricate reproductive devices in orchids, apparently constructed from components commonly having different functions in other flowers. The use by pandas of their enlarged radial sesamoid bones in a manner similar to how other creatures use thumbs. The existence of unnecessary wings in flightless birds, e.g. ostriches. The enzyme RuBisCO has been described as a "notoriously inefficient" enzyme, as it is inhibited by oxygen, has a very slow turnover and is not saturated at current levels of carbon dioxide in the atmosphere. The enzyme is inhibited as it is unable to distinguish between carbon dioxide and molecular oxygen, with oxygen acting as a competitive enzyme inhibitor. 
However, RuBisCO remains the key enzyme in carbon fixation, and plants overcome its poor activity by having massive amounts of it inside their cells, making it the most abundant protein on Earth. Sturdy but heavy bones, suited for non-flight, occurring in animals like bats. Or, on the converse: unstable, light, hollow bones, suited for flight, occurring in birds like penguins and ostriches, which cannot fly. Various vestigial body parts, like the femur and pelvis in whales (evolution indicates the ancestors of whales lived on land). Turritopsis dohrnii and species of the genus Hydra have biological immortality, but most animals do not. Many species have strong instincts to behave in response to a certain stimulus. Natural selection can leave animals behaving in detrimental ways when they encounter a supernormal stimulus - like a moth flying into a flame. Plants are green and not black, as chlorophyll absorbs green light poorly, even though black plants would absorb more light energy. Whales and dolphins breathe air, but live in the water, meaning they must swim to the surface frequently to breathe. Albatrosses cannot take off or land properly. == Counterarguments == === Specific examples === Intelligent design proponent William Dembski questions the first premise of the argument, claiming that "intelligent design" does not need to be optimal. While the appendix has been previously credited with very little function, research has shown that it serves an important role in the fetus and young adults. Endocrine cells appear in the appendix of the human fetus at around the 11th week of development, which produce various biogenic amines and peptide hormones, compounds that assist with various biological control (homeostatic) mechanisms. In young adults, the appendix has some immune functions. === Responses to counterarguments === In response to the claim that uses have been found for "junk" DNA, proponents note that the fact that some non-coding DNA has a purpose does not establish that all non-coding DNA has a purpose, and that the human genome does include pseudogenes that are nonfunctional "junk", with others noting that some sections of DNA can be randomized, cut, or added to with no apparent effect on the organism in question. The original study that suggested that the Makorin1-p1 served some purpose has been disputed. However, the original study is still frequently cited in newer studies and articles on pseudogenes previously thought to be nonfunctional. == As an argument regarding God == The argument from poor design is sometimes interpreted, by the argumenter or the listener, as an argument against the existence of God, or against characteristics commonly attributed to a creator deity, such as omnipotence, omniscience, or personality. In a weaker form, it is used as an argument for the incompetence of God. The existence of "poor design" (as well as the perceived prodigious "wastefulness" of the evolutionary process) would seem to imply a "poor" designer, or a "blind" designer, or no designer at all. In Gould's words, "If God had designed a beautiful machine to reflect his wisdom and power, surely he would not have used a collection of parts generally fashioned for other purposes. Orchids are not made by an ideal engineer; they are jury-rigged...." The apparently suboptimal design of organisms has also been used by proponents of theistic evolution to argue in favour of a creator deity who uses natural selection as a mechanism of his creation. 
Proponents of the argument from poor design regard such counter-arguments as resting on a false dilemma: either a creator deity designed life on earth well, or flaws in design indicate that life is not designed at all. This allows proponents of intelligent design to cherry pick which aspects of life constitute design, leading to the unfalsifiability of the theory. Christian proponents of both intelligent design and creationism may claim that good design indicates the creative intelligence of their God, while poor design indicates corruption of the world as a result of free will that caused the fall of man (for example, in Genesis 3:16 Yahweh says to Eve "I will increase your trouble in pregnancy"). == See also == Atavism Vestigiality Maladaptation Human vestigiality == References == == Further reading == Avise, John C. (2010), Inside the Human Genome: A Case for Non-Intelligent Design, Oxford University Press. ISBN 0-19-539343-0. Dawkins, Richard (1986). The Blind Watchmaker. ISBN 0-393-30448-5 Gould, Stephen Jay (1980). The Panda's Thumb: More Reflections in Natural History. ISBN 0-393-30023-4 Gurney, Peter W.G. (1999). "Is our 'inverted' retina really 'bad design'?". Creation Ex Nihilo Technical Journal/TJ. 13 (1): 37–44. Leonard, P. (1993). "Too much light," New Scientist, 139. Martin, B.; Martin, F. (2003). "Neither intelligent nor designed". Skeptical Inquirer. 27: 6. Perakh, Mark Unintelligent Design (ISBN 1-59102-084-0 – December 2003) Williams, Robyn (1 February 2007). Unintelligent Design: Why God Isn't as Smart as She Thinks She Is. Allen & Unwin. ISBN 978-1-74114-923-4. Witt, Jonathan. "The Gods Must Be Tidy!", Touchstone, July/August 2004. Woodmorappe, J. (1999). "Why Weren't Plants Created 100% Efficient at Photosynthesis? (OR: Why Aren't Plants Black?)" Woodmorappe, J. (2003). "Pseudogene function: more evidence" Creation Ex Nihilo Technical Journal/TJ 17(2):15–18. == External links == A short interview with prof. Don Wise at Really Magazine (2006) Unintelligent Design Network satirical site
Wikipedia/Argument_from_poor_design
Configuration design is a kind of design where a fixed set of predefined components that can be interfaced (connected) in predefined ways is given, and an assembly (i.e. designed artifact) of components selected from this fixed set is sought that satisfies a set of requirements and obeys a set of constraints. The associated design configuration problem consists of the following three constituent tasks: Selection of components, Allocation of components, and Interfacing of components (design of ways the components interface/connect with each other). Types of knowledge involved in configuration design include: Problem-specific knowledge: Input knowledge: Requirements Constraints Technology Case knowledge Persistent knowledge (knowledge that remains valid over multiple problem solving sessions): Case knowledge Domain-specific, method-independent knowledge Method-specific domain knowledge Search-control knowledge == See also == Systems design Modular design Morphological analysis (problem-solving) Constraint satisfaction problem == References == Mittal, S. and Frayman, F. (1989), Towards a generic model of configuration tasks, Proceedings of the 11th IJCAI, San Mateo, CA, USA, Morgan Kaufmann, pages 1395-1401. Levin, Mark Sh. (2015) Modular systems design and evaluation. Springer. B. Wielinga and G. Schreiber (1997), Configuration Design Problem Solving, IEEE Intelligent Systems, Vol. 12, pages 49–56.
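As a concrete illustration of the three constituent tasks listed above, the sketch below frames a toy configuration problem in Python: it enumerates selections from a small, hypothetical component catalogue and keeps only assemblies that satisfy simple requirement and interfacing checks. All component names, attributes, and limits are invented for illustration and are not drawn from the references above.

# Toy configuration design sketch (hypothetical components and constraints).
# Task: select a power supply and a controller, allocate one of each to an assembly,
# and check the interface (connector) compatibility and a power constraint.
from itertools import product

POWER_SUPPLIES = [                      # hypothetical catalogue
    {"name": "PSU-A", "watts": 50, "connector": "X"},
    {"name": "PSU-B", "watts": 90, "connector": "Y"},
]
CONTROLLERS = [
    {"name": "CTRL-1", "draw": 40, "connector": "X"},
    {"name": "CTRL-2", "draw": 70, "connector": "Y"},
    {"name": "CTRL-3", "draw": 95, "connector": "Y"},
]

def feasible(psu, ctrl):
    # Interfacing: connectors must match; constraint: the supply must cover the draw.
    return psu["connector"] == ctrl["connector"] and psu["watts"] >= ctrl["draw"]

# Selection plus allocation: enumerate all candidate assemblies, keep the feasible ones.
assemblies = [(p["name"], c["name"])
              for p, c in product(POWER_SUPPLIES, CONTROLLERS)
              if feasible(p, c)]
print(assemblies)   # [('PSU-A', 'CTRL-1'), ('PSU-B', 'CTRL-2')]

Real configuration systems keep the persistent domain knowledge (the catalogue and compatibility rules) separate from the problem-specific requirements, mirroring the knowledge taxonomy listed above.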
Wikipedia/Configuration_design
Nuclear weapon designs are physical, chemical, and engineering arrangements that cause the physics package of a nuclear weapon to detonate. There are three existing basic design types: Pure fission weapons are the simplest, least technically demanding, were the first nuclear weapons built, and so far the only type ever used in warfare, by the United States on Japan in World War II. Boosted fission weapons are fission weapons that also generate a limited amount of nuclear fusion reactions. These serve primarily as a supplemental neutron source, enhancing the fission chain reaction and increasing its efficiency. Boosting can more than double the weapon's fission energy yield. Staged thermonuclear weapons are arrangements of two or more "stages", most usually two, where the weapon derives a significant fraction of its energy from nuclear fusion (as well as, usually, nuclear fission). The first stage is typically a boosted fission weapon (except for the earliest thermonuclear weapons, which used a pure fission weapon). Its detonation causes it to shine intensely with X-rays, which illuminate and implode the second stage filled with fusion fuel. This initiates a sequence of events which results in a thermonuclear, or fusion, burn. This process affords potential yields hundreds or thousands of times greater than those of fission weapons. Pure fission weapons have been the first type to be built by new nuclear powers. Large industrial states with well-developed nuclear arsenals have two-stage thermonuclear weapons, which are the most compact, scalable, and cost effective option, once the necessary technical base and industrial infrastructure are built. Most known innovations in nuclear weapon design originated in the United States, though some were later developed independently by other states. In early news accounts, pure fission weapons were called atomic bombs or A-bombs and weapons involving fusion were called hydrogen bombs or H-bombs. Practitioners of nuclear policy, however, favor the terms nuclear and thermonuclear, respectively. == Nuclear reactions == Nuclear fission separates or splits heavier atoms to form lighter atoms. Nuclear fusion combines lighter atoms to form heavier atoms. Both reactions generate roughly a million times more energy than comparable chemical reactions, making nuclear bombs a million times more powerful than non-nuclear bombs, which a French patent claimed in May 1939. In some ways, fission and fusion are opposite and complementary reactions, but the particulars are unique for each. To understand how nuclear weapons are designed, it is useful to know the important similarities and differences between fission and fusion. The following explanation uses rounded numbers and approximations. === Fission === When a free neutron hits the nucleus of a fissile atom like uranium-235 (235U), the uranium nucleus splits into two smaller nuclei called fission fragments, plus more neutrons (for 235U three about as often as two; an average of just under 2.5 per fission). The fission chain reaction in a supercritical mass of fuel can be self-sustaining because it produces enough surplus neutrons to offset losses of neutrons escaping the supercritical assembly. Most of these have the speed (kinetic energy) required to cause new fissions in neighboring uranium nuclei. The uranium-235 nucleus can split in many ways, provided the atomic numbers add up to 92 and the mass numbers add up to 236 (uranium-235 plus the neutron that caused the split).
The following equation shows one possible split, namely into strontium-95 (95Sr), xenon-139 (139Xe), and two neutrons (n), plus energy: 235U + n → 236U* → 95Sr + 139Xe + 2 n + 180 MeV. The immediate energy release per atom is about 180 million electron volts (MeV); i.e., 74 TJ/kg. Only 7% of this is gamma radiation and kinetic energy of fission neutrons. The remaining 93% is kinetic energy (or energy of motion) of the charged fission fragments, flying away from each other mutually repelled by the positive charge of their protons (38 for strontium, 54 for xenon). This initial kinetic energy is 67 TJ/kg, imparting an initial speed of about 12,000 kilometers per second (i.e. 1.2 cm per nanosecond). The charged fragments' high electric charge causes many inelastic coulomb collisions with nearby nuclei, and these fragments remain trapped inside the bomb's fissile pit and tamper until their kinetic energy is converted into heat. Given the speed of the fragments and the mean free path between nuclei in the compressed fuel assembly (for the implosion design), this takes about a millionth of a second (a microsecond), by which time the core and tamper of the bomb have expanded to a ball of plasma several meters in diameter with a temperature of tens of millions of degrees Celsius. This is hot enough to emit black-body radiation in the X-ray spectrum. These X-rays are absorbed by the surrounding air, producing the fireball and blast of a nuclear explosion. Most fission products have too many neutrons to be stable so they are radioactive by beta decay, converting neutrons into protons by throwing off beta particles (electrons), neutrinos and gamma rays. Their half-lives range from milliseconds to about 200,000 years. Many decay into isotopes that are themselves radioactive, so from 1 to 6 (average 3) decays may be required to reach stability. In reactors, the radioactive products are the nuclear waste in spent fuel. In bombs, they become radioactive fallout, both local and global. Meanwhile, inside the exploding bomb, the free neutrons released by fission carry away about 3% of the initial fission energy. Neutron kinetic energy adds to the blast energy of a bomb, but not as effectively as the energy from charged fragments, since neutrons do not give up their kinetic energy as quickly in collisions with charged nuclei or electrons. The dominant contribution of fission neutrons to the bomb's power is the initiation of subsequent fissions. Over half of the neutrons escape the bomb core, but the rest strike 235U nuclei causing them to fission in an exponentially growing chain reaction (1, 2, 4, 8, 16, etc.). Starting from one atom, the number of fissions can theoretically double a hundred times in a microsecond, which could consume all uranium or plutonium up to hundreds of tons by the hundredth link in the chain. Typically in a modern weapon, the weapon's pit contains 3.5 to 4.5 kilograms (7.7 to 9.9 lb) of plutonium and at detonation produces approximately 5 to 10 kilotonnes of TNT (21 to 42 TJ) yield, representing the fissioning of approximately 0.5 kilograms (1.1 lb) of plutonium. Materials which can sustain a chain reaction are called fissile.
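As a rough numerical check of the figures above (180 MeV per fission, roughly 74 TJ/kg, and doubling "a hundred times"), the following Python sketch converts the per-atom energy to energy per kilogram and estimates the mass implied by 2^100 fissions. The physical constants are standard values and the arithmetic is illustrative only.

# Back-of-the-envelope check of the fission figures quoted above.
MEV_TO_J = 1.602e-13        # joules per MeV
AVOGADRO = 6.022e23         # atoms per mole
U235_MOLAR_MASS = 235.0     # grams per mole

energy_per_fission_J = 180 * MEV_TO_J                  # ~2.9e-11 J per fission
atoms_per_kg = AVOGADRO / U235_MOLAR_MASS * 1000.0     # ~2.6e24 atoms per kg
energy_per_kg_TJ = energy_per_fission_J * atoms_per_kg / 1e12
print(round(energy_per_kg_TJ))          # ~74 TJ/kg, matching the text

# Doubling a hundred times from a single fission gives about 2**100 fissions.
fissions = 2 ** 100                                    # ~1.3e30 fissions
mass_tonnes = fissions * U235_MOLAR_MASS / AVOGADRO / 1e6
print(round(mass_tonnes))               # a few hundred tonnes, as stated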
The two fissile materials used in nuclear weapons are: 235U, also known as highly enriched uranium (HEU), "oralloy" meaning "Oak Ridge alloy", or "25" (a combination of the last digit of the atomic number of uranium-235, which is 92, and the last digit of its mass number, which is 235); and 239Pu, also known as plutonium-239, or "49" (from "94" and "239"). Uranium's most common isotope, 238U, is fissionable but not fissile, meaning that it cannot sustain a chain reaction because its daughter fission neutrons are not (on average) energetic enough to cause follow-on 238U fissions. However, the neutrons released by fusion of the heavy hydrogen isotopes deuterium and tritium will fission 238U. This 238U fission reaction in the outer jacket of the secondary assembly of a two-stage thermonuclear bomb produces by far the greatest fraction of the bomb's energy yield, as well as most of its radioactive debris. For national powers engaged in a nuclear arms race, this fact of 238U's ability to fast-fission from thermonuclear neutron bombardment is of central importance. The plenitude and cheapness of both bulk dry fusion fuel (lithium deuteride) and 238U (a byproduct of uranium enrichment) permit the economical production of very large nuclear arsenals, in comparison to pure fission weapons requiring the expensive 235U or 239Pu fuels. === Fusion === Fusion produces neutrons which dissipate energy from the reaction. In weapons, the most important fusion reaction is called the D-T reaction. Using the heat and pressure of fission, hydrogen-2, or deuterium (2D), fuses with hydrogen-3, or tritium (3T), to form helium-4 (4He) plus one neutron (n) and energy: 2D + 3T → 5He* → 4He + n + 17.6 MeV. The total energy output, 17.6 MeV, is one tenth of that with fission, but the ingredients are only one-fiftieth as massive, so the energy output per unit mass is approximately five times as great. In this fusion reaction, 14 of the 17.6 MeV (80% of the energy released in the reaction) shows up as the kinetic energy of the neutron, which, having no electric charge and being almost as massive as the hydrogen nuclei that created it, can escape the scene without leaving its energy behind to help sustain the reaction – or to generate x-rays for blast and fire. The only practical way to capture most of the fusion energy is to trap the neutrons inside a massive bottle of heavy material such as lead, uranium, or plutonium. If the 14 MeV neutron is captured by uranium (of either isotope; 14 MeV is high enough to fission both 235U and 238U) or plutonium, the result is fission and the release of 180 MeV of fission energy, multiplying the energy output tenfold. For weapon use, fission is necessary to start fusion, helps to sustain fusion, and captures and multiplies the energy carried by the fusion neutrons. In the case of a neutron bomb (see below), the last-mentioned factor does not apply, since the objective is to facilitate the escape of neutrons, rather than to use them to increase the weapon's raw power. === Tritium production === An essential nuclear reaction is the one that creates tritium, or hydrogen-3. Tritium is employed in two ways. First, pure tritium gas is produced for placement inside the cores of boosted fission devices in order to increase their energy yields. This is especially so for the fission primaries of thermonuclear weapons.
The second way is indirect, and takes advantage of the fact that the neutrons emitted by a supercritical fission "spark plug" in the secondary assembly of a two-stage thermonuclear bomb will produce tritium in situ when these neutrons collide with the lithium nuclei in the bomb's lithium deuteride fuel supply. Elemental gaseous tritium for fission primaries is also made by bombarding lithium-6 (6Li) with neutrons (n), only in a nuclear reactor. This neutron bombardment will cause the lithium-6 nucleus to split, producing an alpha particle, or helium-4 (4He), plus a triton (3T) and energy: 6Li + n → 4He + 3T + 5 MeV. But as was discovered in the first test of this type of device, Castle Bravo, when lithium-7 is present, one also has some amounts of the following two net reactions: 7Li + 1n → 3T + 4He + 1n, and 7Li + 2H → 2 4He + 1n + 15.123 MeV. Most lithium is 7Li, and this gave Castle Bravo a yield 2.5 times larger than expected. The neutrons are supplied by the nuclear reactor in a way similar to production of plutonium 239Pu from 238U feedstock: target rods of the 6Li feedstock are arranged around a uranium-fueled core, and are removed for processing once it has been calculated that most of the lithium nuclei have been transmuted to tritium. Of the four basic types of nuclear weapon, the first, pure fission, uses the first of the three nuclear reactions above. The second, fusion-boosted fission, uses the first two. The third, two-stage thermonuclear, uses all three. == Pure fission weapons == The first task of a nuclear weapon design is to rapidly assemble a supercritical mass of fissile (weapon grade) uranium or plutonium. A supercritical mass is one in which the percentage of fission-produced neutrons captured by other neighboring fissile nuclei is large enough that each fission event, on average, causes more than one follow-on fission event. Neutrons released by the first fission events induce subsequent fission events at an exponentially accelerating rate. Each follow-on fissioning continues a sequence of these reactions that works its way throughout the supercritical mass of fuel nuclei. This process is conceived and described colloquially as the nuclear chain reaction. To start the chain reaction in a supercritical assembly, at least one free neutron must be injected and collide with a fissile fuel nucleus. The neutron joins with the nucleus (technically a fusion event) and destabilizes the nucleus, which explodes into two middleweight nuclear fragments (from the severing of the strong nuclear force holding the mutually-repulsive protons together), plus two or three free neutrons. These race away and collide with neighboring fuel nuclei. This process repeats over and over until the fuel assembly goes sub-critical (from thermal expansion), after which the chain reaction shuts down because the daughter neutrons can no longer find new fuel nuclei to hit before escaping the less-dense fuel mass. Each following fission event in the chain approximately doubles the neutron population (net, after losses due to some neutrons escaping the fuel mass, and others that collide with any non-fuel impurity nuclei present). For the gun assembly method (see below) of supercritical mass formation, the fuel itself can be relied upon to initiate the chain reaction. This is because even the best weapon-grade uranium contains a significant number of 238U nuclei.
These are susceptible to spontaneous fission events, which occur randomly (it is a quantum mechanical phenomenon). Because the fissile material in a gun-assembled critical mass is not compressed, the design need only ensure the two sub-critical masses remain close enough to each other long enough that a 238U spontaneous fission will occur while the weapon is in the vicinity of the target. This is not difficult to arrange as it takes but a second or two in a typical-size fuel mass for this to occur. (Still, many such bombs meant for delivery by air (gravity bomb, artillery shell or rocket) use injected neutrons to gain finer control over the exact detonation altitude, important for the destructive effectiveness of airbursts.) This condition of spontaneous fission highlights the necessity to assemble the supercritical mass of fuel very rapidly. The time required to accomplish this is called the weapon's critical insertion time. If spontaneous fission were to occur when the supercritical mass was only partially assembled, the chain reaction would begin prematurely. Neutron losses through the void between the two subcritical masses (gun assembly) or the voids between not-fully-compressed fuel nuclei (implosion assembly) would sap the bomb of the number of fission events needed to attain the full design yield. Additionally, heat resulting from the fissions that do occur would work against the continued assembly of the supercritical mass, from thermal expansion of the fuel. This failure is called predetonation. The resulting explosion would be called a "fizzle" by bomb engineers and weapon users. Plutonium's high rate of spontaneous fission makes uranium fuel a necessity for gun-assembled bombs, with their much greater insertion time and much greater mass of fuel required (because of the lack of fuel compression). There is another source of free neutrons that can spoil a fission explosion. All uranium and plutonium nuclei have a decay mode that results in energetic alpha particles. If the fuel mass contains impurity elements of low atomic number (Z), these charged alphas can penetrate the coulomb barrier of these impurity nuclei and undergo a reaction that yields a free neutron. The rate of alpha emission of fissile nuclei is one to two million times that of spontaneous fission, so weapon engineers are careful to use fuel of high purity. Fission weapons used in the vicinity of other nuclear explosions must be protected from the intrusion of free neutrons from outside. Such shielding material will almost always be penetrated, however, if the outside neutron flux is intense enough. When a weapon misfires or fizzles because of the effects of other nuclear detonations, it is called nuclear fratricide. For the implosion-assembled design, once the critical mass is assembled to maximum density, a burst of neutrons must be supplied to start the chain reaction. Early weapons used a modulated neutron generator code named "Urchin" inside the pit containing polonium-210 and beryllium separated by a thin barrier. Implosion of the pit crushes the neutron generator, mixing the two metals, thereby allowing alpha particles from the polonium to interact with beryllium to produce free neutrons. In modern weapons, the neutron generator is a high-voltage vacuum tube containing a particle accelerator which bombards a deuterium/tritium-metal hydride target with deuterium and tritium ions. 
The resulting small-scale fusion produces neutrons at a protected location outside the physics package, from which they penetrate the pit. This method allows better timing of the first fission events in the chain reaction, which optimally should occur at the point of maximum compression/supercriticality. Timing of the neutron injection is a more important parameter than the number of neutrons injected: the first generations of the chain reaction are vastly more effective due to the exponential function by which neutron multiplication evolves. The critical mass of an uncompressed sphere of bare metal is 50 kg (110 lb) for uranium-235 and 16 kg (35 lb) for delta-phase plutonium-239. In practical applications, the amount of material required for criticality is modified by shape, purity, density, and the proximity to neutron-reflecting material, all of which affect the escape or capture of neutrons. To avoid a premature chain reaction during handling, the fissile material in the weapon must be kept subcritical. It may consist of one or more components containing less than one uncompressed critical mass each. A thin hollow shell can have more than the bare-sphere critical mass, as can a cylinder, which can be arbitrarily long without ever reaching criticality. Another method of reducing criticality risk is to incorporate material with a large cross-section for neutron capture, such as boron (specifically 10B comprising 20% of natural boron). Naturally this neutron absorber must be removed before the weapon is detonated. This is easy for a gun-assembled bomb: the projectile mass simply shoves the absorber out of the void between the two subcritical masses by the force of its motion. The use of plutonium affects weapon design due to its high rate of alpha emission. This results in Pu metal spontaneously producing significant heat; a 5 kilogram mass produces 9.68 watts of thermal power. Such a piece would feel warm to the touch, which is no problem if that heat is dissipated promptly and not allowed to build up the temperature. But this is a problem inside a nuclear bomb. For this reason bombs using Pu fuel use aluminum parts to wick away the excess heat, and this complicates bomb design because Al plays no active role in the explosion processes. A tamper is an optional layer of dense material surrounding the fissile material. Due to its inertia it delays the thermal expansion of the fissioning fuel mass, keeping it supercritical for longer. Often the same layer serves both as tamper and as neutron reflector. === Gun-type assembly === Little Boy, the Hiroshima bomb, used 64 kg (141 lb) of uranium with an average enrichment of around 80%, or 51 kg (112 lb) of uranium-235, just about the bare-metal critical mass (see Little Boy article for a detailed drawing). When assembled inside its tamper/reflector of tungsten carbide, the 64 kg (141 lb) was more than twice critical mass. Before the detonation, the uranium-235 was formed into two sub-critical pieces, one of which was later fired down a gun barrel to join the other, starting the nuclear explosion. Analysis shows that less than 2% of the uranium mass underwent fission; the remainder, representing most of the entire wartime output of the giant Y-12 factories at Oak Ridge, scattered uselessly. The inefficiency was caused by the speed with which the uncompressed fissioning uranium expanded and became sub-critical by virtue of decreased density. 
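The "less than 2%" efficiency figure can be sanity-checked against the energy density quoted earlier (about 74 TJ per kilogram of fissioned material). The Python sketch below assumes a commonly cited yield of roughly 15 kilotons for Little Boy, a figure not stated in this article, so the result is an estimate only.

# Rough efficiency estimate for the gun-type device (illustrative assumptions).
TJ_PER_KT = 4.184                 # terajoules per kiloton of TNT
FISSION_TJ_PER_KG = 74.0          # immediate energy release per kg fissioned, from the text

assumed_yield_kt = 15.0           # commonly cited Little Boy yield (assumption, not from this article)
uranium_load_kg = 64.0            # uranium load quoted in the text

fissioned_kg = assumed_yield_kt * TJ_PER_KT / FISSION_TJ_PER_KG
efficiency = fissioned_kg / uranium_load_kg
print(round(fissioned_kg, 2), f"{efficiency:.1%}")   # ~0.85 kg fissioned, ~1.3% of the load

Under these assumptions, the estimate is consistent with the "less than 2%" figure quoted above.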
Despite its inefficiency, this design, because of its shape, was adapted for use in small-diameter, cylindrical artillery shells (a gun-type warhead fired from the barrel of a much larger gun). Such warheads were deployed by the United States until 1992, accounting for a significant fraction of the 235U in the arsenal, and were some of the first weapons dismantled to comply with treaties limiting warhead numbers. The rationale for this decision was undoubtedly a combination of the lower yield and grave safety issues associated with the gun-type design. === Implosion-type === For both the Trinity device and the Fat Man (Nagasaki) bomb, nearly identical plutonium fission through implosion designs were used. The Fat Man device specifically used 6.2 kg (14 lb), about 350 ml or 12 US fl oz in volume, of Pu-239, which is only 41% of bare-sphere critical mass (see Fat Man article for a detailed drawing). Surrounded by a U-238 reflector/tamper, the Fat Man's pit was brought close to critical mass by the neutron-reflecting properties of the U-238. During detonation, criticality was achieved by implosion. The plutonium pit was squeezed to increase its density by simultaneous detonation, as with the "Trinity" test detonation three weeks earlier, of the conventional explosives placed uniformly around the pit. The explosives were detonated by multiple exploding-bridgewire detonators. It is estimated that only about 20% of the plutonium underwent fission; the rest, about 5 kg (11 lb), was scattered. An implosion shock wave might be of such short duration that only part of the pit is compressed at any instant as the wave passes through it. To prevent this, a pusher shell may be needed. The pusher is located between the explosive lens and the tamper. It works by reflecting some of the shock wave backward, thereby having the effect of lengthening its duration. It is made out of a low density metal – such as aluminium, beryllium, or an alloy of the two metals (aluminium is easier and safer to shape, and is two orders of magnitude cheaper; beryllium has high neutron-reflective capability). Fat Man used an aluminium pusher. The series of RaLa Experiment tests of implosion-type fission weapon design concepts, carried out from July 1944 through February 1945 at the Los Alamos Laboratory and a remote site 14.3 km (8.9 mi) east of it in Bayo Canyon, proved the practicality of the implosion design for a fission device, with the February 1945 tests positively determining its usability for the final Trinity/Fat Man plutonium implosion design. The key to Fat Man's greater efficiency was the inward momentum of the massive U-238 tamper. (The natural uranium tamper did not undergo fission from thermal neutrons, but did contribute perhaps 20% of the total yield from fission by fast neutrons). After the chain reaction started in the plutonium, it continued until the explosion reversed the momentum of the implosion and expanded enough to stop the chain reaction. By holding everything together for a few hundred nanoseconds more, the tamper increased the efficiency. ==== Plutonium pit ==== The core of an implosion weapon – the fissile material and any reflector or tamper bonded to it – is known as the pit. Some weapons tested during the 1950s used pits made with U-235 alone, or in composite with plutonium, but all-plutonium pits are the smallest in diameter and have been the standard since the early 1960s. 
Casting and then machining plutonium is difficult not only because of its toxicity, but also because plutonium has many different metallic phases. As plutonium cools, changes in phase result in distortion and cracking. This distortion is normally overcome by alloying it with 30–35 mMol (0.9–1.0% by weight) gallium, forming a plutonium-gallium alloy, which causes it to take up its delta phase over a wide temperature range. When cooling from molten it then has only a single phase change, from epsilon to delta, instead of the four changes it would otherwise pass through. Other trivalent metals would also work, but gallium has a small neutron absorption cross section and helps protect the plutonium against corrosion. A drawback is that gallium compounds are corrosive and so if the plutonium is recovered from dismantled weapons for conversion to plutonium dioxide for power reactors, there is the difficulty of removing the gallium. Because plutonium is chemically reactive it is common to plate the completed pit with a thin layer of inert metal, which also reduces the toxic hazard. The gadget used galvanic silver plating; afterward, nickel deposited from nickel tetracarbonyl vapors was used, but thereafter and since, gold became the preferred material. Recent designs improve safety by plating pits with vanadium to make the pits more fire-resistant. === Levitated-pit implosion === The first improvement on the Fat Man design was to put an air space between the tamper and the pit to create a hammer-on-nail impact. The pit, supported on a hollow cone inside the tamper cavity, was said to be "levitated". The three tests of Operation Sandstone, in 1948, used Fat Man designs with levitated pits. The largest yield was 49 kilotons, more than twice the yield of the unlevitated Fat Man. It was immediately clear that implosion was the best design for a fission weapon. Its only drawback seemed to be its diameter. Fat Man was 1.5 metres (5 ft) wide vs 61 centimetres (2 ft) for Little Boy. The Pu-239 pit of Fat Man was only 9.1 centimetres (3.6 in) in diameter, the size of a softball. The bulk of Fat Man's girth was the implosion mechanism, namely concentric layers of U-238, aluminium, and high explosives. The key to reducing that girth was the two-point implosion design. === Two-point linear implosion === In the two-point linear implosion, the nuclear fuel is cast into a solid shape and placed within the center of a cylinder of high explosive. Detonators are placed at either end of the explosive cylinder, and a plate-like insert, or shaper, is placed in the explosive just inside the detonators. When the detonators are fired, the initial detonation is trapped between the shaper and the end of the cylinder, causing it to travel out to the edges of the shaper where it is diffracted around the edges into the main mass of explosive. This causes the detonation to form into a ring that proceeds inward from the shaper. Due to the lack of a tamper or lenses to shape the progression, the detonation does not reach the pit in a spherical shape. To produce the desired spherical implosion, the fissile material itself is shaped to produce the same effect. Due to the physics of the shock wave propagation within the explosive mass, this requires the pit to be a prolate spheroid, that is, roughly egg shaped. The shock wave first reaches the pit at its tips, driving them inward and causing the mass to become spherical. 
The shock may also change plutonium from delta to alpha phase, increasing its density by 23%, but without the inward momentum of a true implosion. The lack of compression makes such designs inefficient, but the simplicity and small diameter make it suitable for use in artillery shells and atomic demolition munitions – ADMs – also known as backpack or suitcase nukes; an example is the W48 artillery shell, the smallest nuclear weapon ever built or deployed. All such low-yield battlefield weapons, whether gun-type U-235 designs or linear implosion Pu-239 designs, pay a high price in fissile material in order to achieve diameters between six and ten inches (15 and 25 cm). === Hollow-pit implosion === A more efficient implosion system uses a hollow pit. A hollow plutonium pit was the original plan for the 1945 Fat Man bomb, but there was not enough time to develop and test the implosion system for it. A simpler solid-pit design was considered more reliable, given the time constraints, but it required a heavy U-238 tamper, a thick aluminium pusher, and three tons of high explosives. After the war, interest in the hollow pit design was revived. Its obvious advantage is that a hollow shell of plutonium, shock-deformed and driven inward toward its empty center, would carry momentum into its violent assembly as a solid sphere. It would be self-tamping, requiring a smaller U-238 tamper, no aluminium pusher, and less high explosive. == Fusion-boosted fission == The next step in miniaturization was to speed up the fissioning of the pit to reduce the minimum inertial confinement time. This would allow the efficient fission of the fuel with less mass in the form of tamper or the fuel itself. The key to achieving faster fission would be to introduce more neutrons, and among the many ways to do this, adding a fusion reaction was relatively easy in the case of a hollow pit. The easiest fusion reaction to achieve is found in a 50–50 mixture of tritium and deuterium. For fusion power experiments this mixture must be held at high temperatures for relatively lengthy times in order to have an efficient reaction. For explosive use, however, the goal is not to produce efficient fusion, but simply provide extra neutrons early in the process. Since a nuclear explosion is supercritical, any extra neutrons will be multiplied by the chain reaction, so even tiny quantities introduced early can have a large effect on the outcome. For this reason, even the relatively low compression pressures and times (in fusion terms) found in the center of a hollow pit warhead are enough to create the desired effect. In the boosted design, the fusion fuel in gas form is pumped into the pit during arming. This will fuse into helium and release free neutrons soon after fission begins. The neutrons will start a large number of new chain reactions while the pit is still critical or nearly critical. Once the hollow pit is perfected, there is little reason not to boost; deuterium and tritium are easily produced in the small quantities needed, and the technical aspects are trivial. The concept of fusion-boosted fission was first tested on May 25, 1951, in the Item shot of Operation Greenhouse, Eniwetok, yield 45.5 kilotons. Boosting reduces diameter in three ways, all the result of faster fission: Since the compressed pit does not need to be held together as long, the massive U-238 tamper can be replaced by a light-weight beryllium shell (to reflect escaping neutrons back into the pit). The diameter is reduced. 
The mass of the pit can be reduced by half, without reducing yield. Diameter is reduced again. Since the mass of the metal being imploded (tamper plus pit) is reduced, a smaller charge of high explosive is needed, reducing diameter even further. The first device whose dimensions suggest employment of all these features (two-point, hollow-pit, fusion-boosted implosion) was the Swan device. It had a cylindrical shape with a diameter of 29 cm (11.6 in) and a length of 58 cm (22.8 in). It was first tested standalone and then as the primary of a two-stage thermonuclear device during Operation Redwing. It was weaponized as the Robin primary and became the first off-the-shelf, multi-use primary, and the prototype for all that followed. After the success of Swan, 28 or 30 centimetres (11 or 12 in) seemed to become the standard diameter of boosted single-stage devices tested during the 1950s. Length was usually twice the diameter, but one such device, which became the W54 warhead, was closer to a sphere, only 38 centimetres (15 in) long. One of the applications of the W54 was the Davy Crockett XM-388 recoilless rifle projectile. It had a dimension of just 28 centimetres (11 in), and is shown here in comparison to its Fat Man predecessor (150 centimetres or 60 inches). Another benefit of boosting, in addition to making weapons smaller, lighter, and with less fissile material for a given yield, is that it renders weapons immune to predetonation. It was discovered in the mid-1950s that plutonium pits would be particularly susceptible to partial predetonation if exposed to the intense radiation of a nearby nuclear explosion (electronics might also be damaged, but this was a separate problem). RI was a particular problem before effective early warning radar systems because a first strike attack might make retaliatory weapons useless. Boosting reduces the amount of plutonium needed in a weapon to below the quantity which would be vulnerable to this effect. == Two-stage thermonuclear == Pure fission or fusion-boosted fission weapons can be made to yield hundreds of kilotons, at great expense in fissile material and tritium, but by far the most efficient way to increase nuclear weapon yield beyond ten or so kilotons is to add a second independent stage, called a secondary. In the 1940s, bomb designers at Los Alamos thought the secondary would be a canister of deuterium in liquefied or hydride form. The fusion reaction would be D-D, harder to achieve than D-T, but more affordable. A fission bomb at one end would shock-compress and heat the near end, and fusion would propagate through the canister to the far end. Mathematical simulations showed it would not work, even with large amounts of expensive tritium added. The entire fusion fuel canister would need to be enveloped by fission energy, to both compress and heat it, as with the booster charge in a boosted primary. The design breakthrough came in January 1951, when Edward Teller and Stanislaw Ulam invented radiation implosion – for nearly three decades known publicly only as the Teller-Ulam H-bomb secret. The concept of radiation implosion was first tested on May 9, 1951, in the George shot of Operation Greenhouse, Eniwetok, yield 225 kilotons. The first full test was on November 1, 1952, the Mike shot of Operation Ivy, Eniwetok, yield 10.4 megatons. 
In radiation implosion, the burst of X-ray energy coming from an exploding primary is captured and contained within an opaque-walled radiation channel which surrounds the nuclear energy components of the secondary. The radiation quickly turns the plastic foam that had been filling the channel into a plasma which is mostly transparent to X-rays, and the radiation is absorbed in the outermost layers of the pusher/tamper surrounding the secondary, which ablates and applies a massive force (much like an inside out rocket engine) causing the fusion fuel capsule to implode much like the pit of the primary. As the secondary implodes a fissile "spark plug" at its center ignites and provides neutrons and heat which enable the lithium deuteride fusion fuel to produce tritium and ignite as well. The fission and fusion chain reactions exchange neutrons with each other and boost the efficiency of both reactions. The greater implosive force, enhanced efficiency of the fissile "spark plug" due to boosting via fusion neutrons, and the fusion explosion itself provide significantly greater explosive yield from the secondary despite often not being much larger than the primary. For example, for the Redwing Mohawk test on July 3, 1956, a secondary called the Flute was attached to the Swan primary. The Flute was 38 centimetres (15 in) in diameter and 59 centimetres (23.4 in) long, about the size of the Swan. But it weighed ten times as much and yielded 24 times as much energy (355 kilotons vs 15 kilotons). Equally important, the active ingredients in the Flute probably cost no more than those in the Swan. Most of the fission came from cheap U-238, and the tritium was manufactured in place during the explosion. Only the spark plug at the axis of the secondary needed to be fissile. A spherical secondary can achieve higher implosion densities than a cylindrical secondary, because spherical implosion pushes in from all directions toward the same spot. However, in warheads yielding more than one megaton, the diameter of a spherical secondary would be too large for most applications. A cylindrical secondary is necessary in such cases. The small, cone-shaped re-entry vehicles in multiple-warhead ballistic missiles after 1970 tended to have warheads with spherical secondaries, and yields of a few hundred kilotons. In engineering terms, radiation implosion allows for the exploitation of several known features of nuclear bomb materials which heretofore had eluded practical application. For example: The optimal way to store deuterium in a reasonably dense state is to chemically bond it with lithium, as lithium deuteride. But the lithium-6 isotope is also the raw material for tritium production, and an exploding bomb is a nuclear reactor. Radiation implosion will hold everything together long enough to permit the complete conversion of lithium-6 into tritium, while the bomb explodes. So the bonding agent for deuterium permits use of the D-T fusion reaction without any pre-manufactured tritium being stored in the secondary. The tritium production constraint disappears. For the secondary to be imploded by the hot, radiation-induced plasma surrounding it, it must remain cool for the first microsecond, i.e., it must be encased in a massive radiation (heat) shield. The shield's massiveness allows it to double as a tamper, adding momentum and duration to the implosion. 
No material is better suited for both of these jobs than ordinary, cheap uranium-238, which also happens to undergo fission when struck by the neutrons produced by D-T fusion. This casing, called the pusher, thus has three jobs: to keep the secondary cool; to hold it, inertially, in a highly compressed state; and, finally, to serve as the chief energy source for the entire bomb. The consumable pusher makes the bomb more a uranium fission bomb than a hydrogen fusion bomb. Insiders never used the term "hydrogen bomb". Finally, the heat for fusion ignition comes not from the primary but from a second fission bomb called the spark plug, embedded in the heart of the secondary. The implosion of the secondary implodes this spark plug, detonating it and igniting fusion in the material around it, but the spark plug then continues to fission in the neutron-rich environment until it is fully consumed, adding significantly to the yield. In the ensuing fifty years, no one has come up with a more efficient way to build a thermonuclear bomb. It is the design of choice for the United States, Russia, the United Kingdom, China, and France, the five thermonuclear powers. On 3 September 2017 North Korea carried out what it reported as its first "two-stage thermo-nuclear weapon" test. According to Dr. Theodore Taylor, after reviewing leaked photographs of disassembled weapons components taken before 1986, Israel possessed boosted weapons and would require supercomputers of that era to advance further toward full two-stage weapons in the megaton range without nuclear test detonations. The other nuclear-armed nations, India and Pakistan, probably have single-stage weapons, possibly boosted. === Interstage === In a two-stage thermonuclear weapon the energy from the primary impacts the secondary. An essential energy transfer modulator called the interstage, between the primary and the secondary, protects the secondary's fusion fuel from heating too quickly, which could cause it to explode in a conventional (and small) heat explosion before the fusion and fission reactions get a chance to start. There is very little information in the open literature about the mechanism of the interstage. Its first mention in a U.S. government document formally released to the public appears to be a caption in a graphic promoting the Reliable Replacement Warhead Program in 2007. If built, this new design would replace "toxic, brittle material" and "expensive 'special' material" in the interstage. This statement suggests the interstage may contain beryllium to moderate the flux of neutrons from the primary, and perhaps something to absorb and re-radiate the x-rays in a particular manner. There is also some speculation that this interstage material, which may be code-named Fogbank, might be an aerogel, possibly doped with beryllium and/or other substances. The interstage and the secondary are encased together inside a stainless steel membrane to form the canned subassembly (CSA), an arrangement which has never been depicted in any open-source drawing. The most detailed illustration of an interstage shows a British thermonuclear weapon with a cluster of items between its primary and a cylindrical secondary. They are labeled "end-cap and neutron focus lens", "reflector/neutron gun carriage", and "reflector wrap". The origin of the drawing, posted on the internet by Greenpeace, is uncertain, and there is no accompanying explanation. 
== Specific designs == While every nuclear weapon design falls into one of these categories, specific designs have occasionally become the subject of news accounts and public discussion, often with incorrect descriptions about how they work and what they do. Examples: === Alarm Clock/Sloika === One of the early efforts to use fusion reactions in a weapon, pursued in both the United States and the Soviet Union, involved an implosion bomb design that contained alternating thin layers of fission and fusion fuel. As a single-stage device, it was only capable of generating a limited amount of fusion reactions and could not be scaled up indefinitely, and it was very expensive in terms of both fissile material and tritium. The U.S. name for the design, "Alarm Clock," came from Teller: he called it that because it might "wake up the world" to the possibility of the Super. The Russian name for the same design was more descriptive: Sloika (Russian: Слойка), a layered pastry cake. The United States never developed or tested the design in this form: although it would have been a fairly straightforward development, its inherent limitations made it unappealing compared with the "Classical Super" design. In the Soviet Union, however, the Sloika was tested as RDS-6s on August 12, 1953, with a yield of 400 kilotons of TNT, of which 15–20% was from thermonuclear fusion reactions. Because the Soviet Sloika test used dry lithium-6 deuteride eight months before the first U.S. test to use it (Castle Bravo, March 1954), it was sometimes claimed that the USSR won the race to make the first deliverable hydrogen bomb, as the first U.S. thermonuclear test (Ivy Mike, 1952) was of an undeliverable "device." Those who dispute this claim distinguish between "true," staged thermonuclear weapons and "boosted" weapons, and class the Sloika with the latter. The first Soviet test of a staged thermonuclear weapon design was not until 1955. It has been argued that the Sloika, rather than a dead end, was integral to the Soviet development of staged thermonuclear weapons, as efforts to better implode Sloika-style designs were the Soviet path towards radiation implosion. === Clean bombs === ==== Designs with lead tampers ==== On March 1, 1954, the largest-ever U.S. nuclear test explosion, the 15-megaton Castle Bravo shot of Operation Castle at Bikini Atoll, delivered a promptly lethal dose of fission-product fallout to more than 6,000 square miles (16,000 km2) of Pacific Ocean surface. Radiation injuries to Marshall Islanders and Japanese fishermen made that fact public and revealed the role of fission in hydrogen bombs. In response to the public alarm over fallout, an effort was made to design a clean multi-megaton weapon, relying almost entirely on fusion. The energy produced by the fissioning of unenriched natural uranium, when used as the tamper material in the secondary and subsequent stages in the Teller-Ulam design, can far exceed the energy released by fusion, as was the case in the Castle Bravo test. Replacing the fissionable material in the tamper with another high-Z material (such as lead) is essential to producing a "clean" bomb. In such a device, the tamper no longer contributes energy, so for any given weight, a clean bomb will have less yield. This was called the "materials substitution method". 
The earliest known instance of a three-stage device being tested, with the third stage, called the tertiary, being ignited by the secondary, was May 27, 1956, in the Bassoon device. This device was tested in the Zuni shot of Operation Redwing. This shot used non-fissionable tampers; an inert substitute material such as tungsten or lead was used. Its yield was 3.5 megatons, 85% fusion and only 15% fission. On July 19, 1956, AEC Chairman Lewis Strauss said that the Redwing Zuni shot clean bomb test "produced much of importance ... from a humanitarian aspect." However, less than two days after this announcement, the dirty version of Bassoon, called Bassoon Prime, with a uranium-238 tamper in place, was tested on a barge off the coast of Bikini Atoll as the Redwing Tewa shot. The Bassoon Prime produced a 5-megaton yield, of which 87% came from fission. Data obtained from this test, and others, culminated in the eventual deployment of the highest-yielding US nuclear weapon known, and the highest yield-to-weight weapon ever mass produced, a three-stage thermonuclear weapon with a maximum "dirty" yield of 25 megatons, designated as the B41 nuclear bomb, which was to be carried by U.S. Air Force bombers until it was decommissioned; this weapon was never fully tested. In the Soviet peaceful nuclear explosion program "Nuclear Explosions for the National Economy", "clean" bombs were used for a 1971 triple salvo test related to the Pechora–Kama Canal project. It was reported that about 250 nuclear devices might be needed to complete the project; the Taiga test was intended to demonstrate its feasibility. Three of these devices of 15 kiloton yield each were placed in separate boreholes and simultaneously detonated, catapulting a radioactive plume into the air that was carried eastward by wind. The resulting trench was around 700 metres (2,300 ft) long and 340 metres (1,120 ft) wide, with an unimpressive depth of just 10 to 15 metres (30 to 50 ft). Despite the devices' "clean" nature, the area still exhibits a noticeably higher (albeit mostly harmless) concentration of fission products; in addition, the intense neutron bombardment of the soil, the devices themselves and the support structures activated stable elements, creating a significant amount of man-made radionuclides such as 60Co. A larger-scale project of the kind envisioned, however, would have had significant consequences both from the fallout of the radioactive plumes and from the radioactive elements created by the neutron bombardment. Other high fusion yield fraction tests include the 50-megaton Tsar Bomba at 97% fusion, the 9.3-megaton Hardtack Poplar test at 95%, and the 4.5-megaton Redwing Navajo test at 95% fusion. ==== Designs with no tampers ==== The Ripple concept, which used ablation to achieve fusion using very little fission, was and still is by far the cleanest design. Unlike previous clean bombs, which were clean simply by replacing the uranium-238 tamper with lead, Ripple was inherently clean. The fission sparkplug was replaced by a large deuterium-tritium gas core, surrounded by a tamper-like lithium deuteride shell. It is assumed that thin concentric shells of a high-Z material such as lead, driven by the small Kinglet primary, propagated sustained shock waves into the core, maintaining the thermonuclear burn and giving the device its name. The design was influenced by the nascent field of inertial confinement fusion. Ripple was also extremely efficient; plans for a device with a yield-to-weight ratio of about 15 kt/kg were made during Operation Dominic. 
Shot Androscoggin featured a proof-of-concept Ripple design, resulting in a 63-kiloton fizzle (significantly lower than the predicted 15 megatons). It was repeated in shot Housatonic, which featured a 9.96-megaton explosion that was reportedly >99.9% fusion. === Third generation === First and second generation nuclear weapons release energy as omnidirectional blasts. Third generation nuclear weapons are experimental special-effect warheads and devices that can release energy in a directed manner, some of which were tested during the Cold War but were never deployed. These include: Project Prometheus, also known as "Nuclear Shotgun", which would have used a nuclear explosion to accelerate kinetic penetrators against ICBMs. Project Excalibur, a nuclear-pumped X-ray laser to destroy ballistic missiles. Nuclear shaped charges that focus their energy in particular directions. Project Orion explored the use of nuclear explosives for rocket propulsion. === Fourth generation === The idea of "4th-generation" nuclear weapons has been proposed as a possible successor to the examples of weapons designs listed above. These methods tend to revolve around using non-nuclear primaries to set off further fission or fusion reactions. For example, if antimatter were usable and controllable in macroscopic quantities, a reaction between a small amount of antimatter and an equivalent amount of matter could release energy comparable to a small fission weapon, and could in turn be used as the first stage of a very compact thermonuclear weapon. Extremely powerful lasers could also potentially be used this way, if they could be made powerful and compact enough to be viable as a weapon. Most of these ideas are versions of pure fusion weapons, and share the common property that they involve hitherto unrealized technologies as their "primary" stages. While many nations have invested significantly in inertial confinement fusion research programs, since the 1970s it has not been considered promising for direct weapons use, but rather as a tool for weapons- and energy-related research that can be used in the absence of full-scale nuclear testing. Whether any nations are aggressively pursuing "4th-generation" weapons is not clear. In many cases (as with antimatter) the underlying technology is presently thought to be very far from viable; and even if it were viable, it would be a powerful weapon in and of itself, outside a nuclear weapons context, without providing any significant advantage over existing nuclear weapon designs. === Pure fusion weapons === Beginning in the 1950s, the United States and the Soviet Union investigated the possibility of releasing significant amounts of nuclear fusion energy without the use of a fission primary. Such "pure fusion weapons" were primarily imagined as low-yield, tactical nuclear weapons whose advantage would be their ability to be used without producing fallout on the scale of weapons that release fission products. In 1998, the United States Department of Energy declassified the following: (1) the fact that the DOE made a substantial investment in the past to develop a pure fusion weapon; (2) that the U.S. does not have and is not developing a pure fusion weapon; and (3) that no credible design for a pure fusion weapon resulted from the DOE investment. Red mercury, a likely hoax substance, has been hyped as a catalyst for a pure fusion weapon. 
=== Cobalt bombs === A doomsday bomb, made popular by Nevil Shute's 1957 novel, and subsequent 1959 movie, On the Beach, the cobalt bomb is a hydrogen bomb with a jacket of cobalt. The neutron-activated cobalt would have maximized the environmental damage from radioactive fallout. These bombs were popularized in the 1964 film Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb; the material added to the bombs is referred to in the film as 'cobalt-thorium G'. Such "salted" weapons were investigated by the U.S. Department of Defense. Fission products are as deadly as neutron-activated cobalt. Initially, gamma radiation from the fission products of an equivalent-size fission-fusion-fission bomb is much more intense than that from cobalt-60 (60Co): 15,000 times more intense at 1 hour; 35 times more intense at 1 week; 5 times more intense at 1 month; and about equal at 6 months. Thereafter fission drops off rapidly, so that 60Co fallout is 8 times more intense than fission at 1 year and 150 times more intense at 5 years. The very long-lived isotopes produced by fission would overtake the 60Co again after about 75 years. The triple "Taiga" nuclear salvo test, part of the preliminary March 1971 Pechora–Kama Canal project, produced a small amount of fission products, and therefore the comparatively large amount of activation products from the case material is responsible for most of the residual activity at the site today, namely 60Co. As of 2011, fusion-generated neutron activation was responsible for about half of the gamma dose at the test site. That dose is too small to cause deleterious effects, and normal green vegetation exists all around the lake that was formed. === Arbitrarily large multi-staged devices === The idea of a device which has an arbitrarily large number of Teller-Ulam stages, with each driving a larger radiation-driven implosion than the preceding stage, is frequently suggested, but technically disputed. There are "well-known sketches and some reasonable-looking calculations in the open literature about two-stage weapons, but no similarly accurate descriptions of true three stage concepts." During the mid-1950s through early 1960s, scientists working in the weapons laboratories of the United States investigated weapons concepts as large as 1,000 megatons, and Edward Teller announced the design of a 10,000-megaton weapon code-named SUNDIAL at a meeting of the General Advisory Committee of the Atomic Energy Commission. Much of the information about these efforts remains classified, but such "gigaton" range weapons do not appear to have made it beyond theoretical investigations. While both the US and Soviet Union investigated (and in the case of the Soviets, tested) "very high yield" (e.g. 50 to 100-megaton) weapons designs in the 1950s and early 1960s, these appear to represent the upper limit of Cold War weapon yields pursued seriously, and were so physically heavy and massive that they could not be carried entirely within the bomb bays of the largest bombers. Cold War warhead development trends from the mid-1960s onward, and especially after the Limited Test Ban Treaty, instead resulted in highly compact warheads with yields in the range from hundreds of kilotons to the low megatons that gave greater options for deliverability. Following the concern caused by the estimated gigaton scale of the 1994 Comet Shoemaker-Levy 9 impacts on the planet Jupiter, in a 1995 meeting at Lawrence Livermore National Laboratory (LLNL), Edward Teller proposed to a collective of U.S. 
and Russian ex-Cold War weapons designers that they collaborate on designing a 1,000-megaton nuclear explosive device for diverting extinction-class asteroids (10+ km in diameter), which would be employed in the event that one of these asteroids were on an impact trajectory with Earth. === Neutron bombs === A neutron bomb, technically referred to as an enhanced radiation weapon (ERW), is a type of tactical nuclear weapon designed specifically to release a large portion of its energy as energetic neutron radiation. This contrasts with standard thermonuclear weapons, which are designed to capture this intense neutron radiation to increase their overall explosive yield. In terms of yield, ERWs typically produce about one-tenth that of a fission-type atomic weapon. Even with their significantly lower explosive power, ERWs are still capable of much greater destruction than any conventional bomb. Meanwhile, relative to other nuclear weapons, damage is more focused on biological material than on material infrastructure (though extreme blast and heat effects are not eliminated). ERWs are more accurately described as suppressed yield weapons. When the yield of a nuclear weapon is less than one kiloton, its lethal radius from blast, 700 m (2,300 ft), is less than that from its neutron radiation. However, the blast is more than potent enough to destroy most structures, which are less resistant to blast effects than even unprotected human beings. Blast pressures of upwards of 20 psi (140 kPa) are survivable, whereas most buildings will collapse with a pressure of only 5 psi (30 kPa). Commonly misconceived as a weapon designed to kill populations and leave infrastructure intact, these bombs (as mentioned above) are still very capable of leveling buildings over a large radius. The intent of their design was to kill tank crews – tanks give excellent protection against blast and heat, allowing crews to survive relatively close to a detonation. Given the Soviets' vast tank forces during the Cold War, this was the perfect weapon to counter them. The neutron radiation could instantly incapacitate a tank crew out to roughly the same distance that the heat and blast would incapacitate an unprotected human (depending on design). The tank chassis would also be rendered highly radioactive, temporarily preventing its re-use by a fresh crew. Neutron weapons were also intended for use in other applications, however. For example, they are effective in anti-nuclear defenses – the neutron flux being capable of neutralising an incoming warhead at a greater range than heat or blast. Nuclear warheads are very resistant to physical damage, but are very difficult to harden against extreme neutron flux. ERWs were two-stage thermonuclears with all non-essential uranium removed to minimize fission yield. Fusion provided the neutrons. Developed in the 1950s, they were first deployed in the 1970s, by U.S. forces in Europe. The last ones were retired in the 1990s. A neutron bomb is only feasible if the yield is sufficiently high that efficient fusion stage ignition is possible, and if the yield is low enough that the case thickness will not absorb too many neutrons. This means that neutron bombs have a yield range of 1–10 kilotons, with fission proportion varying from 50% at 1 kiloton to 25% at 10 kilotons (all of which comes from the primary stage). The neutron output per kiloton is then 10 to 15 times greater than for a pure fission implosion weapon or for a strategic warhead like a W87 or W88. 
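The observation that neutron effects dominate only at low yields can be illustrated with the standard cube-root (Hopkinson) scaling of blast effects, under which the distance at which a given overpressure occurs grows with the cube root of the yield. The short Python sketch below is illustrative only and not from the source: it takes the roughly 700 m lethal-blast figure quoted above as a 1-kiloton reference point, and it does not attempt to model the neutron lethal range, which grows far more slowly with yield because of atmospheric attenuation.

```python
def blast_radius_m(yield_kt, ref_radius_m=700.0, ref_yield_kt=1.0):
    # Cube-root (Hopkinson) scaling: the radius at which a given
    # overpressure occurs scales as yield**(1/3).
    # ref_radius_m is the ~700 m figure quoted in the text for a roughly
    # 1 kt burst; treat all outputs as rough illustrations, not data.
    return ref_radius_m * (yield_kt / ref_yield_kt) ** (1.0 / 3.0)

for w in (0.1, 1.0, 10.0, 100.0):
    print(f"{w:6.1f} kt -> lethal blast radius ~ {blast_radius_m(w):5.0f} m")
```

By this rough scaling a tenfold increase in yield only slightly more than doubles the blast radius, but the comparatively fixed neutron lethal range is quickly overtaken by blast as yield rises, which is why enhanced-radiation designs are confined to the low-kiloton range.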
=== Minor actinide fission weapons === Some isotopes of protactinium, neptunium, americium, curium, californium, berkelium, and einsteinium have calculated critical mass values ranging from kilograms to tens of kilograms. Few possess an adequate combination of high fission cross section (for detonation), low spontaneous fission rate (to limit predetonation), and low alpha or gamma decay rate (to allow handling). All suffer from a far higher cost of production compared to standard fissile material. This is due both to the production of the quantities required, often in high-flux reactors, and to the complex chemical separation procedures involved. For some of these elements, such as curium, total global production has never exceeded a single critical mass of separated material. Neptunium-237 is considered the most immediately concerning minor actinide isotope for weaponization. It comprises ~0.05% of spent nuclear fuel, and ~5 tons are produced annually worldwide. The International Atomic Energy Agency has established monitoring for facilities capable of separating the isotope, but has yet to classify it as a "special fissionable material", alongside plutonium-239 and high enrichments of uranium-235 and uranium-233. In September 2002, researchers at the Los Alamos National Laboratory briefly produced the first known nuclear critical mass involving a significant quantity of neptunium, in combination with shells of enriched uranium (uranium-235), discovering that the critical mass of a bare sphere of neptunium-237 "ranges from kilogram weights in the high fifties to low sixties," showing that it "is about as good a bomb material as [uranium-235]." The United States federal government made plans in March 2004 to move America's supply of separated neptunium to a nuclear-waste disposal site in Nevada. Certain isotopes of americium are also considered weaponizable, despite considerable challenges, based on the testimony of nuclear weapons physicists. == Weapon design laboratories == All the nuclear weapon design innovations discussed in this article originated from the following three labs in the manner described. Other nuclear weapon design labs in other countries duplicated those design innovations independently, reverse-engineered them from fallout analysis, or acquired them by espionage. === Lawrence Berkeley === The first systematic exploration of nuclear weapon design concepts took place in mid-1942 at the University of California, Berkeley. Important early discoveries had been made at the adjacent Lawrence Berkeley Laboratory, such as the 1940 cyclotron-based production and isolation of plutonium. A Berkeley professor, J. Robert Oppenheimer, had just been hired to run the nation's secret bomb design effort. His first act was to convene the 1942 summer conference. By the time he moved his operation to the new secret town of Los Alamos, New Mexico, in the spring of 1943, the accumulated wisdom on nuclear weapon design consisted of five lectures by Berkeley professor Robert Serber, transcribed and distributed as the (classified but now fully declassified and widely available online as a PDF) Los Alamos Primer. The Primer addressed fission energy, neutron production and capture, nuclear chain reactions, critical mass, tampers, predetonation, and three methods of assembling a bomb: gun assembly, implosion, and "autocatalytic methods", the one approach that turned out to be a dead end. 
=== Los Alamos === At Los Alamos, it was found in April 1944 by Emilio Segrè that the proposed Thin Man Gun assembly type bomb would not work for plutonium because of predetonation problems caused by Pu-240 impurities. So Fat Man, the implosion-type bomb, was given high priority as the only option for plutonium. The Berkeley discussions had generated theoretical estimates of critical mass, but nothing precise. The main wartime job at Los Alamos was the experimental determination of critical mass, which had to wait until sufficient amounts of fissile material arrived from the production plants: uranium from Oak Ridge, Tennessee, and plutonium from the Hanford Site in Washington. In 1945, using the results of critical mass experiments, Los Alamos technicians fabricated and assembled components for four bombs: the Trinity Gadget, Little Boy, Fat Man, and an unused spare Fat Man. After the war, those who could, including Oppenheimer, returned to university teaching positions. Those who remained worked on levitated and hollow pits and conducted weapon effects tests such as Crossroads Able and Baker at Bikini Atoll in 1946. All of the essential ideas for incorporating fusion into nuclear weapons originated at Los Alamos between 1946 and 1952. After the Teller-Ulam radiation implosion breakthrough of 1951, the technical implications and possibilities were fully explored, but ideas not directly relevant to making the largest possible bombs for long-range Air Force bombers were shelved. Because of Oppenheimer's initial position in the H-bomb debate, in opposition to large thermonuclear weapons, and the assumption that he still had influence over Los Alamos despite his departure, political allies of Edward Teller decided he needed his own laboratory in order to pursue H-bombs. By the time it was opened in 1952, in Livermore, California, Los Alamos had finished the job Livermore was designed to do. === Lawrence Livermore === With its original mission no longer available, the Livermore lab tried radical new designs that failed. Its first three nuclear tests were fizzles: in 1953, two single-stage fission devices with uranium hydride pits, and in 1954, a two-stage thermonuclear device in which the secondary heated up prematurely, too fast for radiation implosion to work properly. Shifting gears, Livermore settled for taking ideas Los Alamos had shelved and developing them for the Army and Navy. This led Livermore to specialize in small-diameter tactical weapons, particularly ones using two-point implosion systems, such as the Swan. Small-diameter tactical weapons became primaries for small-diameter secondaries. Around 1960, when the superpower arms race became a ballistic missile race, Livermore warheads were more useful than the large, heavy Los Alamos warheads. Los Alamos warheads were used on the first intermediate-range ballistic missiles, IRBMs, but smaller Livermore warheads were used on the first intercontinental ballistic missiles, ICBMs, and submarine-launched ballistic missiles, SLBMs, as well as on the first multiple warhead systems on such missiles. In 1957 and 1958, both labs built and tested as many designs as possible, in anticipation that a planned 1958 test ban might become permanent. By the time testing resumed in 1961 the two labs had become duplicates of each other, and design jobs were assigned more on workload considerations than lab specialty. Some designs were horse-traded. 
For example, the W38 warhead for the Titan I missile started out as a Livermore project, was given to Los Alamos when it became the Atlas missile warhead, and in 1959 was given back to Livermore, in trade for the W54 Davy Crockett warhead, which went from Livermore to Los Alamos. Warhead designs after 1960 took on the character of model changes, with every new missile getting a new warhead for marketing reasons. The chief substantive change involved packing more fissile uranium-235 into the secondary, as it became available with continued uranium enrichment and the dismantlement of the large high-yield bombs. Starting with the Nova facility at Livermore in the mid-1980s, nuclear design activity pertaining to radiation-driven implosion was informed by research with indirect drive laser fusion. This work was part of the effort to investigate Inertial Confinement Fusion. Similar work continues at the more powerful National Ignition Facility. The Stockpile Stewardship and Management Program also benefited from research performed at NIF. == Explosive testing == Nuclear weapons are in large part designed by trial and error. The trial often involves test explosion of a prototype. In a nuclear explosion, a large number of discrete events, with various probabilities, aggregate into short-lived, chaotic energy flows inside the device casing. Complex mathematical models are required to approximate the processes, and in the 1950s there were no computers powerful enough to run them properly. Even today's computers and simulation software are not adequate. It was easy enough to design reliable weapons for the stockpile. If the prototype worked, it could be weaponized and mass-produced. It was much more difficult to understand how it worked or why it failed. Designers gathered as much data as possible during the explosion, before the device destroyed itself, and used the data to calibrate their models, often by inserting fudge factors into equations to make the simulations match experimental results. They also analyzed the weapon debris in fallout to see how much of a potential nuclear reaction had taken place. === Light pipes === An important tool for test analysis was the diagnostic light pipe. A probe inside a test device could transmit information by heating a plate of metal to incandescence, an event that could be recorded by instruments located at the far end of a long, very straight pipe. The picture below shows the Shrimp device, detonated on March 1, 1954, at Bikini, as the Castle Bravo test. Its 15-megaton explosion was the largest ever by the United States. The silhouette of a man is shown for scale. The device is supported from below, at the ends. The pipes going into the shot cab ceiling, which appear to be supports, are actually diagnostic light pipes. The eight pipes at the right end (1) sent information about the detonation of the primary. Two in the middle (2) marked the time when X-rays from the primary reached the radiation channel around the secondary. The last two pipes (3) noted the time radiation reached the far end of the radiation channel, the difference between (2) and (3) being the radiation transit time for the channel. From the shot cab, the pipes turned horizontally and traveled 2.3 km (7,500 ft) along a causeway built on the Bikini reef to a remote-controlled data collection bunker on Namu Island. 
While x-rays would normally travel at the speed of light through a low-density material like the plastic foam channel filler between (2) and (3), the intensity of radiation from the exploding primary creates a relatively opaque radiation front in the channel filler, which acts like a slow-moving logjam to retard the passage of radiant energy. While the secondary is being compressed via radiation-induced ablation, neutrons from the primary catch up with the x-rays, penetrate into the secondary, and start breeding tritium via the third reaction noted in the first section above. This 6Li + n reaction is exothermic, producing 5 MeV per event. The spark plug has not yet been compressed and thus remains subcritical, so no significant fission or fusion takes place as a result. If enough neutrons arrive before implosion of the secondary is complete, though, the crucial temperature differential between the outer and inner parts of the secondary can be degraded, potentially causing the secondary to fail to ignite. The first Livermore-designed thermonuclear weapon, the Morgenstern device, failed in this manner when it was tested as Castle Koon on April 7, 1954. The primary ignited, but the secondary, preheated by the primary's neutron wave, suffered what was termed an inefficient detonation; thus, a weapon with a predicted one-megaton yield produced only 110 kilotons, of which merely 10 kt were attributed to fusion. These timing effects, and any problems they cause, are measured by light-pipe data. The mathematical simulations which they calibrate are called radiation flow hydrodynamics codes, or channel codes. They are used to predict the effect of future design modifications. It is not clear from the public record how successful the Shrimp light pipes were. The unmanned data bunker was far enough back to remain outside the mile-wide crater, but the 15-megaton blast, two and a half times as powerful as expected, breached the bunker by blowing its 20-ton door off the hinges and across the inside of the bunker. (The nearest people were 32 kilometres (20 mi) farther away, in a bunker that survived intact.) === Fallout analysis === The most interesting data from Castle Bravo came from radio-chemical analysis of weapon debris in fallout. Because of a shortage of enriched lithium-6, 60% of the lithium in the Shrimp secondary was ordinary lithium-7, which doesn't breed tritium as easily as lithium-6 does. But it does breed lithium-6 as the product of an (n, 2n) reaction (one neutron in, two neutrons out), a known fact, but with unknown probability. The probability turned out to be high. Fallout analysis revealed to designers that, with the (n, 2n) reaction, the Shrimp secondary effectively had two and a half times as much lithium-6 as expected. The tritium, the fusion yield, the neutrons, and the fission yield were all increased accordingly. As noted above, Bravo's fallout analysis also told the outside world, for the first time, that thermonuclear bombs are more fission devices than fusion devices. A Japanese fishing boat, Daigo Fukuryū Maru, steamed home with enough fallout on her decks to allow scientists in Japan and elsewhere to determine, and announce, that most of the fallout had come from the fission of U-238 by fusion-produced 14 MeV neutrons. === Underground testing === The global alarm over radioactive fallout, which began with the Castle Bravo event, eventually drove nuclear testing literally underground. The last U.S. 
above-ground test took place at Johnston Island on November 4, 1962. During the next three decades, until September 23, 1992, the United States conducted an average of 2.4 underground nuclear explosions per month, all but a few at the Nevada Test Site (NTS) northwest of Las Vegas. The Yucca Flat section of the NTS is covered with subsidence craters resulting from the collapse of terrain over radioactive caverns created by nuclear explosions. After the 1974 Threshold Test Ban Treaty (TTBT), which limited underground explosions to 150 kilotons or less, warheads like the half-megaton W88 had to be tested at less than full yield. Since the primary must be detonated at full yield in order to generate data about the implosion of the secondary, the reduction in yield had to come from the secondary. Replacing much of the lithium-6 deuteride fusion fuel with lithium-7 hydride limited the tritium available for fusion, and thus the overall yield, without changing the dynamics of the implosion. The functioning of the device could be evaluated using light pipes, other sensing devices, and analysis of trapped weapon debris. The full yield of the stockpiled weapon could be calculated by extrapolation. == Production facilities == When two-stage weapons became standard in the early 1950s, weapon design determined the layout of the new, widely dispersed U.S. production facilities, and vice versa. Because primaries tend to be bulky, especially in diameter, plutonium is the fissile material of choice for pits, with beryllium reflectors. It has a smaller critical mass than uranium. The Rocky Flats plant near Boulder, Colorado, was built in 1952 for pit production and consequently became the plutonium and beryllium fabrication facility. The Y-12 plant in Oak Ridge, Tennessee, where mass spectrometers called calutrons had enriched uranium for the Manhattan Project, was redesigned to make secondaries. Fissile U-235 makes the best spark plugs because its critical mass is larger, especially in the cylindrical shape of early thermonuclear secondaries. Early experiments used the two fissile materials in combination, as composite Pu-Oy (plutonium and oralloy, i.e., enriched uranium) pits and spark plugs, but for mass production, it was easier to let the factories specialize: plutonium pits in primaries, uranium spark plugs and pushers in secondaries. Y-12 made lithium-6 deuteride fusion fuel and U-238 parts, the other two ingredients of secondaries. The Hanford Site near Richland, Washington, operated plutonium production reactors and separation facilities during World War II and the Cold War. Nine plutonium production reactors were built and operated there, the first being the B Reactor, which began operation in September 1944, and the last being the N Reactor, which ceased operation in January 1987. The Savannah River Site in Aiken, South Carolina, also built in 1952, operated nuclear reactors which converted U-238 into Pu-239 for pits, and converted lithium-6 (produced at Y-12) into tritium for booster gas. Since its reactors were moderated with heavy water, deuterium oxide, it also made deuterium for booster gas and for Y-12 to use in making lithium-6 deuteride. == Warhead design safety == Because even low-yield nuclear warheads have astounding destructive power, weapon designers have always recognised the need to incorporate mechanisms and associated procedures intended to prevent accidental detonation. 
=== Gun-type === It is inherently dangerous to have a weapon containing a quantity and shape of fissile material which can form a critical mass through a relatively simple accident. Because of this danger, the propellant in Little Boy (four bags of cordite) was inserted into the bomb in flight, shortly after takeoff on August 6, 1945. This was the first time a gun-type nuclear weapon had ever been fully assembled. If the weapon falls into water, the moderating effect of the water can also cause a criticality accident, even without the weapon being physically damaged. Similarly, a fire caused by an aircraft crashing could easily ignite the propellant, with catastrophic results. Gun-type weapons have always been inherently unsafe. === In-flight pit insertion === Neither of these effects is likely with implosion weapons since there is normally insufficient fissile material to form a critical mass without the correct detonation of the lenses. However, the earliest implosion weapons had pits so close to criticality that accidental detonation with some nuclear yield was a concern. On August 9, 1945, Fat Man was loaded onto its airplane fully assembled, but later, when levitated pits made a space between the pit and the tamper, it became feasible to use in-flight pit insertion. The bomber would take off with no fissile material in the bomb. Some older implosion-type weapons, such as the US Mark 4 and Mark 5, used this system. In-flight pit insertion will not work with a hollow pit in contact with its tamper. === Steel ball safety method === One method used to decrease the likelihood of accidental detonation employed metal balls. The balls were emptied into the pit: this prevented detonation by increasing the density of the hollow pit, thereby preventing symmetrical implosion in the event of an accident. This design was used in the Green Grass weapon, also known as the Interim Megaton Weapon, which was used in the Violet Club and Yellow Sun Mk.1 bombs. === Chain safety method === Alternatively, the pit can be "safed" by having its normally hollow core filled with an inert material such as a fine metal chain, possibly made of cadmium to absorb neutrons. While the chain is in the center of the pit, the pit cannot be compressed into an appropriate shape to fission; when the weapon is to be armed, the chain is removed. Similarly, although a serious fire could detonate the explosives, destroying the pit and spreading plutonium to contaminate the surroundings, as has happened in several weapons accidents, it could not cause a nuclear explosion. === One-point safety === While the firing of one detonator out of many will not cause a hollow pit to go critical, especially a low-mass hollow pit that requires boosting, the introduction of two-point implosion systems made that possibility a real concern. In a two-point system, if one detonator fires, one entire hemisphere of the pit will implode as designed. The high-explosive charge surrounding the other hemisphere will explode progressively, from the equator toward the opposite pole. Ideally, this will pinch the equator and squeeze the second hemisphere away from the first, like toothpaste in a tube. By the time the explosion envelops it, its implosion will be separated both in time and space from the implosion of the first hemisphere. The resulting dumbbell shape, with each end reaching maximum density at a different time, may not become critical. It is not possible to tell on the drawing board how this will play out. 
Nor is it possible using a dummy pit of U-238 and high-speed x-ray cameras, although such tests are helpful. For final determination, a test needs to be made with real fissile material. Consequently, starting in 1957, a year after Swan, both labs began one-point safety tests. Out of 25 one-point safety tests conducted in 1957 and 1958, seven had zero or slight nuclear yield (success), three had high yields of 300 t to 500 t (severe failure), and the rest had unacceptable yields between those extremes. Of particular concern was Livermore's W47, which generated unacceptably high yields in one-point testing. To prevent an accidental detonation, Livermore decided to use mechanical safing on the W47. The wire safety scheme described below was the result. When testing resumed in 1961, and continued for three decades, there was sufficient time to make all warhead designs inherently one-point safe, without need for mechanical safing. === Wire safety method === In the last test before the 1958 moratorium, the W47 warhead for the Polaris SLBM was found not to be one-point safe, producing an unacceptably high nuclear yield of 200 kg (440 lb) of TNT equivalent (Hardtack II Titania). With the test moratorium in force, there was no way to refine the design and make it inherently one-point safe. A solution was devised consisting of a boron-coated wire inserted into the weapon's hollow pit at manufacture. The warhead was armed by withdrawing the wire onto a spool driven by an electric motor. Once withdrawn, the wire could not be re-inserted. The wire had a tendency to become brittle during storage, and break or get stuck during arming, preventing complete removal and rendering the warhead a dud. It was estimated that 50–75% of warheads would fail. This required a complete rebuild of all W47 primaries. The oil used for lubricating the wire also promoted corrosion of the pit. === Strong link/weak link === Under the strong link/weak link system, "weak links" are constructed between critical nuclear weapon components (the "hard links"). In the event of an accident the weak links are designed to fail first, in a manner that precludes energy transfer between them. Then, if a hard link fails in a manner that transfers or releases energy, that energy cannot be transferred into other weapon components and potentially start a nuclear detonation. Hard links are usually critical weapon components that have been hardened to survive extreme environments, while weak links can be both components deliberately inserted into the system to act as a weak link and critical nuclear components that can fail predictably. An example of a weak link would be an electrical connector that contains electrical wires made from a low-melting-point alloy. During a fire, those wires would melt, breaking any electrical connection. === Permissive action link === A permissive action link is an access control device designed to prevent unauthorised use of nuclear weapons. Early PALs were simple electromechanical switches and have evolved into complex arming systems that include integrated yield control options, lockout devices and anti-tamper devices. == References == === Notes === === Bibliography === This article incorporates text from a free content work. Text taken from Nuclear Weapons FAQ: 1.6, Carey Sublette. == External links == Carey Sublette's Nuclear Weapon Archive is a reliable source of information and has links to other sources. 
Nuclear Weapons Frequently Asked Questions: Section 4.0 Engineering and Design of Nuclear Weapons The Federation of American Scientists provides solid information on weapons of mass destruction, including nuclear weapons and their effects More information on the design of two-stage fusion bombs Militarily Critical Technologies List (MCTL), Part II (1998) (PDF) from the US Department of Defense at the Federation of American Scientists website. "Restricted Data Declassification Decisions from 1946 until Present", Department of Energy report series published from 1994 until January 2001 which lists all known declassification actions and their dates. Hosted by Federation of American Scientists. The Holocaust Bomb: A Question of Time is an update of the 1979 court case USA v. The Progressive, with links to supporting documents on nuclear weapon design. Annotated bibliography on nuclear weapons design from the Alsos Digital Library for Nuclear Issues The Woodrow Wilson Center's Nuclear Proliferation International History Project or NPIHP is a global network of individuals and institutions engaged in the study of international nuclear history through archival documents, oral history interviews and other empirical sources.
Wikipedia/Nuclear_weapon_design
Design automation usually refers either to electronic design automation or to design automation in the sense of a product configurator. Extending Computer-Aided Design (CAD), automated design and Computer-Automated Design (CAutoD) are more concerned with a broader range of applications, such as automotive engineering, civil engineering, composite material design, control engineering, dynamic system identification and optimization, financial systems, industrial equipment, mechatronic systems, steel construction, structural optimisation, and the invention of novel systems. The concept of CAutoD perhaps first appeared in 1963, in the IBM Journal of Research and Development, where a computer program was written to search for logic circuits satisfying certain constraints on hardware design, and to evaluate these logics in terms of their discriminating ability over samples of the character set they were expected to recognize. More recently, traditional CAD simulation is seen as being transformed into CAutoD by biologically inspired machine learning, including heuristic search techniques such as evolutionary computation and swarm intelligence algorithms. == Guiding designs by performance improvements == To meet the ever-growing demand for quality and competitiveness, iterative physical prototyping is now often replaced by 'digital prototyping' of a 'good design', which aims to meet multiple objectives such as maximised output, energy efficiency, highest speed and cost-effectiveness. The design problem concerns both finding the best design within a known range (i.e., through 'learning' or 'optimisation') and finding a new and better design beyond the existing ones (i.e., through creation and invention). This is equivalent to a search problem in an almost certainly multidimensional (multivariate), multi-modal space with a single (or weighted) objective or multiple objectives. == Normalized objective function: cost vs. fitness == Using single-objective CAutoD as an example, if the objective function, expressed either as a cost function J ∈ [0, ∞) or, inversely, as a fitness function f ∈ (0, 1], where f = 1/(1 + J), is differentiable under practical constraints in the multidimensional space, the design problem may be solved analytically. Finding the parameter sets that result in a zero first-order derivative and that satisfy the second-order derivative conditions would reveal all local optima. Then comparing the values of the performance index of all the local optima, together with those of all boundary parameter sets, would lead to the global optimum, whose corresponding 'parameter' set will thus represent the best design. However, in practice, the optimization usually involves multiple objectives and the matters involving derivatives are a lot more complex. == Dealing with practical objectives == In practice, the objective value may be noisy or even non-numerical, and hence its gradient information may be unreliable or unavailable. This is particularly true when the problem is multi-objective. At present, many designs and refinements are mainly made through a manual trial-and-error process with the help of a CAD simulation package. Usually, such a posteriori learning or adjustments need to be repeated many times until a ‘satisfactory’ or ‘optimal’ design emerges. == Exhaustive search == In theory, this adjustment process can be automated by computerised search, such as exhaustive search. 
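As a concrete illustration of the normalized objective function and of how exhaustive search behaves, the minimal Python sketch below grids a two-parameter design space, scores each candidate with a hypothetical stand-in cost function (in practice J would come from a CAD simulation run), and converts cost to fitness with f = 1/(1 + J). The function names and numbers are placeholders for illustration and are not taken from the source.

```python
import itertools

def cost(params):
    # Hypothetical stand-in for a simulation-derived cost J >= 0;
    # here simply the squared distance from an arbitrary "ideal" design.
    x, y = params
    return (x - 0.3) ** 2 + (y + 0.7) ** 2

def fitness(j):
    # Normalize a cost j in [0, inf) to a fitness f in (0, 1].
    return 1.0 / (1.0 + j)

# Exhaustive (grid) search: with k grid points per parameter and n
# parameters there are k**n candidates to evaluate.
grid = [i / 10.0 - 1.0 for i in range(21)]   # 21 points spanning [-1, 1]
best = max(itertools.product(grid, repeat=2), key=lambda p: fitness(cost(p)))
print("best design:", best, "fitness:", round(fitness(cost(best)), 3))
```

Even at this coarse resolution the two-parameter grid already contains 441 candidates, and the count multiplies with every additional design parameter or refinement of the grid.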
As this is an exponential algorithm, it may not deliver solutions in practice within a limited period of time. == Search in polynomial time == One approach to virtual engineering and automated design is evolutionary computation such as evolutionary algorithms. === Evolutionary algorithms === To reduce the search time, the biologically inspired evolutionary algorithm (EA) can be used instead, which is a (non-deterministic) polynomial algorithm. The EA-based multi-objective "search team" can be interfaced with an existing CAD simulation package in a batch mode. The EA encodes the design parameters (encoding being necessary if some parameters are non-numerical) to refine multiple candidates through parallel and interactive search. In the search process, 'selection' is performed using 'survival of the fittest' a posteriori learning. To obtain the next 'generation' of possible solutions, some parameter values are exchanged between two candidates (by an operation called 'crossover') and new values introduced (by an operation called 'mutation'). This way, the evolutionary technique makes use of past trial information in a manner similar to that of a human designer. EA-based optimal design can start from the designer's existing design database or from an initial generation of candidate designs obtained randomly. A number of finely evolved top-performing candidates will represent several automatically optimized digital prototypes. There are websites that demonstrate interactive evolutionary algorithms for design; some allow users to evolve 3D objects online and have them 3D printed, while others do the same for 2D images. == See also == Electronic design automation Design Automation Design Automation Conference Generative design Genetic algorithm (GA) applications – automated design == References == == External links == An online interactive GA-based CAutoD demonstrator: learn step by step or watch global convergence in 2-parameter CAutoD
Wikipedia/Computer-automated_design
Design–build (or design/build, and abbreviated D–B or D/B accordingly), also known as alternative delivery, is a project delivery system used in the construction industry. It is a method to deliver a project in which the design and construction services are contracted by a single entity known as the design–builder or design–build contractor. It can be subdivided into architect-led design–build (ALDB, sometimes known as designer-led design–build) and contractor-led design–build. In contrast to "design–bid–build" (or "design–tender"), design–build relies on a single point of responsibility contract and is used to minimize risks for the project owner and to reduce the delivery schedule by overlapping the design phase and construction phase of a project. Design–build also has a single point responsibility. The design-build contractor is responsible for all work on the project, so the client can seek legal remedies for any fault from one party. The traditional approach for construction projects consists of the appointment of a designer on one side, and the appointment of a contractor on the other side. The design–build procurement route changes the traditional sequence of work. It answers the client's wishes for a single point of responsibility in an attempt to reduce risks and overall costs. Although the use of subcontractors to complete more specialized work is common, the design-build contractor remains the primary contact and primary force behind the work. It is now commonly used in many countries and forms of contracts are widely available. Design–build is sometimes compared to the "master builder" approach, one of the oldest forms of construction procedure. Comparing design–build to the traditional method of procurement, the authors of Design-build Contracting Handbook noted that: "from a historical perspective the so-called traditional approach is actually a very recent concept, only being in use approximately 150 years. In contrast, the design–build concept—also known as the "master builder" concept—has been reported as being in use for over four millennia." Although the Design-Build Institute of America (DBIA) takes the position that design–build can be led by a contractor, a designer, a developer or a joint venture, as long as a design–build entity holds a single contract for both design and construction, some architects have suggested that architect-led design–build is a specific approach to design–build. Design-build plays an important role in pedagogy, both at universities and in independently organised events such as Rural Studio or ArchiCamp. == Design–build contractor == The "design–builder" is often a general contractor, but in many cases a project is led by a design professional (architect, engineer, architectural technologist or other professional designers). Some design–build firms employ professionals from both the design and construction sector. Where the design–builder is a general contractor, the designers are typically retained directly by the contractor. Partnership or a joint venture between a design firm and a construction firm may be created on a long-term basis or for one project only. Until 1979, the AIA American Institute of Architects' code of ethics and professional conduct prohibited their members from providing construction services. However today many architects in the United States and elsewhere aspire to provide integrated design and construction services, and one approach towards this goal is design–build. 
The AIA has acknowledged that design–build is becoming one of the main approaches to construction. In 2003, the AIA endorsed "The architect's guide to design–build services", which was written to help its members acting as design–build contractors. This publication gives guidance through the different phases of the process: design services, contracts, management, insurance, and finances. == Contractor-led design–build projects: the architect's role == On contractor-led design–build projects, management is structured so that the owner works directly with a contractor who, in turn, coordinates subcontractors. Architects contribute to contractor-led design–build projects in one of several ways, with varying degrees of responsibility (where "A/E" in each diagram represents the architect/engineer): Architect as employee of contractor: The architect works for the contractor as an in-house employee. The architect still bears professional risk and is likely to have less control than in other contractor-led design–build approaches. Architect as a subcontractor: Here, the architect is one of the many subcontractors on the team led by the contractor. The architect bears similar professional risk but still with little control. Architect as second party in contractor-led integrated project delivery (IPD): The architect and contractor work together in a joint venture, both coordinating the subcontractors to get the project built. The building owner has a single contract with this joint venture. The contractor leads the joint venture, so in supervising the subs, the architect might defer to the contractor. The architect bears the same risk as they do in the traditional approach but has more control in IPD, even if they were to defer to the contractor. == Architect-led design–build projects == Architect-led design–build projects are those in which interdisciplinary teams of architects and building trades professionals collaborate in an agile management process, where design strategy and construction expertise are seamlessly integrated, and the architect, as owner-advocate, project-steward and team-leader, ensures high fidelity between project aims and outcomes. In architect-led design–build projects, the architect works directly with the owner (the client), acts as the designer and builder, coordinating a team of consultants, subcontractors and materials suppliers throughout the project lifecycle. Architects lead design–build projects in several ways, with varying degrees of responsibility (where "A/E" in each diagram represents the architect/engineer): Architect as provider of extended services: Contracted to the owner, the architect extends his or her services beyond the design phase, taking responsibility for managing the subcontractors on behalf of the owner. The architect bears similar risk but has more control over the project than in the traditional approach or on contractor-led design–build projects. Architect as primary party in architect-led integrated project delivery (IPD): Again, the architect and contractor work together in a joint venture, both coordinating the subcontractors to get the project built. Again, the building owner has a single contract with this joint venture. This time, the architect leads the joint venture, so in supervising the subs, the contractor might defer to the architect. The architect might bear more risk than they do in the traditional approach but risk is shared with the owner and the contractor, as outlined in their agreement. 
An alternative approach to effectuating this delivery structure is for the architect to contract directly with the owner to design and build the project, and then to subcontract the procurement and construction responsibilities to its allied general contractor, who enters into further subcontracts with the trades. This is a difference in form, rather than in substance, because the business and legal terms of the agreement between the architect and the general contractor may be the same regardless of whether they are characterized as a joint venture or as a subcontract. It is the "flip side of the coin" of the contractor-led approach described above in which the general contractor subcontracts the design to the architect. Architect as full-service leader of the design–build process: Contracted to the owner, the architect offers full service to the owner, taking responsibility for managing the subcontractors, consultants and vendors, and involving them throughout the project, start to finish, from design through construction. The architect's role shifts during the project, from designer to site supervisor (effectively taking the role of a general contractor), but continues to monitor the project vision, and is able to call upon subcontractors' construction expertise throughout. The architect bears the greatest risk but also has more control over the project than in either the traditional approach, or in the contractor-led and other architect-led design–build projects. == Contracts == A single set of integrated contracts combining design and construction responsibilities, rather than two discrete contracts for each, acknowledges the interdependence of the architects' and construction trades' project responsibilities, and reduces the likelihood of disputes. == Design–build institutes == In 1993, the Design-Build Institute of America (DBIA) was formed. Its membership is composed of design and construction industry professionals as well as project owners. DBIA promotes the value of design–build project delivery and teaches the effective integration of design and construction services to ensure success for owners and design and construction practitioners. The Design-Build Institute of America is an organization that defines, teaches and promotes best practices in design–build. The Canadian Design-Build Institute (CDBI) describes itself as "The recognized voice of Design-Build practitioners in Canada, promoting and enhancing the proper use of Design-Build method of procurement and contracting". === Advocacy === Not all design–build projects are alike. Here, there is a distinction between design–build projects led by contractors and those led by architects. Architect-led Design Build is a form of 'design–build' that, according to the DBIA, has been rapidly gaining market share in the United States over the past 15 years. The Design Build Institute of America describes the design–build process as follows: Taking singular responsibility, the design–build team is accountable for cost, schedule and performance. Under a single contract and with reduced administrative paperwork, clients can focus on the project rather than managing disparate contracts. And, by closing warranty gaps, building owners also virtually eliminate litigation claims. A 2005 DBIA chart showed the uptake of design–build methods in non-residential design and construction in the United States. Architect-led design–build is sometimes known by the more generic name "designer-led design–build". 
Although employed primarily by architects, architectural technologists and other architectural professionals, the design–build structure works similarly for interior design projects led by an interior designer who is not an architect, and also for engineering projects where the design–build team is led by a professional structural, civil, mechanical or other engineer. In addition, it is common for the design professional who leads the design–build team to create a separate corporation or similar business entity through which the professional performs the construction and other related non-professional services. In 2011, design–build continued to gain ground as a significant trend in design and construction. In March 2011, industry consultants ZweigWhite published "Design-Bid-Build meets the opposition". In it, they suggest that while Design-Bid-Build "still rules", the traditional approach is losing favor as "alternative project delivery methods threaten [the] design-bid-build model." While not referencing the architect-led design–build approach specifically, the article states that D/B already accounts for 27% of projects, according to their 2010 Project Management Survey, and goes on to argue that "the emerging trends in delivery seem to point to a return to the primordial concept of the masterbuilder, as exemplified by D/B and IPD [Integrated Project Delivery]." According to the DBIA, the design–build approach offers advantages to owners, including: "One team, one contract, one unified flow of work from initial concept through completion." == Debate on the merits of design–build vs. design–bid–build == The rise of design–build project delivery has threatened the traditional hierarchies and silos of the design and construction industry. As a result, a debate has emerged over the value of design–build as a method of project delivery. Critics of the design–build approach claim that design–build limits the clients' involvement in the design and allege that contractors often make design decisions outside their area of expertise. They also suggest that a designer—rather than a construction professional—is a better advocate for the client or project owner and/or that by representing different perspectives and remaining in their separate spheres, designers and builders ultimately create better buildings. Proponents of design–build counter that design–build saves time and money for the owner, while providing the opportunity to achieve innovation in the delivered facility. They note that value is added because design-build brings value engineering into the design process at the onset of a project. Design–build allows the contractor, engineers and specialty trade contractors (subcontractors) to propose best-value solutions for various construction elements before the design is complete. Design–build brings all members of a project team together early in the process to identify and address issues of cost, schedule and constructability. Proponents suggest that as a result, design-build alleviates conflict between architects and contractors and reduces owner risk for design errors. They argue that once design is finalized and construction begins, the greatest opportunity to achieve cost savings has already been lost, and the potential for design errors is greater, leading to change orders that create cost growth and schedule delays. Proponents note that design–build allows owners to avoid being placed directly between the architect/engineer and the contractor. 
Under design–bid–build, the owner takes on significant risks because of that position. Design–build places the responsibility for design errors and omissions on the design–builder, relieving the owner of major legal and managerial responsibilities. The burden of these costs and associated risks is transferred to the design–build team. The cost and schedule reduction and decreased litigation associated with design–build project delivery have been demonstrated repeatedly. Research on selecting project delivery systems by Victor Sanvido and Mark Konchar of Pennsylvania State University found that design–build projects are delivered 33.5% faster than projects that are designed and built under separate contracts (design-bid-build). Sanvido and Konchar also showed that design–build projects are constructed 12% faster and have a unit cost that is 6.1% lower than design-bid-build projects. Similar cost and time savings were found in a comparison study of design–build and design-bid-build for the water/wastewater construction industry, a peer-reviewed paper authored by Smith Culp Consulting that will be published in July 2011 by the American Society of Civil Engineers. A benchmarking and claims study by Victor O. Schinnerer, one of the world's largest firms underwriting professional liability and specialty insurance programs, found that, from 1995 to 2004, only 1.3% of claims against A/E firms were made by design–build contractors. Advantages have been summarized as: Efficiency: Typically led by contractors, 'design–build' has evolved as an efficient way to deliver projects primarily where the building project goals are straightforward, either constrained by budget or with an outcome prescribed by functional requirements (for example, a highway, sports facility, or brewery). Construction industry commentators have described design–build as a high performance 'construction project delivery system', a dynamic approach to making buildings that presents an alternative to the traditional design-bid-build approach. Single-source: Design–build is growing because of the advantages of single-source management: Unlike traditional design-bid-build, it allows the owner to contract with just one party who acts as a single point of contact, is responsible for delivering the project and coordinates the rest of the team. Depending on the phasing of the project, there may be multiple sequential contracts between the owner and the design–builder. The owner benefits because if something turns out to be wrong with the project, there is a single entity that is responsible for fixing the problem, rather than a separate designer and constructor each blaming the other. === Advantages for less-prescriptive projects === Architect-led design–build is suited primarily to less prescriptive architectural projects (private residences, non-profit institutions, museums), for the efficiencies it yields and the sophisticated design interpretation it affords, particularly: Where the primary project goals are design-driven or visionary rather than prescribed by budgetary constraint or functional requirements Where the project is specifically artistically or creatively driven ("capital-A" architecture), in a way that traditionally yields the highest level of cost overruns. Where the efficiencies of the design–build approach and an architect's interpretive skill are equally important These less prescriptive projects need not be stuck with the "broken buildings and busted budgets" described by Barry Lepatner. 
Rather, the less prescriptive the project, the more the client needs an architect to steward an emergent design from vision to completion. So it follows that for the broadest range of building projects, the rigors of architect-led design–build are compelling and preferable where design is of paramount importance to the client. === Recursive knowledge === The process and the knowledge it produces are recursive: subcontractors are engaged early and often in an architect-led design–build project to assess efficiencies, opportunity costs, payback rates and quality options, and their input informs overall design decisions from the outset. Cost-benefit is also a constant consideration that informs design decisions from the outset. Building performance is measured early too, so that trade-offs between budget, schedule, functionality and usability can inform specification and continuous refinement of the design. Architects engaged in this dynamic process understand and keep up to date with the potential of contemporary technology and materials available to building professionals, and translate what they learn into their design work. This knowledge is fed back not just to the specific project but can be shared with other project teams, throughout a studio, or more broadly to the profession, and can become an active source of insight in and of itself. == Growth of design–build method == A 2011 study analyzing the design–build project delivery method in the United States shows design–build was used on about 40 percent of non-residential construction projects in 2010, a ten percent increase since 2005. The study was commissioned by the Design-Build Institute of America (DBIA) and was completed by RSMeans Reed Construction Data Market Intelligence. A study from the US Department of Transportation claims that: "Design-build delivery has been steadily increasing in the U.S. public building sector for more than 10 years, but it is still termed experimental in transportation. To date, under Special Experimental Project 14 (SEP-14) the FHWA has approved the use of design–build in more than 150 projects, representing just over half of the States. The European countries visited have used design–build delivery for a longer time than the United States and provided the scan team with many valuable insights. The primary lessons learned on this scan tour relate to the types of projects using design–build, the use of best-value selection, percentage of design in the solicitation, design and construction administration, third-party risks, the use of warranties, and the addition of maintenance and operation to design–build contracts." == Criticisms of design–build == During the design–build procedure, the contractor is deciding on design issues as well as issues related to cost, profits and time exigencies. Whilst the traditional method of construction procurement dissociates the designers from the contractors' interests, design–build does not. On these grounds it is considered that the design–build procedure is poorly adapted to projects that require complex designs for technical, programmatic or aesthetic purposes. If the designer/architect is 'kept' by the construction company, they probably will never push the envelope as to what might be possible. A notable design–build project that received significant criticism, not only for excessive cost but for environmental issues, was the Belmont Learning Center. The scandal involved alleged contaminated soil that caused significant delays and massive cost overruns. 
In Los Angeles, District Attorney Steve Cooley, who investigated the Los Angeles Unified School District's Belmont project, produced a final investigative report, released March 2003. This report concluded that the design–build process caused a number of issues relating to the Belmont scandal: Design–build does not make use of competitive bidding where prospective builders bid on the same design. Criteria used to select the contractor are subjective and difficult to evaluate and to justify later. The design and price selected arouse public suspicion, true or not. This can lead to loss of public confidence. The design brief is subject to different interpretations from both the client and contractor, creating a conflict of interest. It concluded that the "design–build" approach and "mixed-use concept" together caused controversy, uncertainty, and complexity in the Belmont project, which helped increase the potential for project failure. While the Belmont investigation cleared the Los Angeles Unified School District of any criminal wrongdoing, the task force recommended strict oversight, including written protocols, a vigorous Office of the Inspector General, and other measures, if the district decided to continue to use the design–build approach. During the period in question, the ex-Superintendent of LAUSD, Ramon C. Cortines, working with the LAUSD Board of Education, whose president was Monica Garcia, actively tried to cut the Office of Inspector General by 75% (compromising on 25%) and subsequently removed the Inspector General Jerry Thornton after he produced critical audits that showed misuse of construction funds. Others have argued that architect-led design–build still involves: Typical project management issues (establishing liability, writing contracts, scoping estimates and schedule) or Variation across different states' licensing laws or Conflict of interest and ethical issues It also imposes: Greater business and financial risks associated with the architect taking on general contractor responsibilities Changes to the way architects do business, so that they: establish a construction company as a separate corporation that signs a separate construction contract, so they are able to insure and simplify liability insurance coverage; either have, or are able to acquire, the skills of a design–builder; recognize the parties' different incentives; and modify how they prepare contract documents, relying more on performance specifications than they do currently, to facilitate substitutions for the benefit of the constructor. == Project examples == Examples of contractor-led design–build projects include: Dena'ina Civic & Convention Center, Anchorage, AK, Neeser Construction, Inc.: It won the 2010 DBIA Design Build Merit Award for a public sector project over $50 million. Walter Cronkite School of Journalism and Mass Communication: Phoenix, AZ, Ehrlich Architects. It won the 2009 DBIA National Design Build Award for a public sector project over $25 million. Federal Law Enforcement Training Center Dormitory, North Charleston, SC, The Korte Company. It won the 2012 DBIA Design Build Merit Award for a public sector project over $15 million. 
== See also == Architectural management Design–bid–build == References == == Further reading == "A New Solution For Public Construction Projects: Sequential Designer-Led Design-Build", by Mark Friedlander "Design-Build and Integrated Project Delivery: Narrowing the Gap" American Institute of Architects (AIA), issue 21, August 21, 2009 When Is Hiring Professionals Worth It? The Bottom Line: It Depends.; Architects vs. Contractors vs. Design-Build Firms . . . There Are Several Options and No Easy Answers, by Denise DiFulco, The Washington Post, July 17, 2008 'Design-Build' Trend Sweeps Redo Market; 2-Step Approach Unites Architects, Contractors by Ann Marie Moriarty, The Washington Post, March 27, 2002 == External links == The Design/Build Institute of America The Design/Build Institute of Canada Design-Build Definition
Wikipedia/Architect-led_design–build
Design theory is a subfield of design research concerned with various theoretical approaches towards understanding and delineating design principles, design knowledge, and design practice. == History == Design theory has been approached and interpreted in many ways, from designers' personal statements of design principles, through constructs of the philosophy of design to a search for a design science. The essay "Ornament and Crime" by Adolf Loos from 1908 is one of the early 'principles' design-theoretical texts. Others include Le Corbusier's Vers une architecture (1923), and Victor Papanek's Design for the real world (1972). In a 'principles' approach to design theory, the De Stijl movement (founded in 1917) promoted a geometrically abstract, "ascetic" form of purism that was limited to functionality. This modernist attitude underpinned the Bauhaus movement (1919 onwards). Principles were drawn up for design that were applicable to all areas of modern aesthetics. For an introduction to the philosophy of design see the article by Per Galle at the Royal Danish Academy. An example of early design science was Altshuller's Theory of inventive problem solving, known as TRIZ, which originated in the Soviet Union in the 1940s. Herbert Simon's 1969 The sciences of the artificial developed further foundations for a science of design. Since then the further development of fields such as design methods, design research, design science, design studies and design thinking has promoted a wider understanding of design theory. == See also == Design history Design research Design science == References == == Sources == Adolf Loos, Ornament and Crime, 1908 Walter Gropius, The capacity of the Bauhaus idea, 1922 Raymond Loewy, The Mayan threshold, 1951 Roland Barthes, Mythologies, 1957, Frankfurt am Main, Suhrkamp, 2003 (in 1964) ISBN 3-518-12425-0 [Excerpt from: Mythologies, 1957] Tomás Maldonado, New developments in the industry, 1958 Marshall McLuhan, The medium is the message, 1964 Abraham Moles, The crisis of functionalism, 1968 Herbert A. Simon, The Science of Design, 1969 Horst Rittel, Dilemmas in a general theory of planning, 1973 Lucius Burckhardt, design is invisible, 1980 Annika Frye, Design und Improvisation: Produkte, Prozesse und Methoden, transcript, Bielefeld, 2017 ISBN 978-3837634938 Maurizio Vitta, The Meaning of Design, 1985 Andrea Branzi, We are the primitives, 1985 Dieter Rams, Ramsifikation, 1987 Maurizio Morgantini, Man Confronted by the Third Technological Generation, 1989 Otl Aicher, Bauhaus and Ulm, 1991 Gui Bonsiepe, On Some virtues of Design Claudia Mareis, design as a knowledge culture, 2011 Bruce Sterling, today Tomorrow composts, 2005 Tony Fry, Design Beyond the Limits, 2011 Tom Bieling, Design (&) Activism – Perspectives on Design as Activism and Activism as Design. Mimesis, Milano, 2019, ISBN 978-88-6977-241-2 Nigel Cross, design thinking, Berg, Oxford, 2011 ISBN 9781847886361 Victor Margolin, The Politics of the Artificial: Essays on Design and Design Studies, 2002 Yana Milev, D.A.: A Transdisciplinary Handbook of Design Anthropology, 2013 Michael Schulze, concept and concept of the work. The sculptural design in architectural education, Zurich vdf, Hochschulverlag AG at the ETH Zurich, 2013, ISBN 978-3-7281-3481-3 Dieter Pfister, Atmospheric style. 
On the importance of atmosphere and design for a socially sustainable interior design, Basel, 2013, ISBN 978-3-906129-84-6 Tim Parsons, Thinking: Objects, Contemporary Approaches to Product Design (AVA Academia Advanced), July 2009, ISBN 978-2940373741 == External links == http://backspace.com/notes/2009/07/design-manifestos.php
Wikipedia/Design_theory
Video game design is the process of designing the rules and content of video games in the pre-production stage and designing the gameplay, environment, storyline and characters in the production stage. Some common video game design subdisciplines are world design, level design, system design, content design, and user interface design. Within the video game industry, video game design is usually just referred to as "game design", which is a more general term elsewhere. The video game designer is like the director of a film; the designer is the visionary of the game and controls the artistic and technical elements of the game in fulfillment of their vision. However, with complex games, such as MMORPGs or a big-budget action or sports title, designers may number in the dozens. In these cases, there are generally one or two principal designers and multiple junior designers who specify subsets or subsystems of the game. As the industry has aged and embraced alternative production methodologies such as agile, the role of a principal game designer has begun to separate: some studios emphasize the auteur model while others emphasize a more team-oriented model. In larger companies like Electronic Arts, each aspect of the game (control, level design) may have a separate producer, lead designer and several general designers. Video game design requires artistic and technical competence, and sometimes writing skills as well. Historically, video game programmers have sometimes comprised the entire design team. This was the case for such noted designers as Sid Meier, John Romero, Chris Sawyer and Will Wright. A notable exception to this policy was Coleco, which from its very start separated the function of design and programming. As video games became more complex and computers and consoles became more powerful, the job of the game designer became separate from that of the lead programmer. Soon, game complexity demanded team members focused on game design. A number of early veterans chose the game design path eschewing programming and delegating those tasks to others. == Overview == Video game design starts with an idea, often a variation or modification on an existing concept. The game idea will fall within one or several genres and designers will often experiment with mixing genres. The game designer usually produces an initial game proposal document containing the concept, gameplay, feature list, setting and story, target audience, requirements and schedule, staff and budget estimates. Multiple design decisions are made during the course of a game's development; it is the responsibility of the designer to decide which elements should be implemented, based on, for example, consistency with the game's vision, budget or hardware limitations. Design changes will have a significant impact on required resources. The designer may use scripting languages to implement and preview design ideas without necessarily modifying the game's codebase. A game designer often plays video games and demos to follow the market's development. Over time, it has become common for a game designer's name to be misleadingly given an undue amount of association with the game, neglecting the rest of the development team. This is in stark contrast to the industry's origins, when creators were often given little to no recognition. Coincidentally, this lack of credit led Warren Robinett to create the first Easter egg in a video game. 
Funding, traditionally provided by game publishers, who may have specific expectations from a game, must be taken into account, as most video games are market-driven — developed to sell for profit. However, if financial issues do not influence the designer's decisions, the game can become design- or designer-driven; but few games are designed this way, though it is becoming more common among indie game developers, alongside alternative sources of funding like Early Access or crowdfunding. Alternatively, a game may be technology-driven, such as Quake (1996), to show off a particular hardware achievement or to market the game engine. Finally, a game may be art-driven, such as Myst (1993) and Journey (2012), mainly to show off impressive visuals designed by artists. In Rules of Play (2004), Katie Salen and Eric Zimmerman write: A game designer is a particular kind of designer, much like a graphic designer, industrial designer or architect. A game designer is not necessarily a programmer, visual designer or project manager, although sometimes he or she can also play these roles in the creation of a game. A game designer might work alone or as part of a larger team. A game designer might create card games, social games, video games or any other kind of game. The focus of a game designer is designing game play, conceiving and designing rules and structures that result in an experience for players. Thus game design, as a discipline, requires a focus on games in and of themselves. Rather than placing games in the service of another field such as sociology, literary criticism, or computer science, our aim is to study games within their own disciplinary space. Because game design is an emerging discipline, we often borrow from other areas of knowledge — from mathematics and cognitive science; from semiotics and cultural studies. We may not borrow in the most orthodox manner, but we do so in the service of helping to establish a field of game design proper. == Game designer == A game designer is a person who designs gameplay, conceiving and designing the rules and structure of a game. Many designers start their careers in testing departments, other roles in game development or in classroom settings, where mistakes by others can be seen first-hand. A lead designer coordinates the work of other designers and is the main visionary of the game. The lead designer ensures team communication, makes large design decisions and presents the design outside of the team. Often the lead designer is technically and artistically astute. Keeping well-presented documentation also falls within the lead designer's responsibilities. A lead designer may be the founder of a game development company or a promoted employee. A game mechanics designer or systems designer designs and balances the game's rules. Level designer or environment designer is a position that has become prominent in recent years; a level designer is responsible for creating the game environment, levels and missions. Planner is a term used in the Japanese video game industry where game designers are typically credited as planners. === Compensation === In 2010, a game designer with more than six years of experience earned an average of US$65,000 (£44,761.22 sterling), US$54,000 (£37,186.24) with three to six years of experience and $44,000 (£30,299.90) with less than 3 years of experience. Lead designers earned $75,000 (£51,647.56) with three to six years of experience and $95,000 (£65,420.24) with more than six years of experience. 
In 2013, a game designer with less than 3 years of experience earned, on average, $55,000 (£37,874.88). A game designer with more than 6 years of experience made, on average, $105,000 (£72,306.58). The average salary of these designers varies depending on their region. As of 2015, the salary of experienced workers had shifted to approximately US$87,000 (£59,911.17). As of January 17, 2020, the average annual pay for a game designer in the United States was $130,000. == Disciplines == === World design === World design is the creation of a backstory, setting and theme for the game; often done by a lead designer. World design can also be the creation of a universe or a map, as well as topics or areas that are likely to be pursued by the player. The world map serves as a reference for the creation of everything else, as it shows where things are located and supports logistical design decisions throughout the game. World design shapes the direction the game takes. === System design === System design is the creation of game rules and underlying mathematical patterns. System design is the enacted simulation of a game designed to interact or react with the player. The "experience" a player has with a game is attributed to how the game's system is designed. A complex system with depth leads to a more unpredictable chain of events that immerses the player in the video game. === Content design === Content design is the creation of characters, items, puzzles, missions, or any aspect of the game that is not required for it to function properly and meet the minimum viable product standard. In essence, content is the complexity added to a minimum viable product to increase its value. === Game writing === Game writing involves writing dialogue, text and story. This is one of the first steps in making a video game and encompasses a number of different elements of the process. Writing in video games also includes the ways in which the written material is presented: voice acting, text, picture editing and music are all elements of game writing. === Level design === Level design is the construction of the game world's levels and their features. Level design makes use of a range of different fields to create a game world. Lighting, space, framing, color and contrast are used to draw a player's attention. A designer can then use these elements to guide or direct the player in a specific direction through the game world or mislead them. === User interface design === User interface (UI) design deals with the construction of the user interaction and feedback interface, such as menus or heads-up displays. The user interface also incorporates game mechanics design. Deciding how much information to give the player and in what way allows the designer to inform the player about the world, or perhaps leave them uninformed. Another aspect to consider is the method of input a game will use and deciding to what degree a player can interact with a game with these inputs. These choices have a profound effect on the mood of the game, as they directly affect the player in both noticeable and subtle ways. User interface design in video games has unique goals. A conscious decision has to be made regarding the amount of information to relay to the player. However, the UI in games does not have to be absolutely streamlined. Players expect challenges and are willing to accept them as long as the experience is sufficiently rewarding. By the same token, navigating or interacting with a game's UI can be satisfying without needing to be effortless. 
=== Audio design === Audio design involves the process of creating or incorporating all of the sounds that are in the game, like music, sound effects or voice acting. This includes the theme song and jingles used in title screens and menus. === User experience design === The disciplines listed above all combine to form the discipline of game feel. It ensures that the flow of the game and the user interaction with the game elements are functioning smoothly. == Game elements == === Narrative === Numerous games have narrative elements which give a context to an event in a game, making the activity of playing it less abstract and enhancing its entertainment value, although narrative elements are not always clearly present, or present at all. The original version of Tetris is an example of a game apparently without narrative. Some narratologists claim that all games have a narrative element. Some go further and claim that games are essentially a form of narrative. Narrative in practice can be the starting point for the development of a game or can be added to a design that started as a set of game mechanics. === Gameplay === Gameplay comprises the interactive aspects of video game design. Gameplay involves player interaction with the game, usually for the purpose of entertainment, education or training. == Design process == === Conceptualization === The design process varies from designer to designer, and companies have different formal procedures and philosophies. The typical "textbook" approach is to start with a concept or a previously completed game and from there create a game design document. This document is intended to map out the complete game design and acts as a central resource for the development team. This document should ideally be updated as the game evolves throughout the production process. === Role Adaptation === Designers are frequently expected to adapt to multiple roles of widely varying nature; for example, concept prototyping can be assisted with the use of pre-existing engines and tools like GameMaker Studio, Unity, Godot or Construct. Level designs might be done first on paper and again for the game engine using a 3D modeling tool. Scripting languages are used for multiple elements—AI, cutscenes, GUI, environmental processes, and a number of other behaviors and effects—that designers would want to tune without a programmer's assistance. Setting, story and character concepts require a research and writing process. Designers may oversee focus testing, write up art and audio asset lists and write game documentation. In addition to the skillset, designers are ideally clear communicators with attention to detail and the ability to delegate responsibilities appropriately. === Design Approval === Design approval in the commercial setting is a continuous process from the earliest stages until the game ships. When a new project is being discussed (either internally or as a result of dialogue with potential publishers), the designer may be asked to write a sell-sheet of short concepts, followed by a one or two-page pitch of specific features, audience, platform and other details. Designers will first meet with leads in other departments to establish agreement on the feasibility of the game given the available time, scope and budget. If the pitch is approved, early milestones focus on the creation of a fleshed-out design document. Some developers advocate a prototyping phase before the design document is written to experiment with new ideas before they become part of the design. 
=== Production and Decision-Making === As production progresses, designers are asked to make frequent decisions about elements missing from the design. The consequences of these decisions are hard to predict and often can only be determined after creating the full implementation. These are referred to as the unknowns of the design, and the faster they are uncovered, the less risk the team faces later in the production process. Outside factors such as budget cuts or changes in milestone expectations also result in cuts to the design, and while overly large cuts can take the heart out of a project, cuts can also result in a streamlined design with only the essential features, polished well. === Finalization and Quality Assurance === Towards the end of production, designers take the brunt of responsibility for ensuring that the gameplay remains at a uniform standard throughout the game, even in very long games. This task is made more difficult under "crunch" conditions, as the entire team may begin to lose sight of the core gameplay once pressured to hit a date for a finished and bug-free game. == Game design tools == Traditionally, game designers used simple tools like Word, Excel or just plain pen and paper. As the field has evolved and player agency and localization started to play a bigger role in game development, the need for professional tools has emerged for this particular field. Examples of software for narrative design and storytelling include articy:draft 3 and Twine. Tools like these often help to inform the earliest stages of the design and development process, before visual content and software development is started in earnest. There are various kinds of free 3D design software available to the public, from the mainly graphically focussed, such as Blender, to game engines and software development toolkits, such as Unreal Engine and Unity, that promote communities that self-educate as well as market 3D models and tutorials for beginners. == See also == Game art design List of video game designers List of video gaming topics List of books about video games First playable demo Educational game design Narrative Designer == References == === Sources === Adams, Ernest; Rollings, Andrew (2003). Andrew Rollings and Ernest Adams on game design. New Riders Publishing. ISBN 1-59273-001-9. Bates, Bob (2004). Game Design (2nd ed.). Thomson Course Technology. ISBN 1-59200-493-8. Bethke, Erik (2003). Game development and production. Texas: Wordware Publishing, Inc. ISBN 1-55622-951-8. Brathwaite, Brenda; Schreiber, Ian (2009). Challenges for Game Designers. Charles River Media. ISBN 978-1-58450-580-8. Moore, Michael E.; Novak, Jeannie (2010). Game Industry Career Guide. Delmar: Cengage Learning. ISBN 978-1-4283-7647-2. Oxland, Kevin (2004). Gameplay and design. Addison Wesley. ISBN 0-321-20467-0. Salen, Katie; Zimmerman, Eric (2003). Rules of Play: Game Design Fundamentals. MIT Press. ISBN 0-262-24045-9. Shahrani, Sam (April 25, 2006). "Educational Feature: A History and Analysis of Level Design in 3D Computer Games". Archived from the original on 2009-04-22. Retrieved 29 March 2010. 
== External links == Game design veteran Tom Sloper's game biz advice, including lessons on game design ACM Queue article "Game Development: Harder Than You Think" by Jonathan Blow The Art of Computer Game Design by Chris Crawford Example Game Design Document by Chris Taylor "So You Wanna Be a Game Designer" at GameSpot The Designer at the Wayback Machine (archived January 7, 2008) at Eurocom The Philosophy of Game Design (part 1) Archived 2013-11-05 at the Wayback Machine at The Escapist GDP2: Game Designs and Game Design Patterns collection Archived 2016-10-21 at the Wayback Machine hosted by Interactive Institute The Chemistry Of Game Design at Gamasutra - by Daniel Cook Daniel Cook: Game Design Theory I Wish I had Known When I Started video from YouTube Hunger games Archived 2015-05-11 at the Wayback Machine (January 2015). "A new wave of videogames offers lessons in powerlessness, scarcity and inevitable failure. What makes them so compelling?" Will Wiles, Aeon Investigating the Polish School of Video Gaming
Wikipedia/Video_game_design
Cadence Design Systems, Inc. (stylized as cādence) is an American multinational technology and computational software company. Headquartered in San Jose, California, Cadence was formed in 1988 through the merger of SDA Systems and ECAD. Initially specialized in electronic design automation (EDA) software for the semiconductor industry, the company currently makes software and hardware for designing products such as integrated circuits, systems on chips (SoCs), printed circuit boards, and pharmaceutical drugs, and also licenses intellectual property for the electronics, aerospace, defense and automotive industries, among others. == History == === 1983–1999 === Founded in 1983 in San Jose, California, Cadence Design Systems began as an electronic design automation (EDA) company named Solomon Design Automation (SDA). SDA's cofounders included James Solomon, Richard Newton, and Alberto Sangiovanni-Vincentelli. Cadence was formed by the merger of SDA and ECAD. A public company, ECAD had been co-founded by Ping Chao, Glen Antle, and Paul Huang in 1982. Cadence Design Systems was officially formed through SDA and ECAD's 1988 merger, with Joseph Costello appointed both CEO and president of the newly combined company. After the merger, Cadence began trading on the New York Stock Exchange and Costello oversaw further mergers and acquisitions. In 1989, the company acquired Gateway Design Automation for $72 million. In 1990 it acquired Automated Systems Inc., and in doing so added "board design to its existing line of chip design software." In 1991, Cadence acquired its rival Valid Logic Systems for around $200 million, its biggest acquisition yet. The revenues of the combined company were $390 million, making Cadence "the largest provider of the software used by electronic engineers to design computer chips and circuit boards," according to the New York Times. In 1996, Cadence acquired High Level Design Systems, at which point Cadence had 3,300 employees and $742 million in annual revenue. Following the resignation of Cadence's original CEO Joe Costello in 1997, Jack Harding was appointed CEO. Ray Bingham was named CEO in 1999. In 1998, Cadence purchased Ambit Design Systems, which made tools for system-on-a-chip technology, for $260 million, and in 1999 it purchased OrCAD Systems. After acquiring Quickturn Design in 1999, Cadence was described as a "white knight" for the act by the New York Times, as Quickturn had been subject to a hostile takeover attempt by Cadence's rival Mentor Graphics. === 2000–2019 === At the urging of executives such as Jim Hogan and executive vice president Penny Herscher, between 2001 and 2003, Cadence purchased a number of companies for their implementation tools, including Silicon Perspective, Verplex, and Celestry Design. The acquisitions were apparently in part to counter the 2001 purchase of Avanti by Synopsys, as Synopsys had become its primary market rival. In 2004, Mike Fister became Cadence's new CEO and president, with Ray Bingham becoming chairman. The former chairman, Donald L. Lucas, remained on the Cadence board. Between 2004 and 2007, Cadence purchased four companies, including the software developer Verisity, and in 2006, it spent $1 billion in stock buybacks. In 2007, Cadence announced it would be introducing a new chip-making process that laid wires diagonally as well as horizontally and vertically, arguing it would make its designs more efficient. In June 2007, Cadence had a market value of around $6.4 billion. 
That year, Cadence was rumored to be in talks with Kohlberg Kravis Roberts and Blackstone Group regarding a possible sale of the company. Cadence withdrew a $1.6 billion offer to purchase Mentor Graphics in 2008. Also that year, Cadence's board appointed Lip-Bu Tan as acting CEO, after the resignation of Mike Fister; Tan had served on the Cadence board of directors since 2004. In January 2009, the board of directors of Cadence voted unanimously to confirm Lip-Bu Tan as president and CEO. In 2011, it purchased Altos Design Automation. Subsequent notable acquisitions included Cosmic Circuits and Tensilica in 2013, Forte Design Systems in 2014, and the AWR Corporation in 2019. === 2020–2025 === Cadence had 9,300 employees and annual revenue of $3 billion in 2021. Most of its revenue came from licensing its software and intellectual property. In April 2021, following a Washington Post report on the use of Cadence and Synopsys technology in the People's Liberation Army's military-civil fusion efforts, U.S. legislators Michael McCaul and Tom Cotton requested that the United States Department of Commerce tighten controls on the sales of semiconductor manufacturing software. On December 15, 2021, Anirudh Devgan assumed the role of Cadence president & CEO, after having been named Cadence president in 2017. Lip-Bu Tan retired as CEO and became executive chairman; he left that position and the board in May 2023. In 2021, Cadence launched an artificial intelligence platform to streamline processor development. Although most of Cadence's customers for decades were "traditional semiconductor firms," around 40% of Cadence's revenue by 2022 came from customers who were "systems" oriented, or seeking products tailored for various industries that utilized chips in a central role. Cadence was also increasingly designing customized chips for clients and having them manufactured by third parties such as Taiwan Semiconductor Manufacturing, a practice which had become more popular in the face of worldwide chip shortages and shipping issues, according to Reuters. By late 2022, Cadence had clients such as Tesla and Apple Inc. Cadence acquired OpenEye Scientific Software for $500 million in September 2022, rebranding the company as OpenEye Cadence Molecular Sciences and making it a business unit. OpenEye signed Pfizer as a software client in October 2023. Cadence purchased various businesses from Rambus in 2023. As of September 2023, Cadence was "looking into" applying for funding from the $52 billion CHIPS and Science Act, passed in 2022 to bring more of the international semiconductor supply chain into the United States. In February 2024, Cadence "quietly stepped into the supercomputer business," according to TechRadar, when it unveiled the M1, its own supercomputer designed to run computational fluid dynamics (CFD) while utilizing AI. In June 2024, Cadence purchased BETA CAE Systems. In January 2025, Cadence announced the acquisition of Secure-IC, an embedded security IP platform provider; the acquisition was expected to close by mid-2025, following the usual regulatory approvals and other closing conditions, and to be immaterial to 2025 revenue and earnings. In 2025, the Trump administration paused the issuing of licenses for exports of Cadence software to China. 
== Products == Originally known as a creator of electronic design automation (EDA) software, the company currently develops software, hardware and intellectual property (IP) used to design chips, chiplet-style products, and printed circuit boards, while also selling hardware systems that run its chip design software. It also has tools for "electromagnetics, thermal and computational fluid dynamics in the high-tech electronics, aerospace and defense and automotive sectors," and according to Investor's Business Daily in 2023, it specializes in products for fields such as "artificial intelligence and machine learning, cloud computing, 3D technology, and AI-enabled big data analytics." Among market applications are "hyperscale computing, 5G communications, automotive, mobile, aerospace, consumer, industrial and health care." === Integrated circuit software === The company develops a number of technologies for creating custom integrated circuits. For example, its Virtuoso Platform, later renamed Virtuoso Studio, incorporates tools for designing full-custom integrated circuits. In 2019, Cadence introduced its Spectre X parallel circuit simulator, so that users could distribute time- and frequency-domain simulations across hundreds of CPUs for speed. Cadence also developed AWR, a radio frequency to millimeter wave design environment for designing 5G/wireless products. AWR is used in the communications, aerospace and defense, semiconductor, computer, and consumer electronics industries. === Digital implementation and signoff === Cadence has a number of digital implementation and signoff tools, including Genus, Innovus, Tempus and Voltus. In 2020, Cadence integrated its Innovus place and route engine and optimizer into Genus Synthesis. Stratus is Cadence's high-level synthesis tool, and is used to create RTL implementations from C, C++, or SystemC code. Other verification and signoff tools include Conformal Equivalence Checker, Joules RTL Power Solution, Quantus Extraction Solution, and Cadence's Modus DFT Software Solution. === System verification === Cadence has developed a number of formal verification products for chip design. JasperGold is a formal verification tool, initially introduced in 2003 and upgraded with machine learning in 2019. vManager is a verification management tool for tracking the verification process. Cadence announced Perspec System Verifier in 2014 for defining and verifying system-level verification scenarios, with Perspec made compatible with the Accellera Portable Test and Stimulus Standard (PSS) several years later. Introduced in 2017, Cadence's parallel simulator Xcelium is based on a multi-core parallel computing architecture. === Hardware emulation === In 2015, Cadence announced the Palladium Z1 hardware emulation platform, with a compile speed of over 100 million gates per hour and greater than 1 MHz execution for billion-gate designs; the platform was based on emulation technology from Cadence's 1998 acquisition of Quickturn. Cadence announced Palladium Z2 in 2021, claiming a 1.5X performance and 2X capacity improvement over the Z1. The Protium FPGA prototyping platform was introduced in 2014, followed by the Protium S1 in 2017, which was built on Xilinx Virtex UltraScale FPGAs. Protium X1 rack-based prototyping was introduced in 2019, which Cadence claimed supported 1.2-billion-gate SoCs at around 5 MHz, with Palladium S1/X1 and Protium sharing a single compilation flow. 
In 2021, Protium X2 was announced; Cadence claimed a 1.5X performance and 2X capacity improvement over Protium X1. === SIP blocks === Cadence supplies semiconductor intellectual property (SIP) blocks, covering interface design, USB, MIPI, Ethernet, memory, analog, SoC peripherals, and data plane processing units. Cadence also develops chip verification technologies including simulators and formal verification tools. Cadence develops Tensilica DSP processors for audio, vision, wireless modems, and convolutional neural nets. Tensilica DSP processor IP in 2019 included: Tensilica Vision DSPs for imaging, vision, and AI processing; Tensilica HiFi DSPs for audio processing; Tensilica Fusion DSPs for IoT; Tensilica ConnX DSPs for radar, lidar, and communications processing; and Tensilica DNA Processor Family for AI acceleration. In 2021, Cadence launched the Tensilica AI Platform to accelerate AI SoC development and improve performance. === PCB and packaging technologies === The company has a number of printed circuit board (PCB) and packaging technologies for designing circuit boards. Its Allegro Platform has tools for co-design of integrated circuits, packages, and PCBs. OrCAD/PSpice has tools for smaller design teams and individual PCB designers. OrbitIO Interconnect Designer is a die/package planning & route optimization tool. InspectAR uses augmented reality to map out complicated circuit board electronics for real-time labelling of board schematics. === Systems design and analysis === The company has a number of tools for system analysis. Sigrity has tools for signal integrity, power integrity and thermal analysis, as well as IC package design. Introduced in April 2019 as part of Cadence's expansion into system analysis, Clarity is a 3D field solver for electromagnetic analysis that uses distributed adaptive meshing to partition jobs across multiple cores. In September 2019, Cadence announced Celsius, a parallel architecture thermal solver that uses finite element analysis for solid structures and computational fluid dynamics (CFD) for fluids. Cascade Technologies, Inc. provides high-fidelity CFD solvers for multiphysics analysis of turbulent fluid flow. Acquired from Pointwise in 2021, Fidelity Pointwise is a computational fluid dynamics (CFD) mesh generation tool. === Machine design and digital twins === Cadence in 2021 acquired a number of system analysis products from NUMECA, known for software tools used in the automotive, marine, aerospace, and power generation industries. Among the tools were Fidelity (formerly known as OMNIS), a computational fluid dynamics (CFD), mesh generation, multi-physics simulation, and optimization product. Its Cadence Reality digital twin platform creates manipulatable digital models of designs or factories. Cadence Design Systems in February 2024 launched its Cadence Millennium Enterprise Multiphysics Platform, or Millennium M1. The hardware/software combination was designed for creating digital twins. It draws from Cadence's older Fidelity CFD suite. === Drug design === Cadence's OpenEye Scientific division has computational molecular modeling and simulation software used by pharmaceutical and biotechnology companies for purposes such as drug discovery and antibody discovery. Orion is OpenEye's software-as-a-service platform. OpenEye Scientific has its headquarters in Santa Fe, New Mexico. 
=== Artificial intelligence === The company was increasingly incorporating artificial intelligence (AI) in 2023, according to Reuters, by "providing tools to design chips for AI" as well as by "adding AI into its own software to help in the complex process of designing chips." Cerebrus, released in 2021, is a machine learning-based chip design tool that uses reinforcement learning to automatically optimize the Cadence digital design flow. In 2022, Cadence introduced the AI platform Optimality Intelligent System Explorer, a system design tool with multiphysics system analysis software; it was designed to be compatible with Clarity 3D and SigrityX, and Microsoft was an early adopter. In September 2023, Cadence released software called ChipGPT, allowing companies to create custom silicon with assistance from AI. == Recognition == In 2016, former Cadence CEO Lip-Bu Tan was awarded the Dr. Morris Chang Exemplary Leadership Award by the Global Semiconductor Alliance. In 2019, Investor's Business Daily ranked Cadence Design Systems #5 on its 50 Best Environmental, Social, and Governance (ESG) Companies list. In 2020, Cadence ranked #45 on People magazine's Companies that Care list. Fortune magazine named Cadence to its 100 Best Companies to Work For list for the sixth consecutive year in 2020. In 2021, Anirudh Devgan was awarded the IEEE/SEMI Phil Kaufman Award, and in 2022 he was inducted into the National Academy of Engineering. == Sponsorship == In May 2022, the Formula 1 motor racing team McLaren announced a multi-year partnership deal with Cadence. Cadence partnered with the San Francisco 49ers in April 2023 on a multi-year technology project to improve energy efficiency at Levi's Stadium. The deal also gave Cadence the naming rights to the team's mobile app. == Acquisitions timeline == == Lawsuits == Avanti Corporation From 1995 until 2002, Cadence was involved in a legal dispute with Avanti Corporation (brand name "Avant!"), in which Cadence claimed Avanti stole Cadence code, and Avanti denied it. According to Business Week, "The Avanti case is probably the most dramatic tale of white-collar crime in the history of Silicon Valley". The Avanti executives eventually pleaded no contest and Cadence received several hundred million dollars in restitution. Avanti was then purchased by Synopsys, which paid $265 million more to settle the remaining claims. The case resulted in a number of legal precedents. Aptix Corporation Quickturn Design Systems, a company acquired by Cadence, was involved in a series of legal events with Aptix Corporation. Aptix licensed a patent to Mentor Graphics and the two companies jointly sued Quickturn over an alleged patent infringement. Amr Mohsen, CEO of Aptix, forged and tampered with legal evidence and was subsequently charged with conspiracy, perjury, and obstruction of justice. Mohsen was arrested after violating his bail agreement by attempting to flee the country. While in jail, Mohsen plotted to intimidate witnesses and kill the federal judge presiding over his case. Mohsen was further charged with attempting to delay a federal trial by feigning incompetency. Due to the overwhelming misconduct, the judge ruled the lawsuit unenforceable and Mohsen was sentenced to 17 years in prison. Mentor Graphics subsequently sued Aptix to recoup legal costs. Cadence also sued Mentor Graphics and Aptix to recover legal costs. 
Berkeley Design Automation In 2013, Cadence sued Berkeley Design Automation (BDA) for circumvention of a license scheme to link its Analog FastSpice (AFS) simulator to Cadence's Analog Design Environment (Virtuoso ADE). The lawsuit was settled less than one year later with an undisclosed payment by BDA and a multi-year agreement to support interoperability of AFS with ADE through Cadence's official interface. BDA was bought by Mentor Graphics a few months later. == See also == Comparison of EDA software List of EDA companies List of semiconductor IP core vendors List of the largest software companies List of S&P 400 companies Semiconductor intellectual property core Ken Kundert, Cadence fellow and creator of the Spectre circuit simulation family of products (including SpectreRF) and the Verilog-A analog hardware description language == References == == External links == Official website Business data for Cadence Design Systems, Inc.
Wikipedia/Cadence_Design_Systems
Jewellery design is the art or profession of designing and creating jewellery. It is one of civilization's earliest forms of decoration, dating back at least 7,000 years to the oldest-known human societies in the Indus Valley Civilization, Mesopotamia, and Egypt. The art has taken many forms throughout the centuries, from the simple beadwork of ancient times to the sophisticated metalworking and gem-cutting known in the modern day. Before an article of jewellery is created, design concepts are rendered, followed by detailed technical drawings generated by a jewellery designer, a professional who is trained in the architectural and functional knowledge of materials, fabrication techniques, composition, wearability, and market trends. Traditional hand-drawing and drafting methods are still used in designing jewellery, particularly at the conceptual stage, but a shift toward computer-aided design (CAD) programs is taking place. Whereas the traditionally hand-illustrated jewel is typically translated into wax or metal directly by a skilled craftsman, a CAD model is generally used as the basis for a CNC-cut or 3D-printed 'wax' pattern to be used in the rubber moulding or lost-wax casting processes. Once the conceptual/ideation phase is complete, the design is rendered and fabricated using the materials best suited to the function of the object. For example, 24K gold was used in ancient jewellery design because it was more accessible than silver as a source material. Before the 1st century, many civilizations also incorporated beads into jewellery. Once gemstones and gem cutting became more readily available, the art of jewellery ornamentation and design shifted. The earliest documented gemstone cut was done by Theophilus Presbyter (c. 1070–1125), who practised and developed many applied arts and was a known goldsmith. Later, during the 14th century, medieval lapidary technology evolved to include cabochons and cameos. Early jewellery designs were often commissioned by nobility or the church to honour an event or to serve as wearable ornamentation. Among these early methods, enameling and repoussé became standard techniques for creating ornamental wares to demonstrate wealth, position, or power. These early techniques created a distinctively complex design element that would later shape the Baroque movement in jewellery design. Traditionally, jewels were seen as sacred and precious; however, since the 1900s, jewellery has increasingly been objectified, and no single trend can be seen in the history of jewellery design for this period. Throughout the 20th century, jewellery design underwent drastic and continual style changes: Art Nouveau (1900–1918), Art Deco (1919–1929), International Style & organicism (1929–1946), New Look & Pop (1947–1967), and Globalization, Materialism, and Minimalism. Jewellery design trends are strongly affected by the economic and social conditions of the time. The boundaries of styles and trends tend to blur together, and the clear stylistic divisions of the past are harder to see during the 20th century. == References ==
Wikipedia/Jewellery_design
In software engineering, test design is the activity of deriving and specifying test cases from test conditions to test software. == Definition == A test condition is a statement about the test object. Test conditions can be stated for any part of a component or system that could be verified: functions, transactions, features, quality attributes or structural elements. The fundamental challenge of test design is that there are infinitely many different tests that could be run, but there is not enough time to run them all. A subset of tests must be selected: small enough to run, but well chosen enough that the tests find bugs and expose other quality-related information. Test design is one of the most important prerequisites of software quality. Good test design supports: defining and improving quality-related processes and procedures (quality assurance); evaluating the quality of the product with regard to customer expectations and needs (quality control); finding defects in the product (software testing). The essential prerequisites of test design are: Appropriate specification (test bases). Risk and complexity analysis. Historical data from previous developments (if they exist). The test bases, such as requirements or user stories, determine what should be tested (test objects and test conditions). The test bases also influence which test design techniques should or should not be used. Risk analysis is essential for deciding the thoroughness of testing. The riskier the use of a function or object, the more thorough the testing needs to be. The same can be said for complexity. Risk and complexity analysis determines the test design techniques to be applied for a given specification. Historical data from previous developments help in selecting the best set of test design techniques to reach a cost optimum and high quality together. In the absence of historical data, assumptions can be made, which should be refined for subsequent projects. Based on these prerequisites, an optimal test design strategy can be implemented. The result of test design is a set of test cases based on the specification. These test cases can be designed before implementation starts, and should be implementation-independent. A test-first approach to test design is important, as it efficiently supports defect prevention. Based on the application and the existing test coverage, further test cases can be created (although this is no longer test design). In practice, several test design techniques should be applied together for complex specifications. Altogether, test design does not depend on the extraordinary (near magical) skill of the person creating the test but is based on well-understood principles. Part 4 of the software testing standard, ISO/IEC/IEEE 29119-4:2015, details the standard definitions of test design techniques. The site of test designers offers the LEA (Learn-Exercise-Apply) methodology to support effective learning, exercising and applying of the techniques. == Automatic test design == Entire test suites or test cases exposing real bugs can be automatically generated by software using model checking or symbolic execution. Model checking can ensure all the paths of a simple program are exercised, while symbolic execution can detect bugs and generate a test case that will expose the bug when the software is run using this test case. However, as good as automatic test design can be, it is not appropriate for all circumstances. 
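As a minimal illustration of the symbolic-execution idea just described, the sketch below uses an off-the-shelf constraint solver to derive a concrete input that forces a chosen branch of a toy function. The function under test, its branch condition, and the helper names are hypothetical, and the example assumes the open-source z3-solver Python package is installed; it is not drawn from any particular test design tool.

```python
# Minimal sketch of solver-assisted test design (assumption: the open-source
# z3-solver package is installed; classify() and its branch condition are
# invented for illustration and do not come from any cited tool or standard).
from z3 import Int, Solver, sat

def classify(n: int) -> str:
    """Toy program under test with two execution paths."""
    if n * n - 10 * n + 21 < 0:   # true only for the integers 4, 5 and 6
        return "inside"
    return "outside"

def test_input_for_inside_branch() -> int:
    """Solve the path condition of the 'inside' branch symbolically
    and return a concrete input that exercises it."""
    n = Int("n")
    solver = Solver()
    solver.add(n * n - 10 * n + 21 < 0)   # path condition taken from the branch
    assert solver.check() == sat          # the branch is reachable
    return solver.model()[n].as_long()    # e.g. 4, 5 or 6

if __name__ == "__main__":
    value = test_input_for_inside_branch()
    # The generated value is a ready-made test case for the 'inside' path.
    assert classify(value) == "inside"
    print("generated test input:", value)
```

Negating the path condition would, in the same way, yield an input for the other branch, which is essentially how automated tools enumerate path-covering test suites.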
If the complexity becomes too high, however, human test design must come into play, as it is far more flexible and can concentrate on generating higher-level test suites. == References ==
Wikipedia/Test_design
A studio is a space set aside for creative work of any kind, including art, dance, music and theater. The word studio is derived from the Italian studio, from the Latin studium, from studere, meaning to study or to apply oneself with zeal. == Types == === Art === The studio of an artist, especially from the 15th to the 19th centuries, encompassed all of the artist's assistants, hence the designation of paintings as "from the workshop of..." or "studio of...". An art studio is sometimes called an "atelier", especially in earlier eras. In contemporary English-language use, "atelier" can also refer to the Atelier Method, a training method for artists that usually takes place in a professional artist's studio. The above-mentioned "method" calls upon that zeal for study to play a significant role in the production which occurs in a studio space. A studio is more or less artful to the degree that the artist who occupies it is committed to continuing education in his or her formal discipline. Academic curricula categorize studio classes in order to prepare students for the rigors of building sets of skills that require continuity of practice to achieve growth and mastery of artistic expression. A versatile and creative mind will embrace the opportunity of such practice to innovate and experiment, which develops the uniquely individual qualities of each artist's expression. Thus the method raises and maintains an art studio space above the level of a mere production facility or workshop. Safety can be a concern in studios, as some painting materials must be handled, stored, or used properly to prevent poisoning, chemical burns, or fire. === Dance studio === === Educational === In educational studios, students learn to develop skills related to design, ranging from architecture to product design. Specifically, educational studios are settings where large numbers of students learn to draft and design with instructional help at a college. Educational studios are colloquially referred to as "studio" by students, who are known for staying up late into the night doing projects and socializing. In education, the studio environment takes two forms: The workspace, where students do usually visually-centered work in an open environment. This time and space lie beyond instructional time, and faculty guidance is not available. It allows students to engage, help, and inspire each other while working. A type of class that takes the above-mentioned workshop space and recreates its core component of an open working environment. It is differentiated by a topic of instruction, an isolated space, the presence of an instructor, and an added focus on directed criticism. === Pottery === Studio pottery is made by an individual potter working alone in his or her studio, rather than in a ceramics factory (although there may be a design studio within a larger manufacturing site). === Production === Production studios are those studios which act as centres of production in any of the arts; alternatively, they can also be the financial and commercial entity behind such endeavours. In radio and television, a production studio is the place where programs, radio commercials and television advertisements are recorded for later broadcast. === Animation === Animation studios, like movie studios, may be production facilities or financial entities. 
In some cases, especially in anime, they continue the tradition of a studio where a master or a group of talented individuals oversee the work of lesser artists and craftspersons in realising their vision. Animation studios are a fast-rising sector and include established firms such as Walt Disney and Pixar. === Comics === A comics studio is a workroom or entertainment company that makes comics. Comics creators employ small studios of staff to assist in the creation of a comic strip, comic book or graphic novel. In the early days of "Dan Dare", Frank Hampson employed a number of staff at his studio to help with the production of the strip. Eddie Campbell is another creator who has assembled a small studio of colleagues to help him in his art, and the comic book industry in the United States has based its production methods upon the studio system employed at its beginnings. Another type of studio, common for instance in Spain, would produce work for hire under license, with prospective buyers bringing in their own franchises for artwork and occasionally new stories. === Instructional === Many universities are creating studio settings for courses outside the artist's realm. There are several different projects along these lines, most notably SCALE-UP (Student-Centered Active Learning Environment for Undergraduate Programs), initiated at NC State. === Mastering === In audio, a mastering studio is a facility specialised in audio mastering. Tasks may include, but are not limited to, audio restoration, corrective and tone-shaping EQ, dynamic control, stereo or 5.1 surround editing, vinyl and tape transfers, vinyl cutting, and CD compilation. Depending on the quality of the original mix, the mastering engineer's role can range from making small corrections to drastically improving the overall sound of a mix. Typically, studios contain a combination of high-end analogue equipment with low-noise circuitry and digital hardware and plug-ins. Some may contain tape machines and disc-cutting lathes. They may also contain full-range monitoring systems and be acoustically tuned to provide an accurate reproduction of the sound information contained in the original medium. The mastering engineer must prepare the file for its intended destination, which may be radio, CD, vinyl or digital distribution. In video production, a mastering studio is a facility specialized in the post-production of video recordings. Tasks may include, but are not limited to, video editing, colour grading and correction, mixing, DVD authoring and audio mastering. The mastering engineer must prepare the file for its intended destination, which may be broadcast, DVD or digital distribution. === Acting === An "acting studio" is an institution or workspace (similar to a dance studio) in which actors rehearse and refine their craft. The Neighborhood Playhouse and the Actors Studio are legendary acting studios in New York. === Movie === A movie studio is a company which develops, equips and maintains a controlled environment for filmmaking. This environment may be interior (sound stage), exterior (backlot) or both. === Photographic === A photographic studio is both a workspace and a corporate body. As a workspace, it provides space to take, develop, print and duplicate photographs. === Radio === A radio studio is a room in which a radio program or show is produced, either for live broadcast or for recording for later broadcast. The room is soundproofed to avoid unwanted noise being mixed into the broadcast. 
=== Recording === A recording studio is a facility for sound recording which generally consists of at least two rooms: the studio or live room, and the control room, where the sound from the studio is recorded and manipulated. The rooms are designed to have good acoustics and good isolation between them. === Television === A television studio is an installation in which television or video productions take place, whether for live television, for recording to video tape, or for the acquisition of raw footage for post-production. The design of a television studio is similar to, and derived from, that of movie studios, with a few amendments for the special requirements of television production. A professional television studio generally has several rooms, which are kept separate for noise and practicality reasons. === Zen, Yoga and martial arts === Many healing arts and activities such as zen, yoga, judo and karate are "studied" in a studio. It is common to see yoga studios and martial arts studios established, and described as studios, in settings that might previously have served other uses. These are not recreational centers or gyms in the traditional sense, but places where students of these activities practice or study their art. == See also == Workshop Hackspace Fab lab == Sources == == External links == The dictionary definition of studio at Wiktionary
Wikipedia/Design_studio