/*
 * Copyright (c) 2003, PostgreSQL Global Development Group
 * See the LICENSE file in the project root for more information.
 */

package org.postgresql.largeobject;

import org.checkerframework.checker.nullness.qual.Nullable;

import java.io.IOException;
import java.io.OutputStream;
import java.sql.SQLException;

/**
 * This implements a basic output stream that writes to a LargeObject.
 */
public class BlobOutputStream extends OutputStream {
  /**
   * The parent LargeObject.
   */
  private @Nullable LargeObject lo;

  /**
   * Buffer.
   */
  private byte[] buf;

  /**
   * Size of the buffer (default 1K).
   */
  private int bsize;

  /**
   * Position within the buffer.
   */
  private int bpos;

  /**
   * Create an OutputStream to a large object.
   *
   * @param lo LargeObject
   */
  public BlobOutputStream(LargeObject lo) {
    this(lo, 1024);
  }

  /**
   * Create an OutputStream to a large object.
   *
   * @param lo LargeObject
   * @param bsize The size of the buffer used to improve performance
   */
  public BlobOutputStream(LargeObject lo, int bsize) {
    this.lo = lo;
    this.bsize = bsize;
    buf = new byte[bsize];
    bpos = 0;
  }

  public void write(int b) throws java.io.IOException {
    LargeObject lo = checkClosed();
    try {
      if (bpos >= bsize) {
        lo.write(buf);
        bpos = 0;
      }
      buf[bpos++] = (byte) b;
    } catch (SQLException se) {
      throw new IOException(se.toString());
    }
  }

  public void write(byte[] buf, int off, int len) throws java.io.IOException {
    LargeObject lo = checkClosed();
    try {
      // If we have any internally buffered data, send it first
      if (bpos > 0) {
        flush();
      }
      if (off == 0 && len == buf.length) {
        lo.write(buf); // save a buffer creation and copy since full buffer written
      } else {
        lo.write(buf, off, len);
      }
    } catch (SQLException se) {
      throw new IOException(se.toString());
    }
  }

  /**
   * Flushes this output stream and forces any buffered output bytes to be written out. The general
   * contract of <code>flush</code> is that calling it is an indication that, if any bytes
   * previously written have been buffered by the implementation of the output stream, such bytes
   * should immediately be written to their intended destination.
   *
   * @throws IOException if an I/O error occurs.
   */
  public void flush() throws IOException {
    LargeObject lo = checkClosed();
    try {
      if (bpos > 0) {
        lo.write(buf, 0, bpos);
      }
      bpos = 0;
    } catch (SQLException se) {
      throw new IOException(se.toString());
    }
  }

  public void close() throws IOException {
    LargeObject lo = this.lo;
    if (lo != null) {
      try {
        flush();
        lo.close();
        this.lo = null;
      } catch (SQLException se) {
        throw new IOException(se.toString());
      }
    }
  }

  private LargeObject checkClosed() throws IOException {
    if (lo == null) {
      throw new IOException("BlobOutputStream is closed");
    }
    return lo;
  }
}
Research group of Dr. Kosmas Kepesidis

We explore the extent to which medical information acquired from photonic data can be utilized in medical diagnostics, personalized health monitoring, and the life sciences. Specifically, we investigate relevant procedures for experimental and study design as well as data preprocessing. We combine these procedures with machine learning methods and ideas from medical statistics in appropriate data-science pipelines. The resulting pipelines are implemented using open-source software and tested directly on suitable clinical studies.

In addition, we investigate fundamental problems in medical decision-making from both a theoretical and a data-driven point of view. Using ideas and tools from information theory, decision theory, and statistical physics, we aim to quantify the medically relevant information carried by different types of health data sets. Furthermore, we seek to assess their utility for healthcare and to cross-compare their efficiency in precisely defining the health status of an individual.

Based on case-control studies and infrared measurements of human blood, we train predictive models using classical machine learning methods that could aid disease diagnostics and screening. Additionally, we utilize longitudinal studies for the development of an infrared-based personalized health-monitoring system. Such systems, based on machine learning, often face major challenges when applied in practice, since the conditions under which they are developed differ from the conditions during clinical application. We try to overcome these challenges using ideas and methods from the fields of domain adaptation, transfer learning, and active machine learning.

Several investigations have shown the potential of applying molecular fingerprinting by vibrational spectroscopy, combined with machine learning, to medical problems, promising the development of new diagnostic or screening tests.
All these works concentrate on the overall performance of such candidate medical tests. Only very limited research has been performed to assess the impact of factors beyond disease status that can affect the test result. By investigating such confounding factors with rigorous statistical evaluations, based on theory developed in the field of medical statistics, we assess the potential and limitations of the proposed medical tests.

We aim to answer fundamental questions relevant to medical decision-making. Using ideas from information theory, we strive to determine the characteristics a medical dataset should possess to allow for the precise assessment of wellness and to indicate wellness-to-disease transitions at an early stage, while being regularly acquirable at affordable cost.

AI algorithms of the class of generative models (GMs) are designed to generate artificial but realistic data based on large sets of real observations. We experiment with GMs for use in so-called in-silico clinical studies: virtual clinical trials conducted using computer simulation. Such studies offer high potential to accelerate the development of new drugs, medical devices, and tests while significantly cutting R&D costs.

We work towards best practices for designing and building systems to collect, store, and analyze scientific data at scale. This involves appropriate database design and the development of unified, domain-specific scientific software packages for data analysis and processing.
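As a toy illustration of the kind of case-control classification described above, here is a minimal sketch using only the Python standard library. The "spectra", channel count, and the absorption difference between cases and controls are all simulated assumptions, not real infrared data; a real pipeline would use measured spectra and standard ML libraries.

```python
# Hedged sketch: nearest-centroid classification of simulated
# "infrared spectra" in a case-control setting (all data synthetic).
import math
import random

random.seed(0)
N_FEATURES = 50  # number of spectral channels (hypothetical)

def simulate_spectrum(is_case):
    # Controls: baseline with noise. Cases: an extra absorption
    # bump on channels 20-29 (a made-up disease signature).
    base = [random.gauss(1.0, 0.05) for _ in range(N_FEATURES)]
    if is_case:
        for i in range(20, 30):
            base[i] += 0.3
    return base

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Simulated case-control study: train and held-out test sets
train = [(simulate_spectrum(y), y) for y in [0, 1] * 40]
test = [(simulate_spectrum(y), y) for y in [0, 1] * 10]

c0 = centroid([x for x, y in train if y == 0])  # control centroid
c1 = centroid([x for x, y in train if y == 1])  # case centroid

def predict(x):
    return 0 if dist(x, c0) < dist(x, c1) else 1

accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The point of the sketch is the study design, not the classifier: a held-out split from the case-control cohort is what supports any claim about diagnostic performance.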
What is the utility of your token

Check out the Token Utility Canvas, the tool I created for Outlier Ventures.

"What is the utility of your token?" is one of the first questions web3 projects hear from all sides. A well-designed token needs to provide value through the right incentives to all stakeholders, be meaningfully integrated into the product or protocol, and be effectively distributed. In my role as a token design lead for Outlier Ventures, the largest Web3 accelerator, I designed a tool for this: the Token Utility Canvas helps break down the nebulous concept of "token utility" into a systematic structure that allows teams to think through the relevant aspects one by one.

A preliminary step: token discovery

The Token Utility Canvas is one of a range of tools that help teams design effective tokens. It was developed by adapting the business model canvas, a gold standard of the lean startup methodology, to the decentralized paradigm of web3.

Before diving into the Token Utility Canvas, it is often helpful to prepare with a set of preliminary exercises. Teams should evaluate the potential for a token and get ready to fill out the Token Utility Canvas by answering the following key questions with three different exercises:

- What is the overall objective of the network or protocol?
- What are the most important stakeholders?
- How do the different stakeholders exchange value?

When it comes to the network objective, it is important to consider context, scope, and constraints, as well as to define criteria for success. I recommend representing stakeholders graphically, ordered according to their relative importance. The value exchange between stakeholders should be described in terms of the benefits for each stakeholder, as well as the mechanisms through which they exchange value.
Breaking down the Token Utility Canvas

Once the token discovery step has been completed and we have clarity on objectives, stakeholders, and value exchange, we are ready to fill out the Token Utility Canvas. We will go through its different sections one by one to provide more context on how the tool is used.

The first section of the Token Utility Canvas is to be populated with the outcomes of the stakeholder analysis conducted in the token discovery phase. Rather than a business serving a single customer, web3 networks and protocols are often n-sided networks, where different actors interact to create value. A typical list may include different types of users, providers of a decentralized service, investors, and the community at large. It is recommended to list them in order of importance.

Mechanisms describe how the different stakeholders use the token to interact with one another and with the core product or protocol. Examples include using the token as a means of payment, as a representation of a certain resource, or as a commitment or collateral (e.g. different staking or locking mechanisms). The token may also be used for different signaling modalities (e.g. curation or governance), or for the right to provide work. The mechanisms differ considerably from one project to another and depend on the overall network objectives, as well as on the value exchange between the different stakeholders. What they all have in common is that the token mechanisms provide the means for the stakeholders to coordinate in a decentralized manner. When filling out the canvas for the first time, we encourage teams to list different ideas and then help refine them in discussion with our token design experts.

Incentives and disincentives

How can we coordinate the different (often anonymous) stakeholders if we don't have contracts or any sort of formal control like we would in a corporation? Carrots and sticks, is the answer.
Incentives and disincentives are created through the utility mechanisms and modulate the behavior of the different stakeholders by rewarding and punishing certain actions. The first example of such a crypto-economic system was Bitcoin, where miners are rewarded with BTC for correctly running the consensus (incentive) and forgo that revenue when they are on the wrong chain (disincentive). Since Bitcoin, we have seen a huge proliferation of similar mechanisms to coordinate a range of different stakeholders. A common pattern is to require collateral in the native token for the right to provide work of different kinds. If the work is done correctly, rewards are earned. In the case of faulty work, or even an attack on the network, the collateral can be confiscated ("slashed"). It is important to consider incentives and disincentives for all major stakeholders, and thinking through this systematically helps to fine-tune the mechanisms used.

The token objective can be derived by considering both the overall network objective defined in the token discovery phase and the incentives/disincentives section preceding it in the Token Utility Canvas. The token objective often includes the coordination of the different stakeholders to produce value as a network, whether that happens at the infrastructure or the application layer. On top of that, the native token is also frequently used to govern the protocol in a decentralized manner. Governance is usually conducted under the umbrella of a DAO that allows token holders to come to agreement on the mechanisms and parameters used, manage the treasury, and decide on the future of the protocol.

The token journey section of the Token Utility Canvas helps teams think through how the most important stakeholders would acquire and then use tokens. Ideally, teams would consider the full journey those stakeholders go through, which helps identify potential points of friction.
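The collateral-for-work pattern described above can be sketched in a few lines. This is an illustration only, not any specific protocol; the reward amount and slashing fraction are made-up placeholders:

```python
# Minimal sketch of a stake/reward/slash mechanism (hypothetical numbers).
class Staker:
    def __init__(self, name, collateral):
        self.name = name
        self.collateral = collateral  # tokens locked for the right to provide work
        self.rewards = 0.0

REWARD = 5.0          # reward for correctly performed work (assumption)
SLASH_FRACTION = 0.5  # share of collateral confiscated on faulty work (assumption)

def settle_work(staker, work_was_correct):
    """Reward correct work; slash collateral for faulty work."""
    if work_was_correct:
        staker.rewards += REWARD
    else:
        staker.collateral *= (1 - SLASH_FRACTION)

honest = Staker("honest", collateral=100.0)
faulty = Staker("faulty", collateral=100.0)

settle_work(honest, work_was_correct=True)
settle_work(faulty, work_was_correct=False)

print(honest.collateral, honest.rewards)  # 100.0 5.0
print(faulty.collateral, faulty.rewards)  # 50.0 0.0
```

The carrot (reward) and stick (slash) live in the same settlement rule, which is what makes the mechanism self-enforcing without contracts.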
For example, a user could earn their tokens through user rewards or an airdrop, or need to first acquire them before being able to use the protocol. Sometimes the token is not required for basic users but unlocks more advanced forms of participation. While some stakeholders will only go through part of that journey and then drop off, others will get more involved over time (e.g. take on other roles, participate in governance, etc.). As a result, the token journey can also be thought of as a funnel with different conversion and churn rates between the different levels of engagement. Often it makes sense to distinguish the token journey between the demand and supply side, but this depends on the specifics of a project (sometimes this distinction is discarded altogether or replaced with a more suitable one).

If the token is used to capitalize the network, by serving as an instrument for fundraising (whether from private investors or through a public sale), to determine ownership, and to set long-term incentives for key stakeholders including the team, it needs to capture value. Value capture answers the question: "If this project is successful, why would the token be valuable?" The market price of a token is always determined by supply and demand at any given time. As a result, a token that captures value needs to create sustainable demand, e.g. because it is needed to use the network or because clear benefits can be unlocked by being a token holder.

Properties of the supply dynamics are at the heart of this section, for instance whether the token's supply is fixed, inflationary, or even deflationary. Mechanisms adapted from traditional finance that rely on programmatically buying back tokens, either for redistribution ("buyback-and-make") or for permanently removing them from supply ("buyback-and-burn"), provide different ways of capturing value.
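The supply-side effect of a buyback-and-burn can be shown with simple arithmetic; every figure below is a hypothetical placeholder, not a model of any real token:

```python
# Hedged sketch: one period of a buyback-and-burn (hypothetical numbers).
supply = 1_000_000.0          # circulating supply before the buyback
treasury_revenue = 50_000.0   # fees collected this period (assumption)
buyback_price = 2.0           # prevailing token price (assumption)

tokens_bought = treasury_revenue / buyback_price
supply -= tokens_bought       # "burn": permanently remove from supply

print(tokens_bought, supply)  # 25000.0 975000.0
```

Under a buyback-and-make, the same `tokens_bought` would be redistributed (e.g. as rewards) instead of subtracted from supply; the value flow is similar, but circulating supply is unchanged.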
Additionally, mechanisms like staking, which remove circulating supply while giving out continuous rewards to stakers, indirectly support the value capture of a token.

Token distribution concerns how the supply of the token is split up and distributed. This includes the release schedules (vesting) for investors and the team, where 2-5 years of locked-up tokens have become the standard to ensure alignment. Token distribution also concerns the mechanisms with which the token is launched initially, as well as distributed over time. For launching a token, a wide range of mechanisms is available: airdrops to early users, pre-sales to private investors, public sales with different pricing mechanisms (simple priced sales vs. auctions or liquidity bootstrapping pools), or direct listings on exchanges. When it comes to distribution over time, projects can choose to distribute tokens to their users either initially (e.g. conditional airdrops or temporary reward campaigns) or continuously (based on usage numbers).

Protocol costs (token budgeting)

Each of the distribution mechanisms outlined above will require its own pool of tokens. For that reason, it is important to budget the initial token supply as well as any ongoing flows from inflation or buybacks. In this section, there is just enough space to make an initial list of the different "budget items" for which tokens are needed.

Finally, the protocol revenues section describes the financial flows that accrue at the protocol or network level, as opposed to flowing to stakeholders directly. Often there are usage fees that flow into the treasury of the project. Whether the fees are implemented from the beginning or only introduced at a more mature stage is a strategic decision that depends on many factors. In the case of inflationary models, there are also possible revenue flows from these new token emissions.
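To make the budgeting step concrete, a genesis allocation can be laid out as a list of budget items whose shares must sum to the full supply. Every allocation and percentage below is a made-up placeholder, not a recommendation:

```python
# Hedged sketch of a genesis token budget (all figures hypothetical).
TOTAL_SUPPLY = 1_000_000_000

budget = {
    "team (vested)":          0.18,
    "private investors":      0.15,
    "public sale":            0.10,
    "airdrop to early users": 0.05,
    "user reward campaigns":  0.22,
    "treasury / DAO":         0.30,
}

# The shares must account for exactly 100% of the supply.
total_share = sum(budget.values())
assert abs(total_share - 1.0) < 1e-9, "allocations must sum to 100%"

for item, share in budget.items():
    print(f"{item:24s} {share:5.0%} {round(share * TOTAL_SUPPLY):>12,d} tokens")
```

The same table, extended with vesting schedules per item, is essentially the spreadsheet teams move on to after the canvas.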
Sometimes, destroyed tokens in a burn-and-mint model are also counted as indirect protocol revenues, since these value flows indirectly accrue to token holders.

A starting point for developing token utility

The Token Utility Canvas provides a great starting point and an overview of the most relevant aspects of token design. However, the process only starts here, and it generally takes 3-6 months to fully design and plan the launch of a token. Each of the different sections unfolds into more detailed discussions and decisions. For instance, deciding between different mechanisms to ensure both utility and value capture usually requires an analysis with a token design specialist in which different options are evaluated. The same goes for distribution strategies and governance structures. After filling out the Token Utility Canvas, teams start working on value flow diagrams that summarize the token utility graphically for their documentation, and on a spreadsheet that plans the genesis allocation and distribution.

Thanks for reading At the Edge! Subscribe for free to receive new posts and support my work.
This is not new to anyone currently involved in teaching computer science, or maybe to anyone paying attention to education more generally. We're struggling to increase computer science exposure in K-12. We're struggling to make CS count, or to make CS more compelling, especially to young women. Mark Guzdial recently cited this US News article that breaks down all AP test takers by gender. By far the worst ratio is in CS, where boys outnumber girls almost 5 to 1.

Why is that? Of course, there are lots of reasons, many of them complex and difficult to solve, many involving gender stereotypes and bias. Something I'm struggling with, though, is the CS learning curve. A lot of the resources available online get through the basics fairly quickly, something I do in the first semester or so: functions, variables, loops, strings, maybe arrays/lists. And it seems relatively easy at first. Assign a value to a variable, add something to it, see how it changes.

But then it gets harder. Suddenly you're not just doing a simple for loop to print "Hello World" five times, but you're looping through a list of data, checking its validity and updating it. Still a basic concept in some ways, but now much more challenging. I've seen this happen in my US classes and my MS classes. Students will breeze through the first parts and then stumble, and then they sometimes give up.

While you can do some pretty cool stuff at the beginning, doing the really cool stuff requires some much harder things: nested loops, nested ifs, functions calling other functions, objects. These are all things that enable, say, a cool video game, which students love to create. But it's sometimes hard to get them past the hard stuff to get to the good stuff.

I'm not saying this is why girls in particular don't do CS. That's a whole other issue. But this is part of it for many kids. The latest hype about coding is that "everyone should do it." I agree, but the hype suggests you'll be making Angry Birds in a week.
And that's just not going to happen for most people. So a kid shows up to an intro class expecting to be making Angry Birds-like things and instead (if they're unlucky like I was), they're calculating the first 1000 prime numbers.

Now there are some ways to make those first few assignments more fun. I've done robots and graphics, for example. And you can build on those to reach more complex things. But things can still get hard and can still get discouraging. So we might need to find a way not just to have cool assignments at the beginning (the hook), but to have reasons and support to keep going past the hard stuff to get to the really cool stuff. I think it's part of why we don't have so many students taking CS or demanding CS in schools. They find out it's harder than they thought and leave. I don't know exactly how to fix that, but I keep thinking about it. Maybe someone else out there has an answer.
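The jump described above, from printing a line to validating a list, can be made concrete with a small hypothetical Python example; the scores and the 0-100 validity rule are invented for illustration:

```python
# The early, easy stuff: a simple loop.
for i in range(5):
    print("Hello World")

# The harder next step: looping through a list of data, checking its
# validity, and updating it. (Hypothetical data: test scores, where a
# valid score is between 0 and 100.)
scores = [88, 104, -3, 75, 92]
cleaned = []
for s in scores:
    if 0 <= s <= 100:         # validity check
        cleaned.append(s)
    else:
        cleaned.append(None)  # flag invalid entries instead of dropping them

print(cleaned)  # [88, None, None, 75, 92]
```

Conceptually both are "just loops", but the second asks the student to hold a condition, two branches, and a second list in their head at once; that is the step where many stumble.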
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;

namespace Png2RspConverter
{
    public class CommomObject
    {
        readonly Dictionary<string, byte[]> m_Files = new Dictionary<string, byte[]>();

        public CommomObject(string filePath)
        {
            // ReadBytes/ReadInt are stream extension methods defined elsewhere in this project.
            using (Stream stream = new FileStream(filePath, FileMode.Open, FileAccess.Read))
            {
                stream.ReadBytes(6);
                var unknownLength = stream.ReadInt();
                var unknown = stream.ReadBytes(unknownLength);
                stream.ReadInt();

                var headers = new List<(string name, int offset, int size)>();
                while (true)
                {
                    var nameLength = stream.ReadInt();
                    var name = Encoding.UTF8.GetString(stream.ReadBytes(nameLength));
                    headers.Add((name, stream.ReadInt(), stream.ReadInt()));
                    // The header table ends where the first file's data begins.
                    if (headers.FirstOrDefault().offset == stream.Position)
                    {
                        break;
                    }
                }

                foreach (var (name, offset, size) in headers)
                {
                    stream.Position = offset;
                    m_Files[name] = stream.ReadBytes(size);
                }
            }
        }

        public string[] Extract(string dirPath)
        {
            Directory.CreateDirectory(dirPath);
            foreach (var file in m_Files)
            {
                File.WriteAllBytes(Path.Combine(dirPath, file.Key), file.Value);
            }
            return m_Files.Keys.ToArray();
        }
    }
}
Invite this freelancer to a project

You don't appear to have an active project at the moment. Why not post one now? It's free! Post a project

- 76% jobs completed
- 80% on budget

MySQL help (edit is missing from tables), auto_increment missing
"lots of work for freelancer, ended up being a couple day project, still not complete but I wanted to pay for the work." clubtek, 1 year ago

Project 5807005 has been deleted
"It is unfortunately hit or miss with IT projects. A clear scope was set for this project. I was satisfied with progress prior to completion and made the mistake of advancing full payment. This project is now 14 days overdue (with many dropped deadlines) and I have been told to wait another 5-6 days, due to the ongoing elections in India, for the final product. Pravesh now claims that part of the scope is "impossible" to achieve. Having earlier claimed it would be really easy to complete, the story is now that it was a big "misunderstanding". Steer clear of this developer for anything more than basic work and make sure you are thoroughly satisfied with the work before making payment." jonleets, 4 years ago

"He is a great resource. A good listener. A hard worker and a man that is true to his word. I urge you to consider this excellent provider." mywhackyworld, 4 years ago

PHP, MySQL and jQuery development (simple)
"Pravesh did an excellent job, very fast and really happy! Thanks Pravesh" Edward H., 4 years ago

Need custom Google Map with extra features
"Pravesh is always great to work with. He has done a great job on all my projects." ian0502, 4 years ago

"I am always thoroughly pleased with the work that Pravesh does for me. He is my #1 programmer choice whenever I need PHP, MySQL, JavaScript, and HTML. I definitely recommend him!" ian0502, 4 years ago

Software Developer, Jun 2011
Works as a software developer.

B.Tech, 2007 - 2011 (4 years)

RAD (Rational Application Developer for WebSphere 6.7), 2010, IBM
A tool for software development that provides a development environment.

Rational Functional Tester, 2011, IBM
A tool for software testing.

- Phone verified
- Email verified
<?php
/**
 * Class wsREST
 *
 * Class with helper functions for calling web services.
 *
 * @author Nacho del Prado Losada
 * @since 27/01/2021
 * @version 27/01/2021
 */
class wsREST {

    /**
     * servicioAPOD
     *
     * Calls the APOD (Astronomy Picture of the Day) REST API, which returns the
     * astronomy picture of the day and information about it.
     *
     * @param string $fecha the date to pass to the service so it looks up an image
     * @return array information about the astronomy picture of the day
     * @author Susana Fabián Antón
     * @since 26/01/2021
     * @version 26/01/2021
     */
    public static function servicioAPOD($fecha) {
        // Call the service, passing the date in the "date" field, and decode the JSON it returns
        return json_decode(file_get_contents("https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY&date=$fecha"), true);
    }

    /**
     * servicioPublicAPIS
     *
     * Returns an API entry if one is found with the given title.
     *
     * @param string $titulo
     * @return array
     */
    public static function servicioPublicAPIS($titulo) {
        return json_decode(file_get_contents("https://api.publicapis.org/entries?title=$titulo"), true);
    }

    /**
     * servicioCalculadora
     *
     * Adds, subtracts, multiplies and divides two numbers.
     *
     * @param string $tipo operation type
     * @param float $num1
     * @param float $num2
     * @return string result
     */
    public static function servicioCalculadora($tipo, $num1, $num2) {
        return json_decode(file_get_contents("http://daw203.ieslossauces.es/AppFinalRaul2021/api/calculadora.php?operaciones=$tipo&n1=$num1&n2=$num2"), true);
    }

    public static function servicioBuscarDepartamentosPorDescripcion($descripcion) {
        return json_decode(file_get_contents("http://daw202.ieslossauces.es/FinalNacho2021/api/buscarDepartamentos.php?descripcion=$descripcion"), true);
    }
}
?>
Top 50 Puzzling Ancient Ruins That Scientists, Archaeologists and Historians Are Still Debating2:55:35 11 Most AMAZING Snakes In The World! - Published on: 5/9/2020 - Hi, it’s Katrina! From snakes with extra appendages that look exactly like spiders, to those with the perfect camouflage, here are 11 of the world’s most amazing snakes! Follow us on instagram! https://www.instagram.com/katrinaexplained/ Subscribe For New Videos! http://goo.gl/UIzLeB Check out these videos you might like: Unbelievable Animals SAVING Other Animals! 🐯https://www.201tube.com/video/HxehUWvMr38/video.html LARGEST Animals Ever Discovered! 🐙https://www.201tube.com/video/1Yj7F_tPYsU/video.html Wild Animals That SAVED Human Lives! 🐻https://www.201tube.com/video/mllqeVSsIl0/video.html 11. Malagasy Leaf-Nosed Snake Madagascar is home to countless endemic species, including at least 94 snakes. Perhaps the most unique among them is Langaha madagascariensis, the Malagasy leaf-nosed snake. French zoologist Pierre Joseph Bonnaterre first described it in 1790, and ever since, it’s remained in its own genus since it is so bizarre! 10. Spider-Tailed Viper Biologist Steven Anderson first noticed the spider-tailed viper in 1970 while examining a specimen at the Field Museum in Chicago. It was labeled as a Middle Eastern snake species called the Persian horned viper. But this one had what he described as an “oval knob-like structure” with leg-like scales protruding from it. It looked like an arachnid! 9. Flying Snake There are five flying snake species throughout the jungles of South and Southeast Asia, from western India to Indonesia. Scientists know relatively little about these creatures and their odd airborne activities, but they’ve established a few solid facts. 8. Elephant Trunk Snake One of the weirdest-looking serpentine species is the elephant trunk snake, which is named after its loose, baggy skin that looks much too big for its body. 
It’s an aquatic species that inhabits warm fresh- and brackish waters in various Southeast Asian countries, including Indonesia, Malaysia, Thailand, Cambodia, and Vietnam. 7. Horned Viper The horned viper is a nocturnal ambush predator that dwells in semi-arid environments and stony desert landscapes throughout North Africa and parts of the Middle East, at altitudes of up to 4,900 feet (1,500 meters). As its name implies, its head is equipped with two distinct horns, which sit above its eyes. 6. Tentacled Snake The tentacled snake is an aquatic brackish and freshwater species named after the knobby appendages on its face. It’s the only snake in the world with this unique feature. This peculiar serpent dwells in murky, shallow waters among lakes, streams, and rice paddies in parts of Southeast Asia, including Cambodia, Vietnam, and Thailand. It uses its tentacles, which are loaded with nerve cells, for detecting prey in the muddy water. 5. Hairy Bush Viper Also called the spiny bush viper and the rough-scaled bush viper, the hairy bush viper is native to tropical regions of central Africa. This small, venomous species is quite obviously named after its distinctive, keeled scales and spends much of its time in trees. 4. Worm Snake The worm snake is a tiny, pinkish-brownish specimen that resembles what it sounds like. There are two subspecies in North America: the eastern worm snake, which lives in the eastern U.S. south of New England, and the similar western worm snake, which resides west of the Mississippi. 3. Long-Nosed Vine Snake This remarkable species is found in Southeast Asia, particularly in India, Sri Lanka, Bangladesh, Myanmar, Thailand, Cambodia, and Vietnam. Also known as the Asian green vine snake and the long-nosed whip snake, this slender, yellow-ish tree-dwelling predator grows up to five feet (1.52 meters) long. Pretty big! 2.
Tiger Keelback There are a handful of snake species throughout the world that can store toxins acquired from their food supply, and Japan’s tiger keelback is one of them. It’s equipped with specialized organs on the back of its neck called nuchal glands, where it stockpiles the toxins of the poisonous toads it feasts upon. 1. Barbados Threadsnake At a mere four inches (10.2 cm) long and with the thickness of a spaghetti strand, the Barbados threadsnake is the world’s smallest snake species. It’s so tiny, it can comfortably curl up on top of a U.S. quarter. The snake’s miniature size can be attributed to an extreme version of a condition called island dwarfism, which is when a species evolves to have reduced bodily proportions as a result of being genetically isolated to a small environment.
OPCFW_CODE
compiz on aiglx davidr at novell.com Fri Mar 10 04:49:26 PST 2006 On Thu, 2006-03-09 at 10:55 -0800, James Jones wrote: > On Thursday 09 March 2006 06:53 am, David Reveman wrote: > > On Mon, 2006-03-06 at 14:01 -0500, Kristian Høgsberg wrote: > > > Hey, > > > > > > With a bit of hacking, I managed to get compiz (and glxcompmgr) > > > running on aiglx. I'm running it on my i830 laptop, and the > > > performance is actually quite impressive. > > > > > > Most of the aiglx fixes were just bug fixes or missing minor > > > features and have been committed to the accel_indirect_branch. > > > A couple of fixes are less committable and I've put them here: > > > > > > http://freedesktop.org/~krh/compiz-on-aiglx > > > > > > The aiglx-gl-include-inferiors.patch makes the DRI driver draw > > > over child windows, and the patch is really simple. The > > > question is what kind of protocol do we need to enable this... > > > an FBConfig attribute might work, or maybe the question is, why > > > does a redirected window affect output at all again? > > > Furthermore, for compiz to work, the root visual must be double > > > buffered, which really just depends on how the DDX driver > > > initializes the visual configs. The i830 sets it up correctly, > > > but the radeon driver needs something like this: > > > > > > > > > http://people.freedesktop.org/~ajax/radeon-prefer-db-visuals-1. > > >patch > > > > > > to make sure the root window gets a double buffer visual. > We plan to provide an X option in our driver that turns off clipping > of GLX drawing to the root window. This will be a workaround for > users who wish to experiment with GLX-based composite managers > until X servers and composite managers using the composite overlay > window are available. > > > The aiglx-tfp-damage.patch adds damage tracking to the naive > > > GLX_EXT_tfp implementation in aiglx.
It sometimes misses > > > damage events it seems and it really should track damage per > > > texture object so it's not committed yet. > > > > > > The compiz-aiglx-changes.patch makes a couple of changes to > > > compiz to make it work on aiglx: first, as I remember from > > > xdevconf, the consensus around GLX_EXT_tfp semantics was that > > > it binds a copy (conceptually) of the pixmap as the texture. > > > This is what aiglx implements, but it looks like the Xgl semantics > > > is that the texture and pixmap share the same memory and only > > > binds and releases the pixmap on pixmap creation and > > > destruction time. The patch changes compiz to bind and release > > > whenever the texture is used, which is why the damage tracking > > > tfp patch above is essential for decent performance. I'm not > > > sure the copy semantics makes sense, though, but I'll write > > > another email about that. Another change in the patch is > > > support for the GLX_Y_INVERTED_EXT attribute on a GLX drawable. > > > Xgl binds the pixmap y-inverted, aiglx doesn't, so compiz needs > > > to know how to handle this. Of course, this should be an > > > FBConfig attribute, not a drawable attribute. > > Yes, we agreed that GLX_EXT_tfp semantics should be that it binds > > a copy and it makes sense for being able to completely avoid > > tearing. I haven't updated compiz and Xgl for that yet. Textures > > and pixmaps will continue to share the same memory in Xgl so to > > get copy-on-bind semantics I have to be able to lock a drawable > > so that no other client can write to it. I don't know how hard > > that will be but updating compiz to bind before every draw could > > be done right without breaking anything. > The copy-on-write wording was thrown around a lot at the dev > conference, but I don't think it's what was generally desired. This > would require one of the following: > 1) block all drawing to a drawable while it is bound as a texture.
> This just can not be done without extensive changes in the X > server. From what I can see, it would require determining which > drawables are affected by incoming operations, then backing out of > the operation, putting that client to sleep until the drawable was > unbound from the texture, then waking up the client and resuming > the operation. > 2) Doing an implicit copy if the drawable is going to be damaged > while it is bound to a texture, then updating the texture to point > to the copy. This might be slightly easier than the above, but it > still becomes very involved with direct rendering clients. Also, > it would mostly eliminate the benefit of binding drawables directly to > textures. If we need to copy, it removes all the performance benefit. > I interpreted the discussion we had this way: > The drawable can not be rendered to, by X, OpenGL, or any other > direct rendering client, while it is bound to a texture. It is the > application's job to enforce this. This can be done with server > grabs. If the application obeys this rule, the BindTexture > operation will indeed act as if it were a CopyTexSubImage operation > in this case. > If the application does not want to grab, all bets are off. The > contents of the texture are undefined if it is rendered to while > bound. That said, I suspect this will still work on most > implementations, but there may be tearing. We have all seen it > work on Xgl, I'm ensuring it works in our driver, and it sounds > like it will work in aiglx since you are doing damage tracking to > update the texture. Sounds good to me. > I have an EXT_tfp spec update outstanding that addresses this and a > few other minor issues with the current version. I'll try to clean > this up and send it out later today. > I'm also working on a patch that will bring compiz in sync with the > extension as defined in the specification, which should build on > your compiz patch Kristian. Great, I appreciate that.
I planned to do the work needed to get compiz in sync sometime soon but I'll just wait for your patch instead then.
OPCFW_CODE
Invalid latestValidHash caused Lodestar to halt on invalid block See https://discord.com/channels/593655374469660673/593655641445367808/1200195786672185438 From @g11tech Ok from logs I understand the issue: lodestar tries to download all the forks, and some peers which accepted block 19056922, hash: 0x5970f4b49ecfba must be serving the same, which lodestar would send to besu for validation, and as expected besu would reply with invalid, but here is the problem: Jan 21 13:14:17 nuc13 besu[59010]: 2024-01-21 13:14:17.083-05:00 | vert.x-worker-thread-0 | WARN | AbstractEngineNewPayload | Invalid new payload: number: 19056922, hash: 0x5970f4b49ecfbad6b02a1cc8fad8a0e47382576b1b28eeb4ec2a8c1649fa6c90, parentHash: 0x932123bf49f6ffce68aac29820bda6028d3bf7aebbebd5fdc758dac9d1c81c46, latestValidHash: 0x0000000000000000000000000000000000000000000000000000000000000000, status: INVALID, validationError: Block already present in bad block manager. besu is not supposed to send latestValidHash: 0x0000000000000000000000000000000000000000000000000000000000000000; it can send it as null. This response (0x0000000000000000000000000000000000000000000000000000000000000000) is only reserved for when the valid block in besu's forkchoice is PREMERGE, which is obviously incorrect at this point in time. This causes us to invalidate all the post-merge forkchoice. Although we can add a check for the same and prevent this issue on our side, this is still essentially a besu bug. Here are the specs: https://github.com/ethereum/execution-apis/blob/main/src/engine/paris.md#response wemeetagain — 28/01/2024 02:22 ok, and for besu, they really should send a real LVH if feasible; if they don't want to do that, send null instead of 0x0. g11tech — 28/01/2024 02:24 yes, correct wemeetagain — 28/01/2024 02:24 I mean, sending a real LVH is definitely possible, afaik other ELs sent a real LVH in this scenario, so i'm assuming it's just an oversight.
g11tech — 28/01/2024 02:25 yes, these are unfinalized blocks, and this invalid block's parent would still be in their non-pruned tree Verify the corresponding Hive tests are failing; this could be a regression. we (lodestar) should now be able to deal with it with this PR: https://github.com/ChainSafe/lodestar/pull/6361, but yea, would be nice to have this addressed on besu too :heart:
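Lodestar's actual fix is the TypeScript in the PR above; purely as an illustration, the client-side guard amounts to something like the following Python sketch (the function name is hypothetical). On an INVALID response, an all-zero latestValidHash is treated as if the EL had sent null, so the client does not invalidate its entire post-merge forkchoice.

```python
# Sketch of a client-side guard for the besu behaviour described above.
# Illustrative only -- not Lodestar's actual code.

ZERO_HASH = "0x" + "00" * 32  # the value reserved for the pre-merge case


def sanitize_latest_valid_hash(status: str, latest_valid_hash):
    """Treat an all-zero latestValidHash on an INVALID payload status as
    null, since per the engine API the zero hash is only reserved for a
    pre-merge latest valid block."""
    if status == "INVALID" and latest_valid_hash == ZERO_HASH:
        return None
    return latest_valid_hash
```

With this normalization in place, a None latestValidHash simply means "unknown", and the client keeps its post-merge forkchoice intact instead of rolling everything back.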
GITHUB_ARCHIVE
== Overview == * ''garage'' is where the projects for maemo are hosted. Compare it with sourceforge. * ''diablo'' is the version of maemo. Compare it with lenny. * ''maemo'' is the software that runs on a device like Nokia's N810 * ''scratchbox'' is a cross-compiling environment to enable you to create software for maemo on an i386. * ''busybox'' is a single binary that allows you to run commands like ls, cat and bunzip2 * ''hildon'' is a desktop like plasma For more information see wikipedia. == KDE on Maemo == This page is for collecting information about getting KDE running on the Maemo platform used by the Nokia N800 and N810. * Instructions on getting a Maemo development environment on openSuSE http://en.opensuse.org/Maemo * There is also a KDE on maemo project at http://kde.garage.maemo.org/ * [http://www.forwardbias.in/data/articles/qt_on_maemo.txt Tutorial how to get Qt installed on maemo] * [[Projects/Maemo/kdepim|Tutorial how to compile KDEPIM on maemo]] * [[Projects/Maemo/KDE4_on_n810|Tutorial how to get KDE installed on Maemo]] * [[Projects/Maemo/KDE4_on_Mer|Tutorial how to get KDE installed on Mer]] === Some blogs entries about KDE on Maemo === ''Most recent on top'' * [http://www.kdedevelopers.org/node/3672 Marijn Kruisselbrink: Having fun with qemu] * [http://www.kdedevelopers.org/node/3662 Marijn Kruisselbrink: KOffice on Maemo] * [http://blog.forwardbias.in/2008/08/n810-is-awesome.html Girish Ramakrishnan: N810 is awesome] * [http://www.kdedevelopers.org/node/3628 Fredrik Gladhorn: Various Qt apps] * [http://www.kdedevelopers.org/node/3624 Marijn Kruisselbrink: KDE packages for maemo] * [http://www.notmart.org/index.php/Software/Misc_plasmoids_on_n810 Marco Martin: Misc plasmoids on n810] * 
[http://www.notmart.org/index.php/BlaBla/Akademy,_810_and_stuffs Marco Martin: Akademy, 810 and stuffs] * [http://www.kdedevelopers.org/node/3623 Richard Dale: Building Ruby on the N810] * [http://www.fredemmott.co.uk/blog_156 Fred Emmott: Ogg/Vorbis on N810] * [http://www.omat.nl/drupal/content/N810-and-OpenStreetMap-and-toma Tom Albers: N810 and OpenStreetMap and toma] * [http://tsdgeos.blogspot.com/2008/08/maemo-scratchbox-on-amd64.html Albert Astals Cid: Maemo scratchbox on amd64] * [http://blogs.forum.nokia.com/blog/kate-alholas-forum-nokia-blog/maemo/2008/08/13/akademy-2008-embedded-day Kate Alhola: Akademy 2008 Embedded day] * [http://www.gnuton.org/blog/2008/07/qt-4-maemo-the-new-experience/ Antonio Aloisio: Qt 4 Maemo: the new experience.] * [http://www.kdedevelopers.org/node/3605 Marijn Kruisselbrink: Getting KDE on an n810.] * [http://www.kdedevelopers.org/node/3575 Marijn Kruisselbrink: Sound on Maemo] * [http://www.kdedevelopers.org/node/3546 Marijn Kruisselbrink: Plasma on Maemo]
OPCFW_CODE
Perform the following tasks in the order listed: After installation, if you want to run Operations Center startup and shutdown commands from any location, set the proper environment path variable. For UNIX, add the Operations Center /bin directory to the path variable so that the system can locate the command line utilities. For Windows, the path can be set as a system variable or a user variable. To set the environment path variable, do the following as necessary: To set the PATH variable: Log in as the user named formula, or other root user that you created earlier. To edit the profile or cshrc file (depending on which command shell you are using), perform one of the following steps: If there is already a line containing PATH= or setenv PATH, append :/OperationsCenter_install_path/bin to that line. If there is no line containing PATH= or setenv PATH, add the following line and save the file: bsh or ksh: PATH=$PATH:/OperationsCenter_install_path/bin; export PATH (for csh, use setenv PATH `echo $PATH`:/OperationsCenter_install_path/bin) Log out as the user named formula, then log back in so that the path changes can take effect. To set the path as a system variable: Open the Windows Control Panel, and edit the environment variables. For Windows 2008 and 2012 you can search on environment variables to find the link. In the tab, click the button. Do one of the following: If the System variable exists: Select the item under , and click . Append C:\OperationsCenter_install_path\bin (or the name of the drive and installation directory you specified on installation) to the declaration. If the variable hasn’t been defined: Click New for the user’s variables. Specify Path for the variable name and type %Path%;drive:\OperationsCenter_install_path\bin for the value. If you plan to use an HP OpenView adapter, NetView adapter or the Dashboard, and Operations Center is installed on a UNIX system, it is necessary that images be sent through Java over X11 protocol so they can be rendered in the Operations Center console or dashboard.
This requires the installation of a virtual framebuffer from your operating system vendor. To configure the display: Download and install a virtual framebuffer from your operating system vendor: Linux: Install an X11 package containing Xsun or Xvfb. Solaris: Install an X11 package containing Xvfb. Perform the following system configurations: Edit the user profile for the formula user to set the DISPLAY environment variable to reference the display configured for the virtual framebuffer. Configure the virtual framebuffer to start automatically at system start or as part of the Operations Center startup process. Start the virtual framebuffer. Issue the following command to verify that the virtual framebuffer is running: ps -ef | grep vfb The output will vary depending on your system but shows you the location of the virtual framebuffer, as well as the display and screen name. For example: On Solaris: /application_install_path/Xsun :display_name +nkeyboard +nmouse -dev vfb screen screen_name screen_resolution On Linux: /application_install_path/Xvfb :display_name -screen screen_name screen_resolution Do the following to specify the property in the Configuration Manager for where the server listens for connection with the virtual framebuffer: Do the following for the Operations Center server: Do the following for the Operations Center Dashboard: Open the Operations Center Dashboard’s Configuration Manager. For example, if the output in Step 4 is /usr/X11/bin/Xvfb :1 -screen 0 1024x768x8, set the x11 Display Name to :1 or :1.0. You can start the Operations Center server either manually or automatically during start up. The server can take a few minutes to start. Check the daemon.trc file for an “…Address is in use…” message, which indicates that the Operations Center server is running. Or, check the status using the mosstatus command. Do not start the Operations Center console until the Operations Center server starts.
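The mapping from the `ps` output above to the x11 Display Name property can also be done programmatically. A minimal Python sketch (the helper name is illustrative, not part of Operations Center) that pulls the display name out of an Xvfb or Xsun command line:

```python
import re


def display_from_framebuffer_cmd(cmdline: str):
    """Extract a numeric X display name (e.g. ':1') from a virtual
    framebuffer command line as reported by `ps -ef | grep vfb`.
    Returns None when no display token is present."""
    match = re.search(r"\s(:\d+)(?=\s|$)", cmdline)
    return match.group(1) if match else None
```

For the sample output `/usr/X11/bin/Xvfb :1 -screen 0 1024x768x8`, this yields `:1`, which is the value (or `:1.0`) to enter for the x11 Display Name.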
If there are any problems with starting the server, review the trace log files in the /OperationsCenter_install_path/logs directory. The daemon creates a trace log, as does the server. The trace logs store various error messages, such as The license key could not be found. Use any text editor to view the trace logs. You can also stop the Operations Center server. If a message displays when shutting down Operations Center from Windows, adjust the Windows service timeout parameter, which is stored in the Windows registry. The timeout parameter is specified in milliseconds. The registry key is: To start or stop Operations Center services: To manually start the Operations Center Server, do one of the following: Windows: Select under > > . Starting Operations Center via the Windows Start menu starts the Operations Center server locally, but not as a service. Therefore, when you log out, the Operations Center server stops. UNIX: Log in as the user formula (or any user with root privileges) and from the /OperationsCenter_install_path/bin directory, type mosdaemon at a command prompt. By default, the Operations Center daemon is configured to automatically start. If this is disabled, you can re-enable automatic startup by editing the property for the service. To enable automatic start up for Operations Center: From the desktop, click , , . For Windows 2003, in Category View, double-click the link. Double-click the icon. The Services dialog box is displayed. Right-click and select to display the NetIQ Operations Center Daemon Properties dialog box. Select from the list. During installation, if you configured the server to automatically start, it starts shortly after the Operations Center daemon starts by default. If not, you will need to start it manually. To start the Operations Center server: If the Operations Center daemon is not running, from the /OperationsCenter_install_path/bin directory enter mosdaemon at the command prompt. Enter mosstart Formula.
The following messages display: Starting the server "Formula" ... The server "Formula" has been started. From the desktop, click the button and then select > > . From the desktop, click > > . For Windows XP, click > > . Double-click the icon to display the Administrative Tools dialog box. Double-click the icon to display the Services dialog box. Right-click the line and select Stop. Stopping the daemon also shuts down the server. From the drive:\OperationsCenter_install_path\bin directory enter mosstop -shutdown at the command prompt, where drive is the installation drive. The server and mosdaemon stop. From the /OperationsCenter_install_path/bin directory enter mosstop -shutdown at the command prompt. The Operations Center server stops. After Operations Center software is installed, verify that the installation was successful before proceeding to install any other Operations Center software on the Operations Center server. To do this, start Operations Center and perform a status check. The mosstatus command verifies that the Operations Center software started correctly. This command also lists the name and status of any adapters that are running, and the number of active sessions. At this point, there are no adapters running and no active sessions. Refer to the Operations Center Adapter and Integration Guide for information on defining adapters. To check the Operations Center server status: To verify the installation on Windows: From the drive:\OperationsCenter_install_path\bin directory, type mosstatus -all at the command prompt, where drive is the installation drive. A message displays stating that the Operations Center software was successfully installed and is running. Proceed to Section 2.3.5, Configure the Installation. To verify the installation on Unix: From the /OperationsCenter_install_path/bin directory, type mosstatus -all at the command prompt.
A message similar to the following displays: Server Formula Status: 0 adapter(s), 0 active session(s) This message indicates that the Operations Center software is successfully installed and running. The message indicates that no adapters are running, as none has been defined. Proceed to Section 2.3.5, Configure the Installation. After installing and verifying the installation, configure the installation and/or access the Operations Center console. Some configurations might require restarting the Operations Center server. Launch the Operations Center console or deploy the console to management machines. NOTE: The default user account is admin with a password of formula. For instructions, see Section 3.0, Operations Center Console Deployment. Do one of the following to protect the security of your installation: Change the default password on the admin user account. Delete the default admin account and create a new administrator account. For information about user and group accounts, see the Operations Center Security Management Guide. After installing and verifying the installation, configure the installation. Run the Configuration Manager: Configure settings for the Operations Center server as well as Configuration Storage. If you are installing on a cluster, run it after completing all the installations. Create databases: If you plan to use any external databases with Operations Center, you should create them. Be sure to see the sample scripts and Readme files in the /OperationsCenter_install_path/databases/samples directory. Define databases: Many Operations Center data stores require database definitions that are created and managed in the Operations Center console. Determine server security and networking settings, such as IP addresses: For additional information about security in Operations Center, refer to the Operations Center Security Management Guide.
Determine port usage: Operations Center is installed using default ports, but do consider the ports currently used and how they fit within your environment. Determine memory usage: The Java Virtual Machine is configured with parameters for memory allocation and installs with these default values. Before adjusting the memory, it is important to understand how memory is allocated.
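For scripted health checks, the one-line status printed by the mosstatus verification step above is easy to parse. A minimal Python sketch, assuming the exact message format shown in the sample output (the helper is hypothetical, not an Operations Center tool):

```python
import re

# Matches lines like:
#   Server Formula Status: 0 adapter(s), 0 active session(s)
STATUS_RE = re.compile(
    r"Server (?P<server>\S+) Status: (?P<adapters>\d+) adapter\(s\), "
    r"(?P<sessions>\d+) active session\(s\)"
)


def parse_mosstatus(line: str):
    """Parse one `mosstatus -all` status line into a dict, or return
    None when the line does not match the expected format."""
    match = STATUS_RE.match(line)
    if not match:
        return None
    return {
        "server": match.group("server"),
        "adapters": int(match.group("adapters")),
        "sessions": int(match.group("sessions")),
    }
```

A freshly verified installation should parse to zero adapters and zero sessions, matching the expected state before any adapters are defined.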
OPCFW_CODE
Issue/feature tracking with Git and Redmine under Windows Over the last couple weeks, we’ve been trying to pick out a solid combination of tools for a Windows based dev environment. The key components are: source control (SCM), continuous integration (CI) and issue/bug tracking. Our choices: Git for SCM, TeamCity for CI and Redmine for issue/feature tracking. We originally tried out Fogbugz, but its integration with SCM solutions, particularly Git, is somewhat limited. Even for SVN it relies on external web interfaces such as WebSVN. Still, it does look like a great issue/bug tracker and I hope they can resolve their SCM integration issues soon. Up next, Redmine. Redmine has excellent built-in SCM integration for a wide range of systems including SVN, Git, Hg and more. It has its own repository browser built in and thus is not dependent on any other components to make it work. Even more, it comes with a host of community supported plugins. Two that caught my eye were a charting plugin to provide burndown and other charts and a code review plugin. Setting up Redmine is fairly easy, even for Windows. You can download a Redmine WAMP stack from Bitnami (or a LAMP stack for Linux), which will get you set up and running with Redmine, MySql, Ruby and Apache in minutes (it also includes a Subversion server which you can use if you really prefer SVN). Unfortunately, while the 0.8.4 version included by Bitnami supports Git repositories, it is missing support for Git branches. That feature was added to the trunk after 0.8.4. I decided I wanted to have that feature and I also wanted to have the ability to update my Redmine installation from the trunk whenever I wanted. I spent some time reading through their Wiki documentation on upgrading and it definitely seemed doable. Sadly, though it was a breeze to get Bitnami’s stack installed, it was not as straightforward to get it working with the trunk version of Redmine.
Below, I outline for posterity the steps I took which finally got it working for me. I hope this will save others the pain and suffering (and a day’s worth of my time) that I spent fiddling around. Before starting I recommend reading the Wiki pages on installing and upgrading Redmine. We will be performing several of the steps described there and the Wiki can be a good reference if you get stuck. Updating Bitnami Redmine from 0.8.4 -> trunk - First install the Bitnami stack. I will leave this part to you, it is pretty simple - Stop all the Bitnami services except for MySQL, we will need this to migrate the database and test - Start the Bitnami command environment (this sets up environment variables so you can use ruby from the command line) - Check-out the trunk of Redmine into a second folder in the bitnami apps folder, I called mine /redmine-trunk - If you are upgrading a working version make sure to backup your database and files as outlined here - Copy the database.yml and email.yml files from /redmine/config to /redmine-trunk/config - Run: rake db:migrate RAILS_ENV="production" - If you have plugins to update run: rake db:migrate:upgrade_plugin_migrations RAILS_ENV="production" rake db:migrate_plugins RAILS_ENV="production" - You must also run: - Bitnami comes with rails 2.3.2 but Redmine expects 2.2.2 (the included Redmine has 2.1.2 frozen to it, but the trunk needs 2.2.2 minimum). For my purposes I just downgraded rails, but I believe you can instead set RAILS_GEM_VERSION = '2.3.2' in environment.rb. To downgrade you can run the following: gem install rails -v=2.2.2 gem uninstall rails -v=2.3.2 - Finally, you need to install the mysql gem with gem install mysql. You will get an error message about documentation, just ignore it. - Now here is where I got stuck for quite a while (though it wasn’t the only sticking point). It seems Ruby has a small issue with MySQL. I found the solution here. Copy the dll they suggest into the Bitnami ruby/bin/ folder.
- Ok, let’s test it: go to the /redmine-trunk folder and run ruby script/server -e production. You should be able to access http://localhost:3000/ and everything should be working. Ctrl-C out of that; we have some more work to do still. Copy over the following folders from /redmine to /redmine-trunk: /conf (has the apache conf file), /scripts (has the service scripts used by bitnami), /lang (probably not needed but let’s get it anyway) - Finally rename the folders. I called /redmine -> /redmine-old and /redmine-trunk -> /redmine. - Now start up the two mongrel services and then the apache service. If everything went as planned you should be able to access Redmine at the port you originally set up during the Bitnami install. Wow, it seems a lot easier all written out, but believe me this took all day to get working (and then some). Please let me know if this doesn’t work for you (because I am thinking I may have forgotten something…). While I prefer to set up software like this under Linux, we all know that isn’t always an option. Those of us chained to Windows environments shouldn’t have to suffer alone. Just from initial testing and playing around I can see that Redmine is going to be a great product.
OPCFW_CODE
Category: Auditing » Source Code 'RatScan' is a security tool and front-end for the RATS scanner which can check your source code for weaknesses, vulnerabilities and exploits. It can detect potentially dangerous coding practices and advise you on the risks and the various steps needed to secure your code further. It is compatible with multiple programming languages including PHP, C/C++, Perl and others. RATS (Rough Auditing Tool for Security) RATS, the Rough Auditing Tool for Security, is a security auditing utility for C and C++ code. RATS scans source code, finding potentially dangerous function calls. The goal of this project is not to definitively find bugs (yet). The current goal is to provide a reasonable starting point for performing manual security audits. Fenris started as a binary code tracing utility, but since the first release, it has become more and more difficult to write a simple summary of its functionality. Fenris is a comprehensive multi-level code tracer, a bit of a C decompiler, an interactive modular debugger, a code analysis tool, an execution path visualisation tool, a function fingerprinting and symtab recovery tool - it all depends on how you use it. Fenris is suitable for everything from bug tracking or protocol analysis to forensics and reverse engineering, doing all the mindless work for you and making your life a bit easier. SecureCFM is dedicated to the audit of ColdFusion source code (CFML), in order to detect and then correct possible Cross Site Scripting vulnerabilities. In Phrack 54, route|Mike Schiffman wrote a series of patches for OpenBSD 2.4 for Trusted Path Execution (TPE). Stephanie brings a modified version of these up to speed for OpenBSD 2.8 and 2.9, along with some additional features. Stephanie also brings restricted symbolic links, ala the openwall patches for linux. As time permits, I'm still working on adding additional features, and will add bits of the openwall stuff I like.
The basic goal is to add an extra layer of security without being a monumental pain in the ass to legitimate users, so some things won't be there. I haven't added the additional hard link restrictions of the openwall patch, but will do something about this later as time permits. cqual is a type-based analysis tool for finding bugs in C programs. It extends the type system of C with extra user-defined type qualifiers. The programmer annotates their program with the appropriate qualifiers, and cqual checks for errors. Incorrect annotations indicate potential bugs. cqual presents the analysis results using Program Analysis Mode, an emacs-based GUI. Among other applications, cqual can be used to detect potential format-string vulnerabilities. It includes default configuration files to detect format-string bugs out-of-the-box. Strace is a system call tracer, i.e. a debugging tool which prints out a trace of all the system calls made by another process/program. The program to be traced need not be recompiled for this, so you can use it on binaries for which you don't have source. System calls and signals are events that happen at the user/kernel interface. A close examination of this boundary is very useful for bug isolation, sanity checking and attempting to capture race conditions. Source Code Scanner For File Race Conditions 1.0b Programs sometimes contain unsafe file handling code, particularly that involving race conditions. These commonly occur where a check is performed on a file object (for existence, file owner, group or mode) and some use of the file is decided upon as a result. This can be insecure if changes occur affecting the file object between the check and the use. This will be a problem if the code contains the assumption that a check remains valid (a programming condition) and the file object concerned can actually be changed by an attacker (an environmental condition).
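The check-then-use race that scanner looks for, and the usual fix, can be shown in a few lines. A Python sketch for illustration (the scanners above target C and shell sources, but the pattern is identical): the unsafe version leaves a window between the existence check and the open, while O_CREAT|O_EXCL makes the check and the creation a single atomic operation.

```python
import os


def unsafe_create(path: str) -> bool:
    """Check-then-use: an attacker can create or swap the file in the
    window between the exists() check and the open() call."""
    if not os.path.exists(path):      # check
        with open(path, "w") as f:    # use -- race window is here
            f.write("data")
        return True
    return False


def safe_create(path: str) -> bool:
    """O_CREAT | O_EXCL asks the kernel to check and create atomically,
    so there is no window for an attacker to exploit."""
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o600)
    except FileExistsError:
        return False
    with os.fdopen(fd, "w") as f:
        f.write("data")
    return True
```

The same idea in C is open(path, O_CREAT|O_EXCL|O_WRONLY, 0600); either way, the fix is to let the kernel perform the check and the use as one operation rather than trusting a stale check.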
ITS4 is a command-line tool for statically scanning C and C++ source code for security vulnerabilities. ITS4 scans through source code for potentially dangerous function calls that are stored in a database; anything that is in the database gets flagged. ITS4 tries to automate a lot of the grepping usually done by hand when performing security audits.

Strace for NT is a debugging/investigation utility for examining the NT system calls made by a process. It is meant to be used like strace (or truss) on Linux and other Unix OSes.
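Tools like RATS and ITS4 boil down to matching source text against a database of risky calls. As a rough illustration of the idea - not the actual tools, and with a toy rule set rather than their real databases - a pattern scan can be sketched in a few lines of Python:

```python
import re

# Toy database of C calls that RATS/ITS4-style scanners typically flag.
# The entries and advice strings are illustrative, not the tools' real rulebases.
RISKY_CALLS = {
    "gets": "unbounded read; use fgets",
    "strcpy": "no bounds check; use strncpy/strlcpy",
    "sprintf": "no bounds check; use snprintf",
    "system": "shell injection risk if input is untrusted",
}

def scan_source(text):
    """Return (line_number, call, advice) for each flagged call site."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for call, advice in RISKY_CALLS.items():
            if re.search(r"\b%s\s*\(" % call, line):
                hits.append((lineno, call, advice))
    return hits

sample = 'int main(void) {\n    char buf[8];\n    gets(buf);\n    strcpy(buf, "hi");\n}\n'
for lineno, call, advice in scan_source(sample):
    # flags gets() on line 3 and strcpy() on line 4
    print("line %d: %s() - %s" % (lineno, call, advice))
```

A real scanner parses the language rather than grepping lines (to avoid flagging comments and strings), which is exactly the manual-review burden ITS4 tries to reduce.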
This week, all eyes of the software community will be fixed on Microsoft’s Build conference. Microsoft and its partners are set to announce new technologies and lay out their vision for the future of software development. Recent years have seen the narrative take a decidedly cross-platform approach. Visual Studio Code gives developers tools to create what they want no matter their OS of choice. .NET Core extends the reach of the popular .NET Framework to Mac and Linux. Finally, Xamarin, which was acquired by Microsoft in 2016, lets app developers write their app once, then publish it for Android, iOS, and Windows devices. As a Visual Studio 2017 Premier Launch Partner, PreEmptive Solutions is pleased to announce, as part of Microsoft Build 2017, a new way to integrate Dotfuscator’s protection into Xamarin apps. This new Dotfuscator-Xamarin integration has a number of advantages: - Easy setup – Developers just download an MSBuild targets file and import it into their projects with a few simple property changes. No more need to define custom targets or copy files as a post-build step! - Deep pipeline integration – The new integration automatically puts Dotfuscator into the Xamarin build pipeline, protecting apps as they’re built in Visual Studio or MSBuild. Developers don’t have to worry about forgetting the protection step, since it’s now included as part of the build. - Incremental builds – Don’t waste time reapplying Dotfuscator’s protection when your project hasn’t changed. The integration is smart enough to know when a build is necessary, and when it can be skipped. - Automatic configuration – Get up and running quickly: the integration creates a Dotfuscator config file automatically, with sensible defaults for the project. Instructions are provided for further configuration, including a case study of how to discover which additional identifiers (if any) need to be excluded from renaming obfuscation. 
- Detailed instructions – We’ve detailed how to use the integration with a well-known Xamarin.Forms sample. Developers can see the practical application of these instructions through an included git repo, which walks through the process step-by-step. - All major project types supported – The integration supports projects targeting Android, iOS, and Universal Windows (UWP). - Proven protection strategy – The app is protected by Dotfuscator, the industry standard .NET obfuscation and protection tool. - Free to start – The integration works not only with the commercial Dotfuscator Professional Edition, but also with Dotfuscator Community Edition (CE), which is included with Visual Studio. Developers can use the integration to quickly discover the capabilities of Dotfuscator’s protection. To get started, see Integrating Dotfuscator’s Protection into Your Xamarin Apps in Visual Studio. There you can read the step-by-step instructions, as well as download the necessary MSBuild targets file. We plan to keep introducing features like this, to make the protection process easier and more accurate, for Xamarin and all other kinds of apps. Stay up-to-date with the latest Dotfuscator features by visiting the Dotfuscator Downloads page. For announcements and other information, keep an eye on our blog and follow our Twitter account, @PreEmptive.
There are many fine examples of dynamic menus - menus that display sub menus when you hover your mouse pointer over them or bring focus to them in other ways, such as tabbing through links with your keyboard. I was reading a recent article on AlistApart.com regarding hybrid CSS menus, and the discussion that followed the article showed a real demand for a robust, cross-platform, accessible, dynamic menu. Here are some features the menu should have:

- Be written with web standards: valid XHTML, ECMAScript, and CSS. (There are some nice Flash implementations out there, but they have accessibility issues.)
- The menu should degrade gracefully, allowing site visitors to navigate even when scripting is turned off in their browsers.
- The menu should work well for people using a mouse as well as people using a keyboard.
- When a user tabs to a main link, the sub menu should appear as though the user hovered over it with a mouse.
- And users should be able to tab through the sub links as well.
- The menus should work well in all mainstream, current browsers (i.e., it doesn’t need to work with Netscape 4.x or its peers).
- It should allow for text-zooming, and it should remain usable when the font size triples.
- The menus should be built with usability in mind. They should be as easy to use as possible, from a purely interface perspective. For example, the links should be easy to click on, and it should be easy to navigate a sub menu without accidentally closing it. (Review Fitts’s Law for some principles.)

I’ve seen menus that support these features, but none that support them all. In fact, Adam Richardson (business partner) wrote a very nice menu system that worked well with keyboard tabbing as well as mouse events.

- Hybrid CSS dropdowns by Eric Shepherd. On ALA.
- Suckerfish Dropdowns by Patrick Griffiths. On ALA.
- I’ve heard Project Seven’s Pop Menu Magic works in darn near every browser.
They claim to be “accessible out of the box,” which I believe, but I’m curious whether their menus work with just keyboard input. The only problem with this is that I don’t want to pay for a menu system when chances are I can build one and learn more by doing it.

- In his comment, this Aleksandar fellow has some nice points and some helpful links, though he comes off as a jackass.

There are many solutions implemented out there, some better than others, and very few that I’ve seen have been flawless. When I get some spare time (ummm), I’d love to take a crack at this. Anyone else have a menu system that matches up to the features list above?
I’d like to be able to select a group of meshes and set their positions to the same x (or y or z, whatever) co-ordinate - I’d like to line them up, if I’m making any sense. Is there a quick way to do this? If so, how? Or must I do it manually?

I understand the question, but I am pretty sure the answer is no. You could select all the meshes at once and use the snap menu (Shift-S) to place all the meshes at the exact same point (Selected to Cursor). Or, if they are all the same object, you can duplivert them along a line. This is something to work on, with the code being available now. The snap menu should have more functions … something like Selected to CursorX, CursorY or CursorZ.

Thanks! In this case snapping to grid has done the trick nicely.

When you say ‘group of meshes’ I assume you mean mesh objects, because moving all your mesh vertices to the same x-co-ordinate is easily achieved by using a middle mouse button click to constrain the transformation to a single axis. This is explained in a message just a few posts down.

There is a great Python script called KlopUtils made by Klopes. You can find out more about it at the following links:

A quick example to align a group of mesh objects along the x-axis (assuming you are using version 2):

1.) In top view, throw in a whole bunch of objects scattered all over the place.
2.) Select all of them!! Make sure that the object you wish to align to is the active object!! (last one selected; it should be a lighter shade of pink)
3.)
In KlopUtils make the following adjustments: First change the top selection box to ‘Align’ (Alineacion in Spanish). In the central group of buttons left of ‘Align’, set [y] and set the box to the right of [y] to Origins (this assumes that you have set the origins of each object previously). Below [y], set separation to ‘0.0’.

(Note: Holding down Ctrl / Ctrl_Shift / Shift while you move your mouse on the slider will allow you to control how coarse or fine the adjustments are made to the number setting, or you can Ctrl_Left Mouse Click on the value to change it by typing the number in.)

Note: If you use version 1, you may need to rotate the 3D view slightly for the 3D Window to update… (This was fixed by Klopes in Version 2, and version 2 has more features - so be sure you download Version 2.)

If you need any more help, just ask. Klopes does a very good job of explaining how to use the Python script by integrating the help into a blend file.

Wow… I was just about to write something very similar. This is great. Yes, Klopes utils is probably the one Python script that I use the most. Kino’s rusty knife and a few specialized ones like JMS’/cobalt’s Shell script and Justin’s gear script. Hopefully, they will restore the list of Python scripts that used to be at the Blender3D site - there are lots of handy scripts that I’m sure I forgot to mention. Also, it’s good to hear that Klopes is updating KlopUtils to work with 2.25.
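For readers who want the logic rather than the script, the alignment step KlopUtils performs can be sketched in plain Python. This is not the Blender or KlopUtils API - objects are modelled as simple dictionaries purely for illustration:

```python
# Plain-Python sketch of the "Align to Origins" idea described above:
# snap every selected object's chosen coordinate to the active object's
# value. NOT the Blender/KlopUtils API; dicts stand in for objects.

AXES = {"x": 0, "y": 1, "z": 2}

def align(objects, active, axis="x", separation=0.0):
    """Set each object's `axis` origin coordinate to the active
    object's, optionally spacing objects out by `separation` units."""
    base = active["origin"][AXES[axis]]
    for i, obj in enumerate(objects):
        origin = list(obj["origin"])
        origin[AXES[axis]] = base + i * separation
        obj["origin"] = tuple(origin)

cube   = {"name": "Cube",   "origin": (4.0, 1.0, 0.0)}
cone   = {"name": "Cone",   "origin": (-2.0, 3.0, 5.0)}
sphere = {"name": "Sphere", "origin": (7.0, -1.0, 2.0)}

# Sphere is the "active object" (last selected), so everything lines up on x = 7.0.
align([cube, cone, sphere], active=sphere, axis="x", separation=0.0)
print([obj["origin"][0] for obj in (cube, cone, sphere)])  # [7.0, 7.0, 7.0]
```

The `separation` slider in the script corresponds to the optional spacing term here: with a non-zero value the objects end up evenly spaced along the axis instead of stacked at one coordinate.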
Fulcrums, Newton-metres and gas lifts

OK, so I know that with a 5mtr bar, and a 2kg weight at the long end, I have 10kg/metre, and as such need a 10kg weight 1mtr from the fulcrum on the short end to balance. That's schoolboy stuff. I've also read that a Newton-metre is roughly equal to 98.0 times the weight. But here's where I start to fall down.

If I position a gas lift arm 150cm from the fulcrum on the long side, how do I calculate the lift required? (ie weights have a downward force, a gas-lift an upward force) My theory is that if I have (say) a 5kg counterweight acting in a downward direction on the short end, a 5kg arm pushing UP on the long side would give a theoretical 10kg force. Or would one cancel the other?

Using the short arm for calcs as it's easier to visualise, I think that it would need 20kg at 0.5m, or 40kg at 0.25mtr, or 80kg @ 0.125mtr. (0.125 being a metre divided by 8) So if this 80kg is transferred to the opposite side as an upward force, it should have enough force to lift.

Now - Newton-metre. This is obviously at one metre. If I halve the distance, does the number double or halve? (So it's either 12.25 or 784 - ie divided / multiplied by a factor of 8) Multiply that by the 80kg force needed, and it becomes 980N/m or 62,720N/m. (If the latter is correct, I don't think a gas lift currently available would have that much force!! This assumes the whole weight is pushed by the lift, but in reality it could be supplemented by the counterweights)

This is actually a crane arm. The pan head with motors weighs in at about 700g, a camera about 300g - 500g. The pole itself is 1kg over its 5mtr length. But I only have room in the case for 2x5kg plate weights, so am looking at alternative lift methods. Of course, a very simple solution - which has only just occurred to me as I typed this - is to extend the counterbalance arm length! Oh well, makes for an interesting physics lesson for future use!

Firstly, I think your units are wrong.
Torque (the rotational ability of a force/weight) is measured in newton-metres (as you have correctly written) or kg-metres, not newtons/metre or kg/metre (as you have incorrectly written). This is because the torque is directly proportional to both the amount of weight and the length of the lever arm, and so the two quantities are multiplied. A body of mass $m$ (in kg) weighs $9.8m$ newtons (because the acceleration due to gravity is $9.8\ \mathrm{m\,s^{-2}}$), and therefore $1\ \text{kg-metre} = 9.8\ \text{newton-metres}$.

When confused about whether two torques are cancelling or adding, determine whether each torque is trying to rotate the lever clockwise or counter-clockwise. Clockwise torque will cancel with counter-clockwise torque.

Hope this helps. I don't really have a clear idea of what you were asking.
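Plugging the original poster's figures into the corrected units (2 kg payload at the end of the 5 m arm, gas lift 1.5 m from the fulcrum on the same side) gives a concrete answer. This sketch is my own worked example, and it ignores the bar's own 1 kg mass and any counterweights, so it is the torque the gas lift alone would have to supply:

```python
# Worked numbers from the thread, using the corrected units (torque in
# newton-metres, weight = mass * g). The bar's own weight and the
# counterweights are deliberately left out of this simplified balance.

G = 9.8  # m/s^2; hence 1 kg-metre of torque = 9.8 newton-metres

load_kg = 2.0      # payload at the end of the long arm
load_arm_m = 5.0   # distance of the payload from the fulcrum
lift_arm_m = 1.5   # gas lift mounted 1.5 m from the fulcrum, same side

load_torque_nm = load_kg * G * load_arm_m   # 2 * 9.8 * 5 = 98.0 N·m
lift_force_n = load_torque_nm / lift_arm_m  # upward force the lift must supply

print("load torque: %.1f N·m" % load_torque_nm)  # 98.0
print("lift force needed: %.1f N (~%.2f kg-force)" % (lift_force_n, lift_force_n / G))
```

So the halving-the-distance question answers itself: torque is force times distance, so at half the lever arm you need double the force (about 65 N here, roughly 6.7 kg-force), well within what commercial gas struts provide.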
and I thought I'd dealt with things which were fussy over timing ... I don't even think it was intentional. In Mortal Kombat II's case, it was setting the HIRQ counter for something like dot #64, and the code it executed after that took just under the needed amount. I'm sure the devs weren't even bothering to count cycles; they probably tried a few numbers, saw that 64 worked okay and went with it. So when I was off by one cycle, it was just missing the IRQ, stopping it from setting up all future IRQs for the frame, completely destroying things.

It's funny, too. ZSNES tends to run about 40% faster than the real hardware, so it finished with plenty of time to spare before the IRQ event. You can get much better compatibility by running things way too fast than by running them just a little too slow. These bugs don't even show up until you really start to hammer down all the hardware delays (DRAM refresh, penalty cycles, memory access speeds), and it makes your emulator appear to be the less accurate one when the bugs show up.

"I think we should take it eventually."

Ah, that's good. I can see myself being active for another 5-10 years or so, but I certainly don't have the longevity of something like MAME.

"Once cothreads are added to the MAME core (Aaron is interested, but I don't know what all would be involved)"

Wow, is he really going to add that? I was under the impression the save state issue was a deal breaker. Please see if he's interested in libco. It runs just about everywhere (even on SPARC, MIPS and PPC), and the x86-32/x86-64 implementations are insanely optimized (~10 opcodes, in large part thanks to Aaron himself and Shay Green). Any new platform modules could flow back to me, which would be incredibly helpful. I really feel cothreads are a huge benefit to writing sane, easy-to-read code; I was disappointed that they never caught on with anyone. The entire API, stable for 3 years now:

cothread_t co_create(unsigned int, void (*)(void));

Simple enough for a goldfish to figure out.
Haze: the timing in MAME used to be unstable when everything was float-based. The all-integer system used now is very solid in my (admittedly limited) testing.

Whew. I was about to cry, thinking MAME was still FP-based. Are they see-saw counters, or are they all grouped?

See-saw: one signed counter represents a link between two components. When A moves, increment the counter; when B moves, decrement it. When A does something that B can see, make sure the counter < 0; when B does something A can see, make sure the counter >= 0; else sync up the other chip.

Grouped: one unsigned counter per component, and all counters get normalized periodically (subtract the lowest counter from all of them) to prevent overflow. When A does something B can see, ensure Acounter >= Bcounter, and vice versa.

I like the former model a lot better myself, much easier and faster. But every emu source I've seen uses the latter.

"Having an emotional attachment to code in MAME or MESS is dangerous"

I would say that's the case for any project, but especially for MAME / MESS.
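The see-saw model described above can be sketched in a few lines. The class and method names here are invented for illustration, and a real emulator would of course interleave much finer-grained steps and run real component state machines:

```python
# Toy sketch of the "see-saw" sync model: one signed counter links two
# components A and B. +1 when A steps, -1 when B steps, so a positive
# counter means A is ahead. Names/granularity are invented for illustration.

class SeeSawLink:
    def __init__(self):
        self.counter = 0
        self.a_steps = 0
        self.b_steps = 0

    def step_a(self):
        self.counter += 1
        self.a_steps += 1

    def step_b(self):
        self.counter -= 1
        self.b_steps += 1

    def a_does_visible_work(self):
        # "When A does something that B can see, make sure counter < 0"
        # -- i.e. run B until it has caught up past A.
        while self.counter >= 0:
            self.step_b()

    def b_does_visible_work(self):
        # "When B does something A can see, make sure counter >= 0"
        while self.counter < 0:
            self.step_a()

link = SeeSawLink()
for _ in range(5):
    link.step_a()            # A runs ahead by five steps
link.a_does_visible_work()   # A touches shared state -> sync B forward
print(link.counter, link.a_steps, link.b_steps)  # -1 5 6
```

The appeal over the grouped scheme is visible even in the sketch: there is no periodic normalization pass, just one signed comparison against zero at each cross-component access.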
User-friendly way to enter a list in Silverlight 4?

I have an app where users will enter lists of names. (There is some collection of valid names.) I'm not sure what the most user-friendly way to do this is. One idea: make a text box. If the text box loses focus, and the contents are a valid name, add it to a list box. If the user selects an entry in the list box and hits delete, remove it.

The code:

MainPage.xaml.cs:

private void WhoOwesInput_LostFocus(object sender, RoutedEventArgs e)
{
    if (people.Contains(WhoOwesInput.Text))
    {
        WhoOwesListBox.Items.Add(WhoOwesInput.Text);
        WhoOwesInput.Text = String.Empty;
    }
}

private void WhoOwesListBox_KeyDown(object sender, KeyEventArgs e)
{
    if (e.Key == Key.Delete || e.Key == Key.Back)
    {
        WhoOwesListBox.Items.Remove(WhoOwesListBox.SelectedItem);
    }
}

MainPage.xaml:

<sdk:AutoCompleteBox Height="23" HorizontalAlignment="Left" Margin="337,205,0,0"
    Name="WhoOwesInput" VerticalAlignment="Top" Width="74"
    ValueMemberBinding="{Binding}" LostFocus="WhoOwesInput_LostFocus" />
<ListBox Height="100" HorizontalAlignment="Left" Margin="337,232,0,0"
    Name="WhoOwesListBox" VerticalAlignment="Top" Width="74"
    KeyDown="WhoOwesListBox_KeyDown" />

I'm new to SL, so I'm afraid I may be missing out on some controls or preferred way of doing things. Any advice? Thanks.

Is this for batch entry, where an operator will sit and enter a list of names from some source like a paper list? If so, then I would imagine the data entry should be as slick as possible. Operators who do this type of thing day in and day out are usually lightning fast and accurate. So one option would be that once the operator hits the Enter key in the textbox, the content is moved to the list, and the textbox is cleared and ready for the next entry. That way the operator never leaves the keyboard: just type a name, hit Enter, type the next name, and so on.
If the operator mistyped a name, the operator can press Tab to navigate to the list, which will immediately select the last name entered. The operator can then either press the Del key to delete the entry, or Ins to edit it; editing will remove the name from the list, put it back in the textbox, and set focus to the textbox so that the operator can correct the name. Out of the box I do not think there are any special controls that will handle this keyboard navigation for you. You will need to handle the interaction yourself, though in SL this is not incredibly painful. First of all, do the same thing for the Enter key as aforementioned.

However, if you come up with much more information that you want your users to enter, you should consider a somewhat better design. Silverlight has a great data binding mechanism, which lets you bind dependency properties of controls (such as the ItemsSource of a ListBox) to CLR properties on a separate class which is the DataContext of your XAML file. What I described in this one-liner is one part of the famous Presentation Model pattern, or as Microsoft calls it, MVVM. So, as you are new to Silverlight, learn about these concepts, which will make your life easier. For the time being, you could do what Chris said above.
Linux: recover data from XFS

I have a broken XFS filesystem on one of my HDDs. I ran xfs_repair, which was not able to find a secondary superblock to repair the filesystem; therefore, I am not able to mount the HDD/partition. I tried to make a backup to an NTFS HDD via ddrescue to an image file. Unfortunately, I discovered that my target drive is 4 KiB smaller than the source drive, which is why I was not able to complete the backup. ddrescue showed that there were actually no bad blocks or sectors on my HDD, which lets me assume that my data is still there but I cannot access it. I am doing this from a live Ubuntu stick, because I was not able to see/mount the HDD via Windows and some tools for this use case (mounting XFS in Windows). Is there any way to access/recover my data from the incomplete image or directly from the HDD?

Edit: my output from xfs_repair /dev/sdc1:

Phase 1 - find and verify superblock...
couldn't verify primary superblock - not enough secondary superblocks with matching geometry !!!
attempting to find secondary superblock...
[then plenty of these lines]
found candidate secondary superblock...
unable to verify superblock, continuing...
[then it finishes with this]
Sorry, could not find valid secondary superblock
Exiting now.

Did your hard disk fail? Nope, nothing is wrong with the disk at all. The disk was connected to a Raspberry Pi with Raspbian installed. Every time I restarted the Raspberry Pi, the filesystem got corrupted, but this time I was not able to repair it with xfs_repair. ddrescue read from the whole disk and did not report any errors or bad blocks.

Please post the complete output of xfs_repair -n and whether it returned 1 for corruption detected.

@JohnMahowald thanks for looking into this. I ran it without the -n option and updated my post with its output. Does this help? It would take a day and a half to redo the scan with that option, and the output seems to be the same. It tried and failed to load two superblocks.
You are past the point where reading the manual helps. Get a metadata dump to someone who understands XFS, such as your operating system support channels, or other data recovery specialists. Clone the disk so that you have more than one copy of it. Open a support case with whoever maintains XFS for your operating system. Get xfs_metadump output to show the current state of the file system, including whether you have a secondary superblock. Restore any backups you have, or prepare users to rebuild what was on there.
BS, MS (Renmin); PhD (UConn)

Department of Decisions, Operations and Technology
Room 913, 9/F Cheng Yu Tung Building
12 Chak Cheung Street
Shatin, N.T., Hong Kong
+852 3943 9679

Prof. Hongfei Li is an Assistant Professor in the Department of Decisions, Operations and Technology (DOT) at The Chinese University of Hong Kong (CUHK) Business School. Before joining CUHK, he received his PhD from the School of Business at the University of Connecticut and his BS and MS from Renmin University of China in Beijing. His current research focuses on three main streams: (i) business analytics in emerging online platforms; (ii) applications of artificial intelligence and machine learning; and (iii) statistical methodology. Prof. Li is interested in teaching technical courses related to mathematics, statistics, econometrics, and computer languages in business applications, such as business analytics, web analytics, data science, machine learning, business statistics, econometrics, and database systems.

Management Information Systems
Business Analytics in Emerging Online Platforms
Applications of Artificial Intelligence and Machine Learning

- Publications & Working Papers
- Hongfei Li, R. Shankar, and J. Stallaert (2020), “Invested or Indebted: Ex-ante and Ex-post Reciprocity in Online Knowledge Sharing Communities,” ACM Transactions on Management Information Systems (TMIS), 11(1), 1-26.
- Hongfei Li, Jing Peng, Gang Wang, Xue Bai, “Online Diaries and Professional Service.”
- Hongfei Li, Jing Peng, Xinxin Li, Jan Stallaert, “When More is Less: The Effect of Add-on Insurance on the Consumption of Professional Services.”
- Xian Cao, Timothy Folta, Hongfei Li, Ruoqing Zhu (equal contribution), “Analyzing the Online Word of Mouth Dynamics: A Novel Approach.”

- Awards & Honours
- 2019-2020 Department Outstanding Scholar Awardees, School of Business, University of Connecticut, 2020
- ICIS 2019 Doctoral Consortium Member, 2019
- PhD Program-wide Outstanding PhD Scholar Award, School of Business, University of Connecticut, 2018 & 2019

- Academic/Professional Services
- Conference on Information Systems and Technology (CIST), 2019-2020
- International Conference on Information Systems (ICIS), 2018-2020
- Workshop on Information Technologies and Systems (WITS), 2018
Add CIFAR10 dataset

Based off MNIST, uses data that can currently be obtained through Pylearn2's download_cifar10.sh script.

Tests are just failing right now because the data is missing on Travis. I think we can add that as with MNIST (we can store it in the cache in Travis's new container environment so that it doesn't need to be redownloaded each time). I'll make a ticket though, because I feel that there is a lot of code duplication between this and MNIST, and there's a variety of things that could be factored out for use by future datasets as well.

@bartvm Even though I think I've properly added the CIFAR10 files to cache, the tests fail. I'm not very Travis-savvy; is there something I left out?

On my phone, but the log says: `tar: Old option 'f' requires an argument. Try 'tar --help' or 'tar --usage' for more information.` Maybe there's something wrong with the tar command so that it doesn't actually unpack?

I think I found what the issue was, but I can't fix it since I don't think I can clear the cache by myself.

I think the bash command still doesn't fully work. The problem is that curl -O doesn't write to stdout, but to a file instead. Also, tar f requires you to give a filename to read from, but in this case you want to read from stdin, for which you need to pass -. I think the final command should look like this: curl http://www.cs.utoronto.ca/~kriz/cifar-10-python.tar.gz | tar xzf -

Yes, I think you're right. I gleaned the command from the Pylearn2 script, but I missed the dash at the end of the tar command.
There's also apparently a way to use the same trick with curl -O which is present in the original command: curl -O http://www.cs.utoronto.ca/~kriz/cifar-10-python.tar.gz - | tar xvzf - I'll try to push a commit to force resetting the cache properly, and if it works I'll squash all commits related to fixing my mistake. @bartvm I wasn't able to repair my cache mistake. I put the right curl command and cleaned up the commits, but I think removing cifar10 from the cache will require manual intervention. Sorry about that! No problem! I cleared the cache manually and restarted the build Python 2 passes now, but you should use from six.moves import cPickle for Python 3 compatibility. @bartvm Yes! For xrange as well. I'm on it. Try opening it with mode 'rb', for Python 3 refuses to cast automatically between string- and byte-encoded data. Sorry, my bad, I only just realised you were trying to unpickle the file that you downloaded, which I guess was pickled with Python 2... They use different encoding schemes, but this could work: cPickle.load(f, encoding='latin1') That will fail under Python 2, since cPickle doesn't accept the encoding argument. One option would be this: try: data = cPickle.load(f, encoding='latin1') except TypeError: data = cPickle.load(f) but it isn't very readable in my opinion. I agree, but you can use if six.PY3: which is okay I think. @bartvm Thanks for the tip. The tests now pass.
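The compatibility dance discussed above can be collected into a single helper. This is a sketch rather than the PR's actual code: it uses the stdlib pickle module and a sys.version_info check instead of the six.PY3 guard suggested in the thread (the two are equivalent), and an in-memory pickle stands in for a real CIFAR-10 batch file:

```python
import io
import pickle  # the thread uses six.moves.cPickle; plain pickle works for a sketch
import sys

def load_batch(f):
    """Unpickle one CIFAR-10 batch file (opened in 'rb' mode) on either
    Python version. The batches were pickled by Python 2, so Python 3
    needs encoding='latin1', which maps bytes straight through."""
    if sys.version_info[0] >= 3:
        return pickle.load(f, encoding='latin1')
    return pickle.load(f)  # Python 2's load() has no encoding argument

# Stand-in for a real batch file: a protocol-2 pickle held in memory.
blob = io.BytesIO(pickle.dumps({'data': [1, 2, 3], 'labels': [0]}, protocol=2))
print(load_batch(blob))  # {'data': [1, 2, 3], 'labels': [0]}
```

Keeping the version check inside one helper avoids scattering try/except TypeError blocks (which the thread rightly calls unreadable) through the dataset code.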
In today’s job market, the majority of in-demand, high-salary, secure careers involve technology. Just understanding and knowing the role technology plays in everyday functions is enough to set you apart. However, if you’re looking to make a significant career move, there’s one highly recommended skill you can acquire to start your tech journey: coding. There are hundreds of coding languages out there, so how do you know which one you should learn first? Instead of endlessly scrolling through countless websites to find the answer, keep reading for our list of coding languages that are best suited to help you enter the technology career of your choice!

Why Learn to Code?

From buying a movie ticket to playing music, almost everything we do is supported by lines of code that come together to provide a function. We use code to create websites, games, apps, software, and so much more. So how exactly does it work? Coding is how we communicate with computers to produce a desired outcome. Simply put, it’s a way to instruct a computer to perform a specific function. Similar to human languages, there are different coding languages that communicate different functions to a computer. Having a general idea of how code works and using that knowledge to improve and create new features is a powerful skill in any industry.

The benefits of adding coding knowledge to your resume are numerous. Not only will it set you apart from others, but you can expect a significant increase in your earning potential. Coding is also a skill set that is highly in demand and highly likely to stay in demand. Learning to code also opens the door to careers in almost any industry, as well as freelance and remote working opportunities. The scope and potential of this skill is vast. This may seem intimidating at first, but you can always start by learning one language and expanding your knowledge over time. So, without further ado, let’s dive into some of the best coding languages to start with!
Top 4 Coding Languages with Potential Career Paths

HTML, or HyperText Markup Language, is the computer language behind web pages and applications. The term hypertext refers to text that references other text, while markup language refers to the different symbols inserted with text that change the style and structure of a text document. HTML tags specify parts of text as headings, paragraphs, links, and so on. In essence, HTML allows you to influence what a user sees on their screen.

It might surprise you to know that HTML is not considered a programming language. As a markup language, HTML does not actually modify or manipulate data. However, skills in HTML still classify you as a coder in a markup language. Being skilled in HTML is also a common prerequisite for IT and front-end development careers. Should you combine your HTML skills with other programming languages, you’ll find yourself being able to create the bulk of webpages and applications. For this reason, we recommend learning HTML as it is the foundation needed for careers in front-end development. Most importantly, it is widely used and beginner friendly!

Python is a general-purpose programming language that is regarded as another easy programming language to learn and use. Python’s powerful abilities have led to its popularity as the go-to language for back-end development. In fact, Python is the code behind popular websites such as YouTube, Google, Spotify, Instagram, and even Reddit, to name a few! Python is used in data analytics, science, website development, imaging, animation, and even video games. Python’s extensive libraries, community-oriented platform, user-friendly features, and overall flexibility have made it the backbone of many popular services.

Learning Python would open up opportunities in a number of different tech roles. While some functions of Python are applied to front-end development, Python is more powerful in back-end development functions or even full-stack development roles.
Some applications of Python include optimizing algorithms, enforcing security and protection measures, ensuring high performance across features, data analytics, and designing databases. Python is also widely used across industries and allows for variety in day-to-day tasks. Learning Python is beginner friendly due to its readability and easy-to-use structural elements. Python is especially great for English speakers and makes it easy to memorize basic syntax structures. If you’re seeking to advance your career using Python, we recommend you first gain a basic understanding of its features and then learn the advanced features that are applicable to the demands of your role.

Applications are everywhere. Any app that you can name was written by a team of programmers. If you’ve ever had the desire to write your own app, create an app for your business, or even set up apps for others, then learning coding languages for operating systems such as iOS and Android would be your best bet! Since an app is developed for a specific operating system, there are different programming languages that best suit that operating system. To write an app for Apple’s iOS system, you can learn Swift, which is a programming language developed by Apple. Similarly, Kotlin is a programming language suited for applications written for the Android operating system. Learning either language, or perhaps even both, will allow you to begin your career as an app developer.

App developers work independently or with teams to produce apps for different operating systems. This is a skill that is sought after by many, as there is high demand to develop mobile apps quickly and efficiently. Additionally, both Swift and Kotlin make writing apps both easier and faster. Swift and Kotlin are relatively easy to pick up, taking at least 2–3 months to learn. If you decide to learn these languages, maybe one day we’ll all be able to use an app that you wrote!

Our Pick: Python

Ready to Get Started?
Learning to code does not have to feel daunting! The most important part of learning to code is your mindset. Coding can be fun, and there are different resources to assist you along the way. One of our top recommendations is coding bootcamps. Bootcamps allow you to learn coding in a guided and streamlined process. Most importantly, bootcamps are usually accompanied by a certificate which you can add to your resume. If you want to learn more about coding bootcamps, you can click here for our list of the Best Coding Bootcamps in 2022.
Compiler Errors: Three Ways to Avoid Them

Technique 1: Compare results of two different compilers

Avoiding compiler errors can be done several ways. If you have access to the source code of your CFD solver, then probably the best way to avoid being burned by compiler errors is to compile the code with two completely different compilers on the same system. Then run the two different executables on the same case. If the results are identical, then you're probably safe from this particular problem. [Note, however, that some compilers have technology that is licensed from other companies, so, while it is rare, it is remotely possible for two different compilers to produce the same errors.]

Technique 2: Compare results of optimized versus unoptimized executables

If you have access to the source code for your solver, but only have one compiler, then the next best technique is to compile the code once with regular optimization and once with no optimization. Then run the same case with each version and compare results. As the "no optimization" code will probably run pretty slowly, you'll want to pick a small case, but make sure it uses a representative set of code options. If the results are the same, then the optimized version is likely safe to use. If there are compiler errors at a very basic level, however, this strategy may not catch them.

Technique 3: Compare the results of runs on different systems

If you do not have access to the source code, then the situation is a bit more difficult. In some respects you are at the mercy of your vendor's quality control department. You may be able to gain a measure of confidence, however, if you can run the same case on two different computers which have different architectures. In the "old days" this was easy to do, as most everyone in CFD had access to systems from different vendors with completely different CPUs and running different operating systems.
At the same time, CFD software vendors used to provide binaries for multiple computer architectures. Today, the old RISC-based architectures are pretty much gone, and the computational ecosystem is mostly reduced to Apple, MS, or Linux operating systems running on Intel-compatible hardware. Still, you may be able to run a 64-bit version of your solver on an appropriate system and compare it to an equivalent result from a 32-bit system.

The bottom line

The main point of all this is to run what should be the same algorithms on the same initial data and make sure that the results are the same, regardless of what the underlying hardware or operating system is...or, in this case, regardless of which compiler and compiler options were used to create the executables.

If you are finished learning about avoiding compiler errors, return to the main Verification and Validation page. Otherwise, you can browse through some of the other topics from the Innovative CFD home page.
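The "compare two builds on the same case" idea is easy to automate. Here is a minimal sketch of such a comparison; the whitespace-delimited file layout, the file names, and the tolerance are illustrative assumptions, not part of any particular solver:

```python
import math

def results_match(file_a, file_b, rel_tol=1e-12):
    """Compare two whitespace-delimited numeric result files.

    Returns True only if both files contain the same number of values
    and every pair agrees to within rel_tol -- i.e. the two builds
    produced identical results up to rounding noise.
    """
    with open(file_a) as fa, open(file_b) as fb:
        vals_a = [float(x) for x in fa.read().split()]
        vals_b = [float(x) for x in fb.read().split()]
    if len(vals_a) != len(vals_b):
        return False
    return all(math.isclose(a, b, rel_tol=rel_tol)
               for a, b in zip(vals_a, vals_b))
```

In practice you would run the optimized and unoptimized (or differently compiled) executables on the same small case, then call `results_match("opt.dat", "noopt.dat")` on their output files; an exact-bitwise `diff` is often too strict, since even legitimate builds can differ in the last digit of printed output.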
new Test.Unit.Runner({
  testInterpolationMethods: function() { with(this) {
    // numbers
    assertEqual(1, S2.CSS.interpolateNumber(1, 3, 0));
    assertEqual(3, S2.CSS.interpolateNumber(1, 3, 1));
    assertEqual(2, S2.CSS.interpolateNumber(1, 3, 0.5));
    assertEqual(4, S2.CSS.interpolateNumber(1, 3, 1.5));
    assertEqual(-1, S2.CSS.interpolateNumber(1, 3, -1));
    assertEqual(2, S2.CSS.interpolateNumber(0, 4, .5));
    assertEqual(2, S2.CSS.interpolateNumber(undefined, 4, .5));
    assertEqual(2, S2.CSS.interpolateNumber(null, 4, .5));

    assertEqual(1, S2.CSS.interpolateInteger(1, 3, 0));
    assertEqual(2, S2.CSS.interpolateInteger(1, 3, 0.5));
    assertEqual(3, S2.CSS.interpolateInteger(1, 3, 1));
    assertEqual(2, S2.CSS.interpolateInteger(1, 3, 0.25));
    assertEqual(3, S2.CSS.interpolateInteger(1, 3, 0.75));

    // lengths em|ex|px|in|cm|mm|pt|pc
    assertEqual('1px', S2.CSS.interpolateLength('1px', '3px', 0));
    assertEqual('3ex', S2.CSS.interpolateLength('1ex', '3ex', 1));
    assertEqual('2px', S2.CSS.interpolateLength('1px', '3px', 0.5));
    assertEqual('4in', S2.CSS.interpolateLength('1in', '3in', 1.5));
    assertEqual('-1cm', S2.CSS.interpolateLength('1cm', '3cm', -1));
    assertEqual('-1mm', S2.CSS.interpolateLength('1mm', '3mm', -1));
    assertEqual('-1pt', S2.CSS.interpolateLength('1pt', '3pt', -1));
    assertEqual('-1pc', S2.CSS.interpolateLength('1pc', '3pc', -1));
    assertEqual('-2.5cm', S2.CSS.interpolateLength('5cm', '-5cm', .75));
    assertEqual('2px', S2.CSS.interpolateLength('', '4px', .5));
    assertEqual('2px', S2.CSS.interpolateLength(null, '4px', .5));
    assertEqual('2px', S2.CSS.interpolateLength(undefined, '4px', .5));
    assertEqual('0px', S2.CSS.interpolateLength('0pt', '4px', 0));
    assertEqual('2px', S2.CSS.interpolateLength('0pt', '4px', 0.5));
    assertEqual('4px', S2.CSS.interpolateLength('0pt', '4px', 1));
    // leave alone whitespace, we're only interested in replacing the value
    assertEqual(' -1 pc ', S2.CSS.interpolateLength(' 1 pc ', ' \n3 \t pc ', -1));

    // percentages
    assertEqual('50%', S2.CSS.interpolateLength('0%', '100%', 0.5));

    // colors
    assertEqual('#ffffff', S2.CSS.interpolateColor('#ffffff', '#000000', 0));
    assertEqual('#000000', S2.CSS.interpolateColor('#ffffff', '#000000', 1));
    // check that values are capped
    assertEqual('#ffffff', S2.CSS.interpolateColor('#ffffff', '#000000', -1));
    assertEqual('#000000', S2.CSS.interpolateColor('#ffffff', '#000000', 2));
    assertEqual('#111111', S2.CSS.interpolateColor('#000000', '#222222', .5));
    assertEqual('#111111', S2.CSS.interpolateColor('#000', '#222', .5));
    assertEqual('#444444', S2.CSS.interpolateColor('#000', '#222', 2));
    assertEqual('#111111', S2.CSS.interpolateColor('rgb(0,0,0)', '#222', .5));
    assertEqual('#111111', S2.CSS.interpolateColor('#000', 'rgb(34,34,34)', .5));
  }},

  testPropertyInterpolation: function() { with(this) {
    assertEqual('#111111', S2.CSS.interpolate('background-color','#000000','#222222',.5));
    assertEqual('#111111', S2.CSS.interpolate('background-color','#000','#222222',.5));
    assertEqual('#111111', S2.CSS.interpolate('backgroundColor','#000000','#222222',.5));
    assertEqual('0px', S2.CSS.interpolate('margin-top','10px','0px',.99999));
    assertEqual('10px', S2.CSS.interpolate('margin-top','10px','0px',.00001));
    assertEqual('0.5', S2.CSS.interpolate('opacity',0,1,.5));
    assertEqual('0.000', S2.CSS.interpolate('opacity',0,1,.00000001));
    assertEqual('2', S2.CSS.interpolate('z-index','1','3',.5));
  }},

  testStyleParsing: function() { with(this) {
    assertEqual('12px', S2.CSS.parseStyle('font-size:12px').fontSize);
    assertEqual('12pt', S2.CSS.parseStyle('font-size:12pt').fontSize);
    assertEqual('12em', S2.CSS.parseStyle('font-size:12em').fontSize);
    assertEqual('12%', S2.CSS.parseStyle('font-size:12%').fontSize);
    assertEqual('12ex', S2.CSS.parseStyle('font-size:12ex').fontSize);
    assertEqual('12in', S2.CSS.parseStyle('font-size:12in').fontSize);
    assertEqual('12cm', S2.CSS.parseStyle('font-size:12cm').fontSize);
    assertEqual('12mm', S2.CSS.parseStyle('font-size:12mm').fontSize);
    assertEqual('12pc', S2.CSS.parseStyle('font-size:12pc').fontSize);

    assertEqual('12px', S2.CSS.parseStyle(' font-size: 12px').fontSize);
    assertEqual('12px', S2.CSS.parseStyle('\r\nfont-size: 12px\r\n').fontSize);
    assertEqual('12px', S2.CSS.parseStyle('font-size: 12px \t\t').fontSize);
    assertEqual('12px', S2.CSS.parseStyle('\t\tfont-size:\t12px').fontSize);
    assertEqual('12px', S2.CSS.parseStyle(' font-size: 11px;font-size: 12px').fontSize);
    assertEqual('12px', S2.CSS.parseStyle('line-height: 11px;\r\nfont-size: 12px\r\n').fontSize);
    assertEqual('12px', S2.CSS.parseStyle('font-size: 12px; \t\tline-height: 11px;').fontSize);
    assertEqual('12px', S2.CSS.parseStyle('line-height: 11px;\t\tfont-size:\t12px;color: white;').fontSize);

    assertIdentical(undefined, S2.CSS.parseStyle('').fontSize);
    assertIdentical(undefined, S2.CSS.parseStyle('font-size:12pxfont-size:12px').fontSize);
  }},

  testGetStyles: function() { with(this) {
    assertEqual('12px', $('allStyles_1').getStyles().fontSize);
    assertEqual(1, parseFloat($('allStyles_1').getStyles().opacity));
    assertEqual(0.5, parseFloat($('allStyles_2').getStyles().opacity));
    assertEqual(0.5, parseFloat($('allStyles_3').getStyles().opacity));
  }},

  testColorParsing: function() { with(this) {
    assertEnumEqual([171, 206, 223], S2.CSS.normalizeColor('#abcedf'));
    assertEnumEqual([170, 187, 204], S2.CSS.normalizeColor('#abc'));
    assertEnumEqual([0, 0, 0], S2.CSS.normalizeColor('#000'));
    assertEnumEqual([0, 255, 0], S2.CSS.normalizeColor('rgb(0,255,0)'));
    assert(isNaN(S2.CSS.normalizeColor('#abcedfgh')[0]));
    assert(isNaN(S2.CSS.normalizeColor('#abcedfgh')[1]));
    assert(isNaN(S2.CSS.normalizeColor('#abcedfgh')[2]));

    assertEqual("#ffffff", S2.CSS.colorFromString("#fff"));
    assertEqual("#ffffff", S2.CSS.colorFromString("#ffffff"));
    assertEqual("#ffffff", S2.CSS.colorFromString("rgb(255,255,255)"));
    assertEqual("transparent", S2.CSS.colorFromString("transparent"));
    // rgba support not implemented. Should something with alpha 0 return "transparent"?
    //assertEqual("#ffffff", S2.CSS.colorFromString("rgba(255,255,255,0)"));
    //assertEqual("#000000", S2.CSS.colorFromString("rgba(0,0,0,0)"));
  }}
});
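For readers unfamiliar with the library, the arithmetic these assertions exercise is ordinary linear interpolation, plus per-channel capping for colors. A rough Python sketch of the two behaviors tested above (the function names here are mine, not part of the S2.CSS API):

```python
def lerp(a, b, position):
    """Linear interpolation between a and b: position 0 gives a, 1 gives b.
    Positions outside [0, 1] extrapolate, matching interpolateNumber above."""
    return a + (b - a) * position

def lerp_channel(a, b, position):
    """Color-channel variant: the result is capped to the 0-255 range,
    mirroring how the color assertions above clamp each channel."""
    return min(255, max(0, round(lerp(a, b, position))))
```

For example, `lerp(1, 3, 0.5)` gives `2`, matching `interpolateNumber(1, 3, 0.5)`, and `lerp_channel(255, 0, -1)` stays pinned at `255`, matching `interpolateColor('#ffffff', '#000000', -1)` returning `'#ffffff'`.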
mceoni -- how many wallet entries were you able to add for this user? You can get the info on the limits by executing "tdwallet help limits", as mentioned a few times in the comments above. The only way to get that exception should be if the user's wallet file (or disk) was full, just like the message says. The wallet file should be capable of containing a very high number of entries, so filling the file is not trivial. Perhaps there's some kind of other disk space quota you've run into, or the free space you checked is not on the same disk as where the wallet file is located. If you can't figure it out, please open an incident with GSC so we can take a closer look.

I have the same scenario that "mceoni" posted before. I'm getting the message below from TPT:

Lizarb -- I found your incident (it didn't make its way to me yet, so I asked to have it escalated). Could you please check if your temporary directory or partition is full or nearly full? That may also trigger the exception. Please use the incident to provide the answer -- I will communicate with you through our support team.

I just realized that in an earlier comment I provided a bad example for hiding a username in the wallet: This is wrong, because the item name would have to include the contents of $tdwallet(usr1), thus exposing the username. If the intention is to hide the username, the correct answer would be something like the following: which does not require the use of nested keywords.

Hello - Thanks for the wonderful article. I am getting the error "The logmech string exceeds the length limit. The maximum length is 8" when I try to run a fast export job. Below are the login info and the TD Wallet entry.

.logon tdpid/bakthro,$tdwallet # fast Export login
com.teradata.TD2 -> $td_wallet(testpw). testpw -> P@$sw0rd

I don't get this error when I do the same in a bteq script. Also, when I use the below:

.logon tdpid/$tdwallet(user),$tdwallet(password) # Fastexport logon

I get the below error.
This too works well in bteq.

**** 15:17:26 UTY1006 CLI error: 303, CLI2: BADLOGON(303): Invalid logon

I figured out the reason for the second issue -- CLI2: BADLOGON(303): Invalid logon string. I just missed a semicolon. Could I get the reason and workaround for the first error, "The logmech string exceeds the length limit"? It works in bteq and not in fast export.

Roopalini -- there is an invalid underscore in one of your keywords -- "$td_wallet(testpw)". But that would not cause the logmech string error. The logmech string error doesn't have anything to do with Teradata Wallet -- it is reporting that your .logmech command is invalid. If you can't resolve this, please open an incident.

I'm new to tdwallets. Was wondering if tdwallets work on Linux ODBC? R14.00. Using $tdwallet(mydev1) at the password prompt. Same tdwallet works with bteq. And same password works with ODBC when not using tdwallets.

Enter Data Source Name: devpridsn
Enter UserID: myid1
Connecting with SQLConnect(DSN=devpridsn,UID=rssabdev1,PWD=*)...
adhoc: (SQL Diagnostics) STATE=28000, CODE=4294959279, MSG=[Teradata][ODBC Teradata Driver][Teradata Database] The UserId, Password or Account is invalid.
ODBC connection closed.

Regarding portability: how feasible would it be to populate the wallet on one Linux server and then copy the config and wallet files to other servers? What are all of the files that must be copied?
Senior Frontend Developer

Honeypot is a tech-focused job platform. Our vision is to build the world’s largest worklife community for technical talent. We are strongly committed to the developer community, creating OSS documentaries, such as Elixir and Ember, and running large-scale conferences such as GraphQL Conf. In March 2019 we were acquired by New Work, who are investing heavily in our growth and expansion. We believe everyone should choose a job they love.

As a Senior Frontend Developer, you will set and drive the product design vision and culture by taking on the following responsibilities:

- Design, develop and maintain our platform
- Collaborate closely with our Product/Tech team and become a valued member of an autonomous, cross-functional team
- Dedicate yourself to building a well-tested and scalable application
- Conduct code review and explain technical concepts to non-technical team members
- Continuously explore ways to improve our architecture
- Contribute your own ideas and work closely with different business units
- Mentor junior developers and help them grow in their profession

We look for potential. To thrive in the role of Senior Frontend Developer, you bring the following skills and experiences:

- Significant hands-on experience with Single Page Application frameworks
- Hands-on experience with Ember or the willingness to learn Ember
- Widespread knowledge of different frontend architectures and where to apply them
- Experience with relevant software development best practices such as TDD and CI
- Ability and willingness to mentor, support and guide other developers
- Experience with (and love for) agile processes
- The mindset and ability to openly and constructively communicate about your work, ideas, and problems in order to find solutions together

Furthermore, you are passionate and care about:

- Great quality of our application, testing strategies and a smooth user experience
- Evaluating technologies on the value they add to users as well as the engineering team
- Continuous improvements while striving for the ultimate goal of increasing the value we can deliver to our customers
- Simplicity and collective ownership
- Sharing your knowledge with your team
- Belief that awesome things are MADE to happen :)

Become part of our growing community. Join our highly diverse team of 40+ nationalities and share:

- An environment that embraces freedom and autonomy and values team spirit and open communication
- An atmosphere where intrinsic motivation comes first
- The chance to shape and drive the team, with potential to grow into a leadership role
- Bi-annual overnight off-sites, regular team events and weekly Friday night drinks

Honeypot encourages applicants from any national origin, sex, sexual orientation, religious background, gender identity and those individuals with disability. Please send us your resume along with a short motivation statement of why you are the right fit for this role and what motivates you to join our team. We are looking forward to getting to know you!
Damien Stehlé writes:
> some type of Ring-LWE assumption

So, to be clear, the dividing line is whether a hardness assumption fits within the parameter space of the LPR definition of "Ring-LWE"? E.g.:

* A proof for anything like Round5 or Saber would not qualify as "enjoy a security proof that is analogous", because the starting assumption is hardness of Ring-LWR/Module-LWR instead of Ring-LWE?

* If a scheme releases many overstretched Ring-LWE samples, and has a proof based on the alleged hardness of this case of Ring-LWE, then this would qualify as "enjoy a security proof that is analogous"?

This sounds very strange. Cryptanalysis indicates that the second example is _much_ more dangerous than the first. Are you sure this is what you meant by "enjoy a security proof that is analogous to that of the LPR scheme"?

> due to the fact that the encryption procedure uses "LWE left hand
> sides" that are multiples of 3 (because of the rounding in the key
> generation procedure).

If you're really insisting on specifically Ring-LWE and not anything rounding-based, then you're excluding all of the proposed LPR variants that eliminate random errors in favor of deterministic rounding, such as Round5 and Saber. This isn't specific to NTRU LPRime, and the main technical point isn't new: the literature already makes clear that "Ring-LWE hard => Ring-LWR hard" relies on irrelevant parameter choices. Tweaking the key distribution in these pure rounding schemes doesn't enable a proof from Ring-LWE, so I don't understand why you attribute your ad-hoc exclusion of NTRU LPRime to details of the key distribution.

If I wanted to write down a list of proof requirements that allows Round5 and Saber (and round-2 Kyber etc.) while disallowing NTRU LPRime, I think I'd be able to, but stating the list clearly would also show how unprincipled and artificial it is.

> Indeed, the main point of the LPR encryption scheme was to have a
> proof under the RingLWE hardness assumption.
What the LPR paper actually says is that it is (1) "introducing an algebraic variant of LWE" and (2) "proving that it too enjoys very strong hardness guarantees. Specifically, we show that the ring-LWE distribution is pseudorandom, assuming that worst-case problems on ideal lattices are hard for polynomial-time quantum algorithms." A closer look shows that these "guarantees" are for irrelevant cryptosystem parameters. (I think https://eprint.iacr.org/2016/360 deserves the primary credit for this observation.)

But then why should Ring-LWE be treated as better than

* Ring-LWR, which has a proof "Ring-LWE hard => Ring-LWR hard" for irrelevant parameters;

* the "NTRU problem", which has a hardness proof for irrelevant parameters;

* a rounded-multiplier variant of Ring-LWE, which has a hardness proof for irrelevant parameters; and

* a rounded-multiplier variant of Ring-LWR, which has a hardness proof for irrelevant parameters?

I already asked for clarification of whether your "analogous" claim would allow the first and second assumptions. You didn't answer. How about the third and fourth assumptions?

(Disclaimer: I'm relying on claims that I've heard about these proofs. I'm not saying that I've verified the proofs and theorem statements. I try to keep my proof-verification efforts focused on proofs applicable to relevant cryptosystem parameters.)

The underlying "worst-case problems on ideal lattices" have not been holding up well to cryptanalysis. My understanding is that, for this reason, you no longer endorse relying on the LPR "guarantee". But, without this "guarantee", essentially nothing is left of LPR's cryptosystem "proof"---it's simply the generic observation that (1) one-wayness for the system's keys and ciphertexts follows from (2) presumed indistinguishability of the keys from random and (3) presumed one-wayness for ciphertexts for random keys, modulo generic replacement of OW with IND.
As I said before, this has nothing to do with the specific structure of Ring-LWE, or with the choice of distribution that parameterizes the word "random". Do you not agree that all submissions "enjoy" such a proof?

> I am not going to discuss about topics that were not covered
> by the email of mine that started this thread.

You filed an "OFFICIAL COMMENT" dated 3 May 2019 20:27:01 +0200 claiming, among other things, that "NTRU LPRime does not enjoy a security proof that is analogous to that of the LPR scheme". The intended meaning of this claim is not clear. I have asked various clarification questions, and I would like to hear your answers. If these questions make you realize that the statement you made doesn't actually have a meaning that you endorse, then you can withdraw the statement. Of course the withdrawal itself needs to be specific and clear!

---Dan (speaking for myself)
Have you ever seen, while dealing in a support channel with a novice that just got in touch with the power of UNIX, a conversation that goes like this?

"How can I process the output of a command, so that any number of spaces gets turned into a newline?"
"What are you trying to do?"
"I want to list the contents of a directory, but I want one per line."

I have seen this numerous times, even as one of the actors. At times I was the novice, and many times in #debian-br I was the seasoned person trying to get the novice to focus on the problem they were trying to solve, not on the solution they thought was right.

While reading Máirín Duffy’s awesome paper about contributing to Free Software as a designer I couldn’t help but get that image brought to my memory again, and again. Especially when I read this part:

This means the language and even the approach FLOSS projects take to solving problems tend to be focused on implementation and technology rather than starting with a real-life user problem to solve and determining appropriate implementation afterwards.

That does sound like us, and it does sound like many of the solutions we come up with. While I was reading her paper, there was a reference I got very interested in checking. It’s a PDF with no links in it, so I only had the number of the reference. What I would have to do is scroll to the end of the paper, find the reference, then somehow come back to the place I was looking at. My most immediate thought was ‘you know, maybe evince should have tabs’. Why? Because I could open a new tab, go to the place the reference was at, and to ‘go back’ I just needed to close the new tab. Other options require much more effort – remembering the page I was at, or maybe the scroll offset more or less, and scanning for the part of the text I was at. But those are not the only options!
I could have the application set a marker on where I am, and have an easy command to go back to that marker, for instance, or evince could provide a way of ‘looking ahead’ without throwing away the current state at all. I’m pretty sure if I look around enough I will find tools that solve this problem in a fairly good way.

Now, I think that is exactly how we ended up with tabs in so many places they do not make sense in, and with so many ad-hoc solutions that solve our problems in half-assed ways. Even in browsers, we tend to use tabs as ad-hoc solutions to real problems we have no real solution for yet, such as “I want to check this other thing out real quick, but I do not want to lose any state of this page”, or “I want to check this out, but not right now, so let me open it, and then I’ll come back to it”, or maybe even “I want to look at this now, but since it is going to take a while to load, I might as well let it load in the background, and when I finish reading this I can go look at it”. These are the real problems we have, and I think we need better designs that solve them for real, instead of just patching them with the ad-hoc solution that tabs are.

The other extreme of the spectrum is, of course, not doing anything for lack of the perfect solution. Using ‘this is not a real solution’ as an excuse to not implement something that could serve as a temporary solution to a problem may cause more frustration than having to deal with an ad-hoc solution that is tested, and has been applied to other applications for some time. After all, in many cases the ad-hoc solution can be later replaced with a proper one.

I guess this is another instance of the very difficult problem of balancing different realities: proper design is not always available to start something up, especially if the application is backed by individuals and not by a company or a bigger project that could bring in designers to work on it from the start.
In this case having something up and running is usually a very important first step in a free software project – usually required to get enough interest to make it worth designing for.
Open Economy Macroeconomics

These are my notes on open economy macroeconomics.

The nominal exchange rate is the rate at which one country’s currency can be exchanged for the currency of another country. In a flexible exchange rate system, the nominal exchange rate is determined by supply and demand in the foreign exchange market. Fixed or managed exchange rates are controlled by the government. The real exchange rate is the ratio of the prices of a basket of goods and services in two countries and thus influences net exports from one country to the other. A decline in net exports reduces labor demand, lowers GDP, and causes unemployment.

The nominal exchange rate is the price of one country’s currency in units of another country’s currency. If the government does not intervene in the foreign exchange market, then the country has a flexible exchange rate, which is also referred to as a floating exchange rate. If the government fixes a value for the exchange rate and intervenes to maintain that value, then the country has a fixed exchange rate. If the government intervenes actively to influence the exchange rate, then the country has a managed exchange rate.

The real exchange rate is defined as the ratio of the dollar price of a basket of goods and services in the US, divided by the dollar price of the same basket of goods and services in a foreign country. The nominal exchange rate is the number of units of foreign currency per unit of domestic currency. The real exchange rate, in contrast, gives the ratio of the dollar price of a basket of goods and services purchased in the US to the dollar price of the same basket purchased in a foreign country.

The nominal exchange rate is determined by the supply and demand for a currency in the foreign exchange market. When a Chinese producer sells goods to a US firm and receives dollars, the Chinese firm converts the dollars to the Chinese currency in the foreign exchange market.
This is equivalent to demanding yuan and supplying dollars in the foreign exchange market. In contrast, a Chinese firm that imports from the US would be doing the opposite in the foreign exchange market: supplying yuan and demanding dollars with which it will pay its US trading partners.

When a country has a flexible exchange rate, changes in the supply and demand for a currency lead to fluctuations in the nominal exchange rate. Many countries, however, manage or fix exchange rates and therefore peg their currencies to another currency, such as the dollar. Under managed or fixed exchange rates, fluctuations in the supply and demand for the currency do not necessarily lead to corresponding fluctuations in the exchange rate. Though managed or fixed exchange rate systems might appear more stable at first, when the exchange rates they generate are out of line with market forces, these systems can lead to sudden changes in the exchange rate. In the process, they create huge profit opportunities, like the one exploited by the financier George Soros in 1992, when he bet that the British pound would be allowed to depreciate.

The real exchange rate is a key price for the economy in part because it determines net exports. A real exchange rate greater than 1 implies that US goods and services are more expensive than foreign goods and services. Thus, a real exchange rate above 1 discourages exports and encourages imports, reducing net exports. A fall in net exports lowers GDP and shifts the labor demand curve to the left.

Domestic interest rates influence the real exchange rate. A fall in domestic interest rates reduces the appeal of domestic assets to investors, lowering both the nominal and the real exchange rate. The resulting rise in net exports shifts the labor demand curve to the right and increases GDP.
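The verbal definition of the real exchange rate given above can be written as a formula. The notation here is my own shorthand (not from the notes): $e$ is the nominal exchange rate in units of foreign currency per dollar, $P$ is the dollar price of the US basket, and $P^{*}$ is the foreign-currency price of the same basket abroad, so $P^{*}/e$ is its dollar price.

```latex
% Real exchange rate: dollar price of the US basket divided by the
% dollar price of the same basket purchased abroad.
\[
  \text{real exchange rate}
  \;=\;
  \frac{P}{P^{*}/e}
  \;=\;
  \frac{e\,P}{P^{*}}
\]
```

As the notes say, a value above 1 means US goods and services are relatively expensive, which discourages exports, encourages imports, and reduces net exports.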
Integrate Service and Ecommerce Data

After completing this unit, you’ll be able to:

- Map data between data source objects.
- Set up Lightning web components for sharing data across clouds.

Now that global profiles are created and imported back into source systems, Pia is ready to move on to NTO’s next business objective: providing customer service agents with an integrated view of customers’ ecommerce and service data. She does this by setting up the Service + Commerce integrated experience.

As you learned in Customer 360 Data Manager Fundamentals, we use the term integrated experiences to describe ways you can access and use data across systems. There are lots of opportunities to share and view data across systems, but in this module, we look specifically at Service + Commerce.

As we explored in Customer 360 Data Manager Fundamentals, Customer 360 Data Manager uses a hub and spoke design that allows you to map all data sources with a central data model called the Cloud Information Model (CIM). For example, use Customer 360 Data Manager to map the Global Party Id field in the CIM Individual entity with the Global Party field in the Service Cloud Account object. Or map the Commerce Cloud Order object to the Sales Order entity in the CIM.

To set up the Service + Commerce integrated experience, Pia first needs to map the objects from NTO’s data sources with the CIM. After reviewing Mapping Sets on Salesforce Help, she learns that she can use these mapping set templates to meet NTO's needs:

- Commerce Cloud: Order to CIM: Sales Order—Map your B2C Commerce Order object with the CIM Sales Order entity.
- CIM Individual to Salesforce Org: Account—Map the Individual entity in the Cloud Information Model with the Account object in your Salesforce org.
- CIM Individual to Salesforce Org: Contact—Map the Individual entity in the Cloud Information Model with the Contact object in your Salesforce org.
- CIM Individual to Salesforce Org: Lead—Map the Individual entity in the Cloud Information Model with the Lead object in your Salesforce org.

After creating the mapping sets, Pia needs to activate a mapping version for the mapping sets. This process saves a version of the mapping sets that is referenced by Lightning web components that are set up later in their Salesforce org. She navigates to Data Mapping in Customer 360 Data Manager to get started.

Use the Trailhead Simulator to practice creating and activating a mapping set in Customer 360 Data Manager along with Pia.

- Launch the Trailhead Simulator. Click Begin.
- In Customer 360 Data Manager under Setup, click Data Mapping.
- Click New.
- Click Template.
- Click Next.
- Select the mapping sets to create from the list of templates. For this example, select: Individual to Account from Cloud Information Model to Service org.
- Click Create.
- Click Individual to Account to open the mapping set editor.
- Expand the Account object.
- Expand the Individual object.
- Click the scroll bar to scroll through the mapping set and review the field mappings. In your live environment, you can update field mappings to fit your organization’s needs.
- Click Data Mapping to return to the list of mapping sets.
- Click Activate Mapping Version.
- Enter a unique mapping version name using alphanumeric characters, periods, and hyphens. In this example, enter
- Add a description for the mapping set version that helps you and other users understand changes to this version. In this example, enter
- To activate the new mapping version, click Create.

Create Service Console View of Order Data

With mapping sets created and activated, Pia lets the NTO Service Cloud admin, Felix, know that it’s time to set up Lightning web components to view the Commerce Cloud data in Service Cloud. She provides Felix with the mapping set version name she entered when activating the mapping sets.
Felix reviews the prerequisites and steps in Salesforce Help and is ready to go.

Configure External Data Source

Felix starts by adding the Salesforce Connect: Data Federation Service (DFS) external data source in the Service Cloud org. Completing this configuration allows Lightning UI components to exchange data with other clouds. Felix navigates to Setup in Service Cloud, then uses Quick Find to go to External Data Sources. He follows the steps in Configure External Data Sources to add DFS and sync the SalesOrder object.

Add Lightning Web Components

Now Felix can add the Lightning web components that make the data visible to service agents. First, while he’s still in Service Cloud Setup, he confirms that the global party ID field on the Account, Contact, and Lead objects is set to visible and read-only. He wants to make sure that service agents can see the data, but not edit it.

Felix navigates to User Interface | Lightning App Builder. He could create a new page, but Felix has already decided to modify an existing Lightning record page. He updates the page layout to add columns for the two Customer 360 Data Manager components he’s adding:
- C360 Order History—Shows the ecommerce order history for the person account that was matched to B2C Commerce orders through Customer 360 Data Manager.
- C360 Global Profile—Shows the matched Customer 360 profile data related to the person account record in the page layout.

Felix then adds the Global Party ID field to the page layout. He activates the layout and saves the change.

Finally, Felix lets Pia and the rest of the team at NTO know that they just worked together to enable their service agents to get all the customer order information they need directly in Service Cloud. They celebrate with a happy dance!

Pia understands that configuring Customer 360 Data Manager and setting up integrated experiences is just the beginning. Having a solid plan for how data is shared and updated is critical for NTO to establish next.
The NTO team is now ready to make Customer 360 Data Manager a regular part of their data management process.

Ready to Get Started with Customer 360 Data Manager?

You learned how to organize an implementation team, set up users, run data jobs, and build integrated experiences. You completed simulations to connect a data source, create global profiles, and more. Are you ready to set up Customer 360 Data Manager? Review the Planning Checklist on Salesforce Help to prepare for your implementation. Then configure Customer 360 Data Manager and enjoy the benefits of centralized customer data management.
Dragon King’s Son-In-Law, Chapter 571

“Hi…” Hao Ren looked at them and nodded slightly. Then, he took his textbooks and walked out of the cafeteria.

When Xu Ke’s master gave the golden shield to Xu Ke, he ordered Xu Ke to feed it with nature essence every day. If Xu Ke weren’t desperate to get Hao Ren’s technique, he would not use this treasure!

This was the first love letter Hao Ren had ever received, so there was some meaning in it. However, Hao Ren tossed the love letter into the trash can by the aisle.

They had never thought that such a nobody would be in the spotlight instantly. Hao Ren had even become the ‘handsome man in the shirt’ that many girls adored!

What Hao Ren did not know was that this kind of supreme spiritual treasure was not supposed to be tamed but rather worshipped by cultivators who were weaker than it.

A cute girl came walking in and placed a note on Hao Ren’s desk. This girl was wearing a pair of white stockings. She wasn’t tall nor short and held the book, Modern Art History, in her hand. Her make-up was light, and her eyes were big.

If someone could beat the gorgeous Xie Yujia and then surpass the Lu sisters, becoming Hao Ren’s fiancée, that would truly be a tremendous move!
Those ordinary girls looked at Hao Ren as if he suddenly seemed different. Not only had Huang Xujie had to surrender to him, but the Calligraphy Club had suddenly become really popular. They now felt like Hao Ren was a mythical figure.

More than half of the dragon cultivators in East Ocean University were female. These female cultivators had good looks and clean skin; they were generally very pretty and had refused many people who passionately pursued them.

She smiled as she saw Hao Ren’s shocked face. Then, she turned around and ran out of the classroom. She was wearing a checkered mini skirt, and her legs were slim and long; she had a wonderful figure.

Hao Ren blushed slightly; he didn’t mean to destroy the field. Lu Qi had filled the holes, but the grasses couldn’t be regrown that fast. That was why it looked as if all the grasses had been upheaved.

Hao Ren ate some pancakes and drank a cup of soy milk in the cafeteria.

Hao Ren skimmed through it quickly. Then, he turned to look at Xie Yujia and found her pouting and looking a little jealous.

Xie Yujia had gone to a quiet place to cultivate last night, so she didn’t know that Hao Ren and Xu Ke had fought.
“Alright. I don’t blame you,” Xie Yujia said with a pout.

“It’s nothing…” Hao Ren shook his head. “How was last night? There weren’t any problems sending Zhao Yanzi to school, right?”

There would be a university announcement very soon. Depending on how Vice Principal Lu handled the situation, the perpetrator might or might not be identified. Summer break had just ended, and it was time for the field to get fixed up…

Shu! Shu! Shu! The golden shield let out a bright golden light. Then, it shrank to the size of a coin and slowly flew toward Hao Ren’s palm.

To Hao Ren though, Jiang Yuan was not interesting. When he saw that even Lu Linlin and Lu Lili were pouting, Hao Ren smiled helplessly. He raised his hand and threw the postcard into the trash can.

“Buddy Hao! Buddy Hao!”

Whether they were ordinary girls or female dragon cultivators, they all regretted not trying to get to know Hao Ren better before.

“That’s Liu Yan, a second-year student. She’s the most popular girl in the Business Department. Look how sweet her smile is when she’s smiling at Hao Ren!”
By using these methods, we expect to only need to see you for one, or occasionally two, sessions. Slowly, he lets go. The jangled thoughts fill my conversations, my dreams. Back in the theatre, Parker emerges with a ball python bigger than last time. How is this overcome? Parker, 29, was raised in South Africa, where he got his first pair of snakes. All venomous snakes are accessed away from the corridor and handled with sticks. Did I know snakes are deaf?

If you live in a city like Chicago, where snakes are pretty rare, the fear of snakes may not cause you any trouble at all. So if you come to see me or any qualified professional for help in overcoming the fear of snakes, the treatment will most likely involve exposure to an actual snake, in addition to whatever work we might do with pictures and videos. So it would be hard to acquire a phobia of bunny rabbits, and easier to acquire a fear of snakes, or heights, or water, because there were times in our evolutionary history when those objects could pose a threat to survival. Hypnosis and NLP work. Why do people fear snakes? However, for the most part, it really doesn't matter how you became afraid of snakes. Sometimes people are reluctant to use the intensive treatment, and prefer a progressive approach. If you're doing this on your own, be sure to also review my article on exposure treatment.

The reason we got into this research was because I've always been fascinated by how it is that people develop it. He passes me the tail. That the larger ones can live to age 35? The zoo tour begins with the incubators.

When I was younger, I was afraid of making mistakes and failure. You may have had a difficult encounter with a snake at some point; you might have observed someone else become afraid in the presence of a snake; or you may have read, or heard, scary stories about snakes.
March 3

He leaves me at the demo theatre and reappears with a chubby little ball python, a non-venomous constrictor, curled up on his hands. No worries, he assures me over the phone. Then a moment of calm. To try and get to the bottom of my new-found anxiety, I decided to explore some of the literature around snakes and the neuroscience of fear. There are suggestions that our fear of snakes is in-built, rather than something we learn as we grow up. One well-established method of treating snake phobias has been developed by Dr. Martin Antony. Just like a purse. Psychologists found that both adults and children could detect images of snakes among a variety of non-threatening objects more quickly than they could pinpoint frogs, flowers or caterpillars. Despite the huge fright it gave me, part of me felt very lucky to be visited by such a beautiful creature — its lime-green diamond patterned scales that I got to see up close. As primates we learned to fear events and situations that once threatened our survival. Whose voice gets a happy sound when they realise it is me on the telephone?

Snakes, fear, and my primal brain
Posted on September 20, by Sarah McKay

It was a hot hot Friday afternoon three weeks ago when I stepped out my front door to find the fella to the right sunning himself on my front deck.

The fear of snakes, or ophidiophobia, is the second most common phobia in the world. Nearly one-third of adult humans are believed to have an intense fear of snakes. Most people with ophidiophobia can lead normal lives, as they do not have to confront the object of their fears under normal circumstances.

My biggest fear is being anywhere with snakes under any circumstances. Living in the city I have never faced a snake problem unless visiting the zoo, but somehow I’ve always managed to think that they’re just going to pop up one day. New Year’s Resolution: Finally Get Over My Fear Of Snakes!
“Ophidiophobia or ophiophobia is a particular type of specific phobia, the abnormal fear of snakes. It is sometimes called by a more general term, herpetophobia, fear of reptiles and/or amphibians.”

Oct 31 · The new study builds on years of experiments by psychologists. They found that the widespread fear of snakes stems from a perceptual bias: people recognize snakes faster than other, non-threatening objects.
Application Servers were cooked up as an idea to help ration a scarce resource: compute power in the middle tier. This idea came from mainframe land, initially, where CPU was scarce and expensive, and therefore a great deal of time and effort and money was spent on dicing up the mainframe CPU to various users, throttling workload, keeping load off the database, "passivating" transactions until load diminished after hours, and so on. In those days people spent millions of dollars on software that kept track of transactions on the mainframe, so as to be able to perform "chargeback" - internal cost accounting for the use of the expen$ive mainframe. Yes, people spent boatloads of money so they could do billing to internal departments for the use of the mainframe.

The thing is, Intel (and later AMD), Cisco (et al), EMC, Microsoft, and Linux rendered the entire idea moot. Computing became cheap. Really cheap. There is really, really, really no need to ration mid-tier compute resource. What does a dual-CPU server go for these days? How many of those could you deploy for the annual salary of ONE IT guy? This is an inversion of the old economics of mainframe computing, where compute was the expensive, scarce resource, and people were relatively (!!) cheap. Now people are the expensive part, and compute is cheap.

App servers, and all the bells and whistles that they have for restricting access to compute, or parceling it out, or throttling it, or even "monitoring" transactions for purposes of chargeback... these things are not necessary when you have racks of cheap AMD servers.

Another thing app servers did was shield the database from workload - essentially, offload work from the db server. Time was, developers put the business logic in a stored procedure and, boom, there was your app. But there were scalability issues with this approach. Now, though, stored procs are fast and efficient. Database servers can scale up, cheaply.
Chances are, you do not have one of the top-100 workload volumes in the world, and anything short of that can be borne on Intel hardware, with stored proc logic. One major problem with the stored proc approach, though, is that it is still fairly hard to author stored procs in mainstream languages (Java, C#, VB, etc.) and manage them. Yes, I know about SQL CLR and Java VMs managed by the DB. But those aren't mainstream approaches. Also, the DB Admin doesn't like code jockeys messing up his utilization graphs. For these reasons there is still a desire to write business logic in a separate language dedicated to the purpose. And there's still a desire to run and manage that logic on dedicated compute resources.

But... a traditional "app server"? No. It doesn't make sense. Put all your logic on an Intel server and let er rip! If you need more scale, clone the box. All the business logic is stateless (right?), so you can scale OUT. Use 3 "app" server machines, or 4 or 5 or however many you need. All running exactly the same code. Clones of each other.

Regardless of how many business logic machines you have, do not physically distribute the workload. For max scalability, strive to keep each transaction on a single box. That is a recipe for efficiency and optimal use of resources.

It is a best practice to use logical tiers in your application architecture. This makes it easier to develop and maintain. But do not suppose that logical separation must imply, or even recommend, physical separation. It does not.
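The scale-out argument rests on one property: the business logic holds no per-machine state, so any clone gives the same answer for the same request. A minimal sketch of what that looks like (the function and catalog names here are invented for illustration, not from any product):

```python
# Stateless business logic: every clone of this module computes the
# same answer for the same input, so a load balancer can send any
# request to any box. CATALOG is shared, read-only reference data.
CATALOG = {"widget": 5.00, "gadget": 12.50}

def price_order(items):
    """Compute an order total from (sku, qty) pairs.

    No module-level mutable state is read or written; the result
    depends only on the arguments. That is what makes "clone the
    box" a valid scaling strategy: no session affinity, no
    cross-machine coordination, each transaction stays on one box.
    """
    return sum(CATALOG[sku] * qty for sku, qty in items)
```

Because the function is pure, adding capacity means adding identical machines behind the load balancer, which is exactly the "clones of each other" setup described above.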
In AppleScript, finding only bold words in TextEdit causes app to hang

Just trying to find a vanilla AppleScript way of setting a variable to all bold words of a document. I've looked for ways using AppleScript in Word, Pages & TextEdit, and TextEdit seems to be the only one (could be wrong, though). The good news is that the following script works, but the bad news is that if the document is over 2 pages with, let's say, ~50 bolded words, TextEdit hangs. Any other ways to get bolded words using AppleScript?

    tell application "TextEdit"
        return words of text of document 1 where font contains "Bold"
    end tell

Thanks

It may be that TextEdit is being unreasonably slow. I just used the following code for a document with 3600 words, with some random bold words interspersed throughout. It took way too long. I'd look into using a different scriptable app, like Tex-Edit Plus (still goin' strong, still great). The reason I used both "Black" and "Bold" is that some fonts use a "Black" variant instead of "Bold" when you make the text bold (at least in TextEdit). Just to be clear, the following code works, but it is painfully slow. It may be that your code, if you waited long enough, works, too.
:-( BUT keep reading :-)

    tell application "TextEdit"
        tell document 1
            set thisManyRuns to count of attribute run of text of it
            repeat with r from 1 to thisManyRuns
                if font of attribute run r of text of it contains "Black" then
                    -- do stuff
                    set color of attribute run r of text of it to {35555, 0, 0}
                end if
                if font of attribute run r of text of it contains "Bold" then
                    -- do stuff
                    set color of attribute run r of text of it to {35555, 0, 0}
                end if
            end repeat
        end tell
    end tell

I just ran this text in Tex-Edit Plus and it completed in about two seconds:

    tell application "Tex-Edit Plus"
        tell document 1
            set thisManyRuns to count of style run of text of it
            repeat with r from 1 to thisManyRuns
                set thisStyle to style of style run r of text of it
                if on styles of thisStyle contains bold then
                    -- do stuff
                    set color of style run r of text of it to {0, 35555, 0}
                end if
            end repeat
        end tell
    end tell

... so you might want to switch to that. Just a caveat that I had to put the style into the thisStyle variable before querying for the on styles property -- it would not work if I tried to do that in one line. I was working with an rtf file. (Actually, less than that: it took 1.4 seconds with a document of 3600 words.)
I don't want to stretch this discussion out too much because I think the point has been made, but a few comments below.

--On Friday, 28 November, 2008 10:58 -0500 Andrew Sullivan wrote:

On Fri, Nov 28, 2008 at 10:09:16AM -0500, John C Klensin wrote:

ones) are the most likely targets of attacks. If they are, then having DNSSEC verification only to those servers, with client machines trusting the nearby caching servers without DNSSEC protection, provides very little protection at all. Put differently, if we cannot extend DNSSEC protection and verification to the desktop, DNSSEC provides very little marginal security advantage.

This doesn't actually follow, because there could be another way to validate the link between the end host and the validating recursive resolver. For instance, we could use TSIG between a non-validating stub resolver and a validating recursive resolver in order to ensure that attacks between those two points aren't successful. If I know I have the right node doing the validation for me, then attacks against the ISP's validating recursive resolver require complete takeover of that machine: by no means impossible, for sure, but a bigger deal than just spoofing answers to a stub.

Sure. But I suspect that the number of systems that fully support TSIG but do not support client validation is small. I'd be happy to be proven wrong about that. One could also run the DNS queries between stub resolver and validating recursive resolver over a properly-validated and secured tunnel, but the number of those isn't huge either. We could also debate what is, and isn't, difficult -- depending on network topology and operations quality, it is often much easier and more effective in practice to mount an attack against a server than against the end host.

That said, I don't want to make light of the end-point problem, since TSIG between a stub and a recursor isn't a trivial problem today either.
Moreover, since end nodes in many environments get their recursor's address(es) via DHCP, and since that path is pretty easy to compromise, the whole edifice rests on a sandy foundation. Nevertheless, I just want to be clear that having every end node in the world doing RFC 4035-and-friends validation is not the only path to useful DNSSEC.

I would never go so far as to say "only path to useful...". I'm actually a big believer, in the present environment, in LAN-local validating caching resolvers. But that is not a popular setup, especially for the residential, SOHO, and small business setups that are often at the greatest risk. But, unless one can either take advantage of special cases or harden the servers and data paths well past current norms, I don't see the potential of DNSSEC living up to the expectations and hype unless one has end node (or at least end-network) validation.

As several people have pointed out, effective use of DNSSEC to the desktop requires sufficient work on APIs and UIs that an application, or the user, can distinguish between "signed and validated", "signed but does not validate", and "unsigned".

Why? It seems to me that acceptable definitions of "works" and "doesn't work" in a security-aware context could include "validated or insecure delegation" and "bogus delegation" respectively. In my opinion, any plans that involve users making sensible security trade-offs due to validation failures will get us right back where we are with self-signed or expired (or both) certificates for https. It seems a perfectly good idea to me that "bogus" means exactly the same thing as "site off the air".

We are in agreement about end users doing security validation and decision-making. But, unless you can deploy DNSSEC, with signing of all relevant zones, on a flag day basis, the end user software needs to be able to distinguish between "address validated with DNSSEC" and "address accepted because no signatures are present".
Otherwise, one has to treat every address as equally untrusted, and that is more or less equivalent to DNSSEC not being present. Whether it is appropriate to treat "failed validation" as equivalent to "no domain" or "no server response" is a much more subtle question, one I'm much more comfortable trying to answer with a signed root and tree than I am with lookaside.

the middlebox problem, with servers not under the user's control making decisions about whether or not particular strings are resolved or reported to the user machine as non-existent. I have not been following the DNSSEC protocol work closely enough to be sure, but my impression is that such protocol work has not even been started, much less concluded and standardized.

You have exactly two options: allow the recursive server to make the decisions you seem to dislike -- and I think people who like that approach think it's a feature, not a bug -- or else do validation out at the end nodes. The end node gets a bit to tell upstream validators that it is going to do all validation itself, and those upstream systems are required to pass along all the data necessary for such validation. So it's still possible to do everything at the end node.

I don't either like or dislike that recursive server model. I just think that the quality of security/trust improvement it provides is questionable given current operational realities and, perhaps more important, that only a very small number of successful attacks on such servers that people are depending on could bring the whole DNSSEC concept into serious disrepute. This is quite independent of the question of whether applications have the ability to understand the results from the validator.

I agree that OS APIs seem to be missing. I'm not sure that's something the IETF ought to be solving, but I'd happily entertain arguments either way.

IMO, the dividing line is precisely between doing validation at the endpoints and doing it somewhere else.
If the answer is "endpoints", then it is perfectly sensible and consistent with IETF history to say "local problem". On the other hand, if validation is at the caching resolver, then it seems to me that the model for communicating between the stub resolver and that system is precisely an IETF problem (again, if only to be sure that the application can tell the difference between "validated" and not).

several of them, do we need search rules for look-aside

My personal reading of the current specifications is that, if you have at least one path to validation, then validation is supposed to work. So search rules ought not to be needed. What the implementations actually do is currently at variance with my interpretation, however.

Again, I'm much more concerned about the current operational practice and how it is evolving than I am about the theory in the specs. I know that, in any situation like this, "single authoritative tree" is a lot easier, and a lot harder for either a bad guy or carelessness to diddle in subtle ways without being caught and held accountable, than having multiple arrangements. And it is quite clear that we don't have that tree today. To paraphrase Bill, if there are two possible validation paths, using two different sets of lists on different servers, there is the possibility of different answers on different paths. And, precisely because this mechanism is supposed to provide security and trust validation, playing "see no evil, hear no evil, anticipate no evil" with that particular risk is wildly unwise.

I'm in favor of getting things signed just as quickly as that is feasible -- either from the root down or using look-aside mechanisms that are, themselves, fully validated and with good tools for dealing with potential conflicts.
But my reading of Russ's proposed experiment had to do with demonstrating that DNSSEC is actually useful in dealing with threats and plausible attack scenarios, not just demonstrating that one can safely sign zones and deploy validating software in some places on the network. For that demonstration of effectiveness, we are not, IMO, quite there yet, and it is frightening that we are only having the discussion now (from that point of view, the proposed experiment has already succeeded because we [finally] are having it).

Ietf mailing list
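The thread keeps returning to three outcomes that end-node software would have to distinguish: validated, unsigned, and bogus. As an illustration only (this is not a real resolver API; all names here are invented), the decision logic discussed above might look like:

```python
from enum import Enum

class Validation(Enum):
    """The three DNSSEC outcomes discussed in the thread."""
    SECURE = "address validated with DNSSEC"       # signed and validates
    INSECURE = "no signatures present"             # unsigned delegation
    BOGUS = "signed but does not validate"         # failed validation

def resolve_policy(state: Validation) -> str:
    """Map a validation state to application behavior.

    Following the argument above: BOGUS is treated like "site off
    the air" (no user prompt, no security trade-off dialog), while
    SECURE and INSECURE stay distinguishable so that signed answers
    can earn extra trust during incremental deployment.
    """
    if state is Validation.BOGUS:
        return "fail-closed"        # same as no server response
    if state is Validation.SECURE:
        return "use-and-trust"
    return "use-without-trust"      # legacy, unsigned zone
```

The point of the sketch is the API shape: an application gets a tri-state answer, not a boolean, which is exactly the "works / doesn't work" distinction the thread argues current OS APIs fail to expose.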
How can I obfuscate a test in code to prevent tampering with response processing?

I am looking for a way to obfuscate (in the object code) a test - something like what might be done to check that a license key is valid. What I am trying to prevent is someone searching through an image binary for the code that processes the response.

    bool checkError = foo();
    if ( checkError ) // I'd like to avoid making a simple check like this one.
    {
        // process response
    }

This is a simplistic example, but not a recommended approach:

    int check = 71 * 13;
    check += 35 * isValid(); // will only return 0 or 1

    // later (delayed execution of response)
    if ( check % 71 )
    {
        // process response
    }

EDIT: Just to clarify, the actual test is already finished and I'm getting a pass/fail return. My response processing will be a basic jmp, and I would be interested in pointers on how to obfuscate the location of the jmp.

You should know, of course, that this is not a trivial problem. Large software companies like Microsoft spend millions trying to prevent people from bypassing their protection, and yet people still manage to circumvent their efforts.

@Charles Salvia: This is true. I'm not looking for a protection degree from just this question. ;) However, this is my first time attempting something along these lines and I have to admit that I don't know where to begin.

Macrovision even bought InstallShield to throw their weight behind copy-protecting software. It has not really stopped anything, though. Hardware dongles have become pretty rare since they didn't do much either (everything always comes down to software eventually, at which point it becomes hackable). SlySoft regularly blacklists leaked keys, but others keep releasing new ones with every release. I find obfuscation to be best for fun (like the contests) rather than for actual copy-protection.

One approach would be to put the code that does the license check into a separate DLL.
In the main application, load the DLL at runtime and calculate the checksum of the DLL itself. The app stores the checksum that was calculated when the DLL was built. If the checksums don't match, you have several options: show a wrong-version message - a bit obvious; do not call the license check - less obvious, but it will be noticed when the attacker wonders why the license check doesn't get called; or call a function with a similar name to the real license-check function.

Think of it as using public key encryption. Use a public key as part of the config and have a private key built into the app. If they mess with the public key, the digital signature of the app will be compromised in a detectable way. I agree with @camccann that it would help to understand the kind of attack you expect. As a last resort, split the license check into as many parts as is feasible to make it harder to bypass by changing a single branch point.

[EDIT] Another thought would be to use a state machine. See the command structure example in the top answer to this question. Put the evaluation of the license check into the form of a hash lookup, and put a set of dummy function calls into an array along with the proper one. The decision code that resolves the license check into a table/hash lookup for the appropriate function will not look like your typical if() { pass; } else { fail; } construct. Two benefits: 1) there isn't a boolean condition to bypass, and 2) they can't do a simple JMP instruction without knowing the address/name of the function to pass control to.

SO thread on a state machine tutorial. SO thread on state machine implementations.

Unfortunately, I feel this misses the area the OP was concerned about. He isn't so much worried about the actual process of checking the license. Instead, he's getting a boolean that represents a pass/fail, and needs to operate specific code based on that condition.
However, this boils down to a very simple jmp in the assembly, and a simple hex editor can subvert the conditional check rather trivially if they know where it is. He wants to obfuscate the location of this jmp (if statement), not the actual license check itself.

@Kelly French: Thank you, kindly! This is along the lines of what I was looking for. :) I like the idea of using state machine techniques.

Obfuscation doesn't prevent, merely discourages. A sufficiently skilled and determined attacker will always be able to circumvent whatever obfuscation you use, so what you need to know first is: What kind of people are you trying to thwart here?

Thank you for making that distinction. I should have said "discourage" in my question. The people I'm trying to thwart are probably novice attackers, as more experienced ones will most likely find a way to get around even the trickiest methods.

The Secure Programming Cookbook (O'Reilly) has a whole chapter on anti-tampering (the actual book has the chapter; not sure what's available on the website). Neat stuff.

You could cause a crash by sprinkling the check all over, like:

    T* data = (T*) new char[sizeof(T) * (check() ? 1 : 0)];
    array[i + 1 * (check() ? 0 : 42)].doStuff();

There's a nice article at Gamasutra about crack protection in Spyro that does similar things, then goes further by making the game not crash, just work worse and worse. (You never hit enemies, you walk slower, certain critical objects disappear randomly, etc.) Fun read for all programmers, and perhaps useful to you.
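The dispatch-table idea from the state-machine answer can be sketched concretely. This toy is in Python for readability (a real implementation would live in the compiled binary being protected), and is_valid is a stand-in for whatever the actual license check returns:

```python
def is_valid() -> bool:
    # Stand-in for the real pass/fail license check; always "valid" here.
    return True

def process_response() -> str:
    return "licensed"

def decoy() -> str:
    # Dummy entry sharing the table with the real handler.
    return "noop"

# Dispatch table: the pass/fail decision becomes an index computation
# into this array instead of an if/else, so there is no single
# conditional jump for a hex editor to flip.
DISPATCH = [decoy, decoy, decoy, process_response]

def run_check() -> str:
    # Fold the boolean into arithmetic, mirroring the question's example:
    # check is 923 when invalid, 958 when valid.
    check = 71 * 13
    check += 35 * is_valid()
    # (check % 71) is 35 when valid and 0 when invalid, so the index
    # lands on 3 (real handler) or 0 (decoy) with no visible comparison.
    return DISPATCH[3 * ((check % 71) // 35)]()
```

In machine code the equivalent of `DISPATCH[...]()` is an indirect call through a computed address, which is exactly the property the answer highlights: there is no address-visible JMP target to patch without first understanding the arithmetic.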
STACK_EXCHANGE
You can use the UPDATE statement, followed by the name of the table or view, to change single rows, groups of rows, or all rows in a table. As in all data modification statements, you can change the data in only one table or view at a time. The UPDATE statement specifies the row or rows you want changed and the new data. The new data can be a constant or an expression that you specify or data pulled from other tables. If an UPDATE statement violates an integrity constraint, the update does not take place and an error message appears. For example, if one of the values being added is the wrong data type, or if it violates a constraint defined for one of the columns or data types involved, the update does not take place. A simplified version of the UPDATE syntax is:
UPDATE table-name SET column_name = expression WHERE search-condition
If the company Newton Ent. (in the Customers table of the SQL Anywhere sample database) is taken over by Einstein, Inc., you can update the name of the company using a statement such as the following:
UPDATE Customers SET CompanyName = 'Einstein, Inc.' WHERE CompanyName = 'Newton Ent.';
You can use any expression in the WHERE clause. If you are not sure how the company name was spelled, you could try updating any company called Newton, with a statement such as the following:
UPDATE Customers SET CompanyName = 'Einstein, Inc.' WHERE CompanyName LIKE 'Newton%';
The search condition need not refer to the column being updated. The company ID for Newton Entertainments is 109. As the ID value is the primary key for the table, you could be sure of updating the correct row using the following statement:
UPDATE Customers SET CompanyName = 'Einstein, Inc.' WHERE ID = 109;
You can also modify rows from the result set in Interactive SQL. See Editing result sets in Interactive SQL. The SET clause specifies the columns to be updated, and their new values. The WHERE clause determines the row or rows to be updated.
If you do not have a WHERE clause, the specified columns of all rows are updated with the values given in the SET clause. You can provide any expression of the correct data type in the SET clause. The WHERE clause specifies the rows to be updated. For example, the following statement replaces the One Size Fits All Tee Shirt with an Extra Large Tee Shirt:
UPDATE Products SET Size = 'Extra Large' WHERE Name = 'Tee Shirt' AND Size = 'One Size Fits All';
You can use a FROM clause to pull data from one or more tables into the table you are updating.
Copyright © 2008, iAnywhere Solutions, Inc. - SQL Anywhere 11.0.0
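The UPDATE pattern shown above can be tried outside SQL Anywhere as well. The sketch below uses an in-memory SQLite database via Python's sqlite3 module; the Customers table and its two rows are an ad-hoc stand-in for the sample database, not its real contents:

```python
import sqlite3

# Tiny stand-in for the sample database's Customers table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (ID INTEGER PRIMARY KEY, CompanyName TEXT)")
conn.execute("INSERT INTO Customers VALUES (109, 'Newton Ent.'), (110, 'Other Co.')")

# Update by primary key, as in the last example above: only row 109 changes.
cur = conn.execute(
    "UPDATE Customers SET CompanyName = 'Einstein, Inc.' WHERE ID = 109"
)
print(cur.rowcount)  # rows affected by the UPDATE
print(conn.execute(
    "SELECT CompanyName FROM Customers WHERE ID = 109"
).fetchone()[0])
```

Leaving off the WHERE clause in the statement above would rename every company in the table, which is the behavior the documentation warns about.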
OPCFW_CODE
Off the top of my head, in addition to NYPL, I would look at University of British Columbia’s Open Collections site (see the release announcement from Paul Joseph about features & APIs: [log in to unmask]) and the World Digital Library (see also the APIs page: http://api.wdl.org/). Both of these sites strike me as exemplary for putting as much thought into the APIs as they do their front ends. And both support IIIF, the International Image Interoperability Framework<http://iiif.io>, and its APIs. If you are serving digital collections, please (PLEASE!) consider also supporting IIIF’s 2.x APIs. You might also be interested in Blacklight<http://projectblacklight.org/>, and its digital collections / exhibits plug-in Spotlight<http://spotlight.projectblacklight.org/>. Blacklight is open source, responsive, and can include a number of add-ons / APIs <https://github.com/projectblacklight/blacklight/wiki/Blacklight-Add-ons>, like OAI-PMH<https://github.com/cbeer/blacklight_oai_provider>, SiteMaps, oEmbed, and Map views. It has hundreds of instances worldwide, and is used for a variety of purposes, including catalogs, digital repository front-ends, presentation of digital collections, and a front-end to Hydra<https://github.com/projectblacklight/blacklight/wiki/Examples>. Spotlight is an extension of Blacklight, and provides curators and collection managers with a self-service UI for building a digital collection showcase. It adds context, order, narrative and customizable search, facets and display fields to a digital collection site through a WYSIWYG UI. Some Spotlight sites can be found at Stanford’s exhibits page: https://exhibits.stanford.edu/ I look forward to seeing UNT’s new Texas Portal; they do good work. (No pressure, Mark :) On Feb 27, 2016, at 1:26 PM, Matt Sherman <[log in to unmask]<mailto:[log in to unmask]>> wrote: I am asking about interesting digital collection tech due to some personal research I am doing.
I have looked at a bunch of digital collection sites lately, and outside of NYPL <http://digitalcollections.nypl.org/>, I have mostly seen bland, non-responsive but functional CONTENTdm sites, or static HTML exhibit sites from the late 90s and early 2000s. Given the kind of web tools and UX methods we have now, I am curious whether people can point me to, or tell me about, more interesting, user-friendly designs/systems? I see talk of responsive design and data interoperability via OAI-PMH and APIs, but I must be looking in the wrong places, as I am seeing very little evidence of it being put into action. If anyone can point me to more interesting pastures I would appreciate it.
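For anyone curious what "supporting IIIF" looks like in practice: the IIIF Image API addresses images with a fixed URI template, {scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}. Below is a minimal sketch of building such a URL; the server and identifier are hypothetical, chosen only for illustration:

```python
def iiif_image_url(base, identifier, region="full", size="full",
                   rotation=0, quality="default", fmt="jpg"):
    # IIIF Image API 2.x URI pattern:
    # {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# Hypothetical server and identifier, for illustration only:
# request a 1024x1024 region of the source, scaled to 512px wide.
url = iiif_image_url("https://iiif.example.org/image/v2", "page-042",
                     region="0,0,1024,1024", size="512,")
print(url)
```

Because the URL structure is standardized, any IIIF-aware viewer (Mirador, Universal Viewer, etc.) can request tiles and derivatives from any compliant server the same way, which is exactly the interoperability the list discussion is after.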
OPCFW_CODE
First ISSI Meeting on “Coordinated Observations of the Gravitational Focusing Cone of the Local Interstellar Medium at 1 AU”
Space Science Center and Department of Physics, University of New Hampshire
John Raymond, SOHO UVCS
George Gloeckler, ACE and Ulysses SWICS
Daniel Rucinski, (Maciej Bzowski), Modeling
(Howard Ogawa, Darrell Judge), Don McMullin, SOHO CELIAS SEM
Rosine Lallement, EUV Absorption, SOHO SWAN
(William Thompson), SOHO CDS
Manfred Witte, Ulysses GAS
Hans Fahr, Modeling
Sergei Chalov, Modeling
Reinald Kallenbach, Ruedi von Steiger, ISSI
Eberhard Möbius, Chair
Invited: Toshio Terasawa, Hirotomo Noda, University of Tokyo, Japan, GEOTAIL, NOSOMI
______: Either confirmation to me about attendance or with ISSI about accommodations
( ): Won’t be able to attend.
In the following I will lay out the tentative structure of our meeting and related plans. Because we are a small group this is flexible, but the better we plan ahead, the more effective our meeting will be. Please feel free to send me any comments, criticisms, additions, etc. that you might have. Titles and names are suggestions, of course, completely biased by my narrow view. I expect that the program and sequence will develop as we go. Also, please let me know of any specific presentations, important for our work, which you may want to contribute. However, please bear in mind that this should be a working meeting with detailed on-site work, not a meeting filled with presentations. We do need to understand each other’s data and models, though, in order to get going. I still feel that the best way to do this is with some focused introductory talks. For our detailed work we will need the main data sets on site, meaning on one of the ISSI computers. I have talked about this with Ruedi. ISSI is prepared to assist us with their computers. In order to hit the ground running we will need the data ready before our meeting starts. I have already talked with some of you about contents and possible formats.
The idea is not to ship any copies of raw data to ISSI. What we are interested in are the products of your work. However, the data need to be traceable, meaning:
- results should still contain any ancillary data inputs explicitly
- if models are used to derive final results, it would be better to include both the final result and the observable from which it has been derived (we would like to be able to explore how the result changes with parameter modifications in the model and with the use of alternate models)
Also, it would be a good idea to have some of the key models available: not necessarily the code, which may be very elaborate, but the results in a parameterized form. Mulling this over, I think both data and results from models would be best in a spreadsheet format. This will allow easy access and simple manipulations on site. Let me give you my first cut below.
- Pickup Ions: Suitable time averages (1 day?) of normalized energy flux or phase space density spectra over the interesting time period will be a reasonable data set. Other information?
- EUV Lines (UVCS and EUVE): A set of calibrated absolute intensities with the viewing directions in GSE coordinates and heliospheric (ecliptic) coordinates, along with spacecraft position, will be needed. Other information?
- He Density Models: Density as a function of distance from the sun and distance from the inflow vector (with the inflow vector given in heliospheric coordinates). Sets of densities should be available, parameterized according to ionization rate, inflow speed, and LISM temperature. A normalized density would be preferable, so that we can adjust the absolute density as a simple parameter. Other parameters?
- Neutral Densities: Manfred, I have not come up with a good idea yet on how to implement your data. Do you have a feasible idea?
- Solar EUV Monitor Data: Integral fluxes and fluxes in the 304 Å band as a function of time (I believe 5 minute averages are readily available) along with the calibration factor between flux and ionization rate of He.
- CDS Full Sun Scans: Bill, thank you for your data set. This will be great for a comparison with the SEM data, which we have for the entire time periods.
- Ancillary Data: Solar wind proton, alpha fluxes, proton densities and speeds as a function of time (from ACE and/or SOHO); cross-sections for charge exchange. Daniel, how do we treat electron ionization without too much hassle?
- Anything else?
Structure of the meeting (Draft)
Welcome, overview, logistics, plan for the week (Möbius, von Steiger)
Local Interstellar Parameters, benchmark values for a local sample of cosmic matter – Where are we and where are we going? (Möbius)
Multi-fluid character of the interaction between the heliosphere and the interstellar medium (Fahr)
of the interstellar neutral gas in the heliosphere and its temporal variations and velocity distribution of He pickup ions in the cone region – Related transport and acceleration processes (Chalov)
Helium parameters from EUV absorption observations (Lallement)
Diagnostics of the focusing cone close to the sun (Raymond)
Scan of the focusing cone in the anti-sunward direction (Vallerga)
Interstellar parameters from pickup ion measurements at 1 AU and from Ulysses (Gloeckler)
Instrumentation for He pickup on Geotail and the Nosomi Mars Probe (Terasawa)
He pickup ion observations with Geotail and Nosomi (Noda)
Interstellar parameters from neutral gas measurements (Witte)
Determination of the photoionization rate for He (McMullin)
General discussion of our approach, response by modelers, split into WGs (all)
Work on data and interpretation in WGs
a) Remote sensing group (Lallement, Raymond, Vallerga + some modelers + 1 ISSI)
b) In-situ group (Gloeckler, Möbius, Witte, McMullin, Terasawa, Noda + some modelers + 1 ISSI)
Use plenary
meetings as needed, but at least every morning and after lunch.
Presentation of WG results and open questions (WGs)
Plans for work at home, publication plans, next meeting (all)
Executive Meeting (subgroup including Möbius, von Steiger, Kallenbach + others)
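As a concrete (and entirely hypothetical) sketch of the "results in a parameterized, spreadsheet-like form" idea above: model output could be stored as a table keyed by the parameter set (ionization rate, inflow speed, LISM temperature), holding normalized densities that are scaled by the absolute density as a free parameter. All numbers below are placeholders, not real model output:

```python
# Illustrative only: normalized He density vs. heliocentric distance [AU]
# for a few parameter sets (ionization rate [1/s], inflow speed [km/s],
# LISM temperature [K]). Real entries would come from the cone models.
density_table = {
    (7e-8, 25.0, 6000.0): {0.5: 1.8, 1.0: 1.3, 2.0: 1.1},
    (7e-8, 22.0, 7000.0): {0.5: 1.6, 1.0: 1.2, 2.0: 1.05},
}

def he_density(params, r_au, n_absolute):
    # Look up the normalized profile for this parameter set and scale it
    # by the absolute density, which stays a simple free parameter.
    return density_table[params][r_au] * n_absolute

print(he_density((7e-8, 25.0, 6000.0), 1.0, 0.015))  # cm^-3
```

A spreadsheet with one row per (parameter set, distance) pair maps directly onto such a table, which would make the on-site "what changes if we vary the ionization rate" comparisons straightforward.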
OPCFW_CODE
xterm loss of focus signals vim to exit input mode? I've been using xterm on a KDE desktop for many years, on one Debian/Ubuntu/Arbian release after another. I use the focus-follows-mouse desktop setting, and ":set mouse=a" in .vimrc. Recent releases introduce a misbehavior I don't know how to track down. It seems to have appeared in the Debian 10->11 (Buster->Bullseye) upgrade. With vim in input mode, when I move the cursor out of the xterm where vim is running (so the window loses focus), there is a beep and vim switches to command mode, as if someone had hit Esc in that window. The old behavior was to wait quietly until focus came back to the vi window, so I could paste in whatever I went to the other window to copy. gvim on the desktop doesn't have this problem. It happens on an old (Buster) desktop talking via ssh to a shell+vim on new (Bullseye) systems. What's going on here? Is the new shell passing along a signal it used to trap? How do I track it down? Try adding the following line to your vimrc (more information: :help xterm-focus-event): set t_fd= t_fe= Vim 8.2.2345 adds support for xterm focus events, which are enabled by default. https://github.com/vim/vim/commit/681fc3fa782e99fe69ed2c83c3e29109d2d61e1a In my environment, when this new feature is enabled and there is a mapping for Esc in insert mode, vim switches to command mode when the window loses focus, as if I had pressed Esc. (I'm not sure whether this is a bug or intended behavior.) It turns out the issue occurs in "vi" from Debian's vim-tiny package, but not with "vi" from vim-basic or vim-gtk. The reason is that vim-tiny installs an /etc/vim/vimrc.tiny with the line "set compatible" not commented out. The others only install /etc/vim/vimrc. This explains the differences in behavior among my installations. Apparently "vi" from vim-tiny requests those focus-change notifications from the terminal, but doesn't know what to do with them.
Invoke the same program from the same package by its other name "vim", and it doesn't show the problem. The quickest fix is to change "set compatible" to "set nocompatible" in /etc/vim/vimrc.tiny. I got three good answers in the Raspberry Pi forums overnight.
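For reference, the suggested workaround can be put in a vimrc as a small guarded block. This is a sketch: the has() guard ties the settings to the patch that introduced focus events, which documents the intent, though it may not be strictly required on older Vims:

```vim
" Disable xterm focus-event reporting (t_fd / t_fe were added in
" Vim 8.2.2345); emptying them stops Vim from requesting the events.
if has('patch-8.2.2345')
  set t_fd=
  set t_fe=
endif
```

With the escape sequences cleared, the terminal never sends FocusLost/FocusGained reports, so focus changes can no longer be mistaken for an Esc keypress.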
STACK_EXCHANGE
The figure below illustrates the separation of levels and the separation of concerns. Please also refer to the RobMoSys Glossary for descriptions of the terms used. The levels indicate abstractions in a robotics system. They can be seen as an analogy to the “ISO/OSI model”, applied to robotics and addressing additional concerns beyond communication. The analogy is interesting because ISO/OSI partitions the communication aspect into different levels of abstraction that then help to discuss and locate contributions. The ISO/OSI separation into levels makes it possible to develop efficient solutions for each level. Establishing such levels for robotics would clearly help robotics experts communicate, as ISO/OSI does in computer science. The levels and concerns can be used to identify and illustrate architectural patterns. The blue line in the figure is an abstract example. An architectural pattern combines several levels and several concerns. For example, the architectural pattern for a software component spans the levels of service, function and execution container. The skill level is an abstraction level that decouples the task level and the service level. The purpose of this abstraction is to enable replacement and composition of components (components providing the same skill) and decoupling (e.g. separation of roles: component developer and behavior developer). Skills provide access to the functionalities realized within components and make them accessible to the task level. Skills coordinate software components through the RobMoSys Software Component Coordination interface. With skill definitions on Tier 2, skills enable task modeling independent of the underlying software component architecture. Skill implementations are bundled with software components and are provided by the component supplier role. A skill defines basic capabilities of a robot.
It is the area of transition between high-level tasks and concrete configurations and parameterizations of components on the service level. A collection of skills is required for the robot to do a certain task. For example, a butler robot requires skills for navigation, object recognition, mobile manipulation, speaking, etc. A component often implements a certain skill, but skills might also be realized by multiple components. The skill level often interfaces between symbolic and subsymbolic representations. A service is a system-level entity that serves as the only access point between components to exchange information at a proper level of abstraction. Services follow a service contract and separate the internal and external view of a component. They describe the functional boundaries between components. Services consist of communication semantics, data structure and additional properties. Components realize services and might depend on the existence of a certain type of service(s) in a later system. See also: Service-based Composition. Example elements on this level: e.g. pthread, socket, FIFO scheduler. An Operating System is, for example, responsible for: Examples for Operating System A (communication) middleware is a software layer between the application and the network stack of the operating system. Communication middlewares are very common in distributed systems, but also for local communication between applications. They provide an abstract interface for communication, independent of the operating system and network stack. There are many distributed middleware systems available. However, they are designed to support as many different styles of programming and as many use-cases as possible. They focus on freedom of choice and, as a result, there is an overwhelming number of ways to implement even a simple two-way communication using one of these general-purpose middleware solutions.
These various options might result in non-interoperable behaviors at the system architecture level. For a component model to serve as a common basis, it is therefore necessary to be independent of a particular middleware. Solid pieces of bare metal that the robot is built of and uses to interact with the physical environment. It includes actuators/sensors and processing units. This document contains material from:
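The skill/service decoupling described above might be sketched roughly as follows. This is a toy illustration, not RobMoSys code: every class and method name (`NavigationComponent`, `NavigateSkill`, `set_goal`) is invented to show the layering, namely that the task level names a skill while the skill coordinates a component through its service interface:

```python
class NavigationComponent:
    # Component level: exposes a service-style interface (names invented).
    def set_goal(self, x, y):
        self.goal = (x, y)
        return "accepted"

class NavigateSkill:
    # Skill level: gives the task level a stable capability name and
    # hides which component (and which component API) realizes it.
    def __init__(self, component):
        self.component = component

    def execute(self, x, y):
        return self.component.set_goal(x, y)

# Task level: depends only on the skill name, so another component
# providing the same "navigate" skill could be swapped in unchanged.
skills = {"navigate": NavigateSkill(NavigationComponent())}
print(skills["navigate"].execute(2.0, 3.5))
```

The separation of roles falls out of the layering: a behavior developer writes against `skills["navigate"]`, while a component supplier is free to replace `NavigationComponent` with any implementation that fulfills the same skill.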
OPCFW_CODE
Review of the Component Palette The VCL provides a large and mature family of interactive and noninteractive components for use in your application. These can be found on the IDE component palette. To make that easier, the component palette is broken into pages, each containing a variety of components. Also, some components only exist on the component palette for specific editions. The Personal Edition contains the core VCL components. The Professional Edition adds the CLX components, the database components (excluding support for Oracle 8i special features), Quick Reports, TeeCharts (graphs), ActionLists and Actions, Office and other OLE Automation Server components, a WebBrowser component, and the Indy Internet components. The Enterprise Edition adds special component features for Oracle, DataSnap components for access to remote data module applications, SOAP, COM, and CORBA-oriented distributed system connection components for use with remote data modules, and Internet Express components to easily use XML with remote data modules. The pages of the component palette conveniently arrange these more than three hundred components into something manageable. Figure 3.2 shows the component palette properties dialog, which is produced when you pick Properties from the pop-up menu you get with a right-button press over the component palette. Figure 3.2 Component palette properties dialog. The pages contain components as follows (you can see the name of the component in a hint if you hover over the component image on the palette): Standard: Basic user interface components, such as TMainMenu, TPanel, TLabel, TEdit, TMemo, and TButton. These are typically standard Windows/Linux user interface elements. Additional: More user-interface components, offering specialized features not available from standard operating system components. 
These include special buttons such as TBitBtn and TSpeedButton (extending TButton with images); TMaskEdit (extending TEdit with the capability to enforce a format for input); TDrawGrid and TStringGrid (which provide a scrollable spreadsheet-like interface object for images and strings, respectively); TImage and TShape, which can be used as graphical elements; and the special Action components. Common Controls and Win32: There are many useful controls here. They include TPageControl, which enables you to create multipage user interfaces; TProgressBar, which you can use to show the progress of some noninteractive processing; and TImageList, which can contain a set of indexable images for sequential display or use with controls such as TBitBtn. Note that the Common Controls page is for CLX applications only and Win32 for Windows VCL applications only. System: A Windows-only page, this contains a variety of specialized Windows controls, including TPaintBox, TMediaPlayer, OLE, and DDE controls. Data Access: This now contains only a relatively small set of components that are used for data access. These include TDataSource, which is used to connect database components to the data-aware controls on the Data Controls page; TClientDataSet, which is used when working with client-server database queries; and a set of XML transformation components. Data Controls: These are data-aware versions of standard user interface controls. They can be hooked to a data source, which makes the controls capable of displaying data set rows or fields. Controls such as TDBGrid, TDBText (a label displaying field content), TDBEdit (allowing editing of the content of a field), and TDBImage (displaying an image stored in a BLOB field) make it very easy to connect your application to databases regardless of how they are implemented. The next four pages offer components that provide similar interfaces, but which use very different methods of accessing data in databases.
At one time, C++Builder provided only one type of database access component: the components now on the BDE page. Now there are several different component sets for database access, which means more choices. Fortunately, all those components link to the same data-aware controls using the TDataSource component on the Data Access page. dbExpress: These are a set of components to interface with the new lightweight client-server database drivers from Borland. Those drivers can work with enterprise databases such as Oracle. DataSnap: These components connect to Remote Data Modules (RDMs). RDMs are used to form the provider tier of a multitier system. TDCOMConnection enables you to use DCOM to connect to the RDM, and pass data back and forth from the components of the RDM as if they were on the local system. Other types of connection are also provided. DataSnap is discussed in Chapter 21, "DataSnap Multitier Connections." BDE: These are a set of components to interface with Borland Database Engine drivers, which allow access to databases both directly through the BDE and indirectly through ODBC. Note that the BDE is currently stable, which means it will not see much, if any, further development. ADO: This set of components applies only to Windows, where it connects to databases through ActiveX Data Objects. Interbase: This set of components connects to the Borland open source Interbase client-server database. Multitier applications other than pure database applications are also easy to program using components. The next few pages are dedicated to those sorts of distributed programs: WebServices: These components are used to provide or work with WebServices-enabled applications, which are covered in Chapter 19, "SOAP and Web Services with BizSnap." InternetExpress, Internet, WebSnap, FastNet: These components work with HTML, HTTP, and other Web protocols. Decision Cube: This powerful database component set enables you to provide fairly high-end analytical capabilities to your users.
OPCFW_CODE
I was wondering if someone has ideas about integrating the TPM with
Recently I started looking into supporting the Secure Device Connection Protocol (SDCP) in libfprint. The general idea is to verify that the fingerprint reader can be trusted, but I initially also imagined that further use-cases like unsealing data in a TPM may be possible (e.g. to retrieve disk encryption keys). However, looking into it more, my current conclusion is that there is little to no advantage to using the TPM. At least not unless one also has a trusted (userspace) program which is capable of signing TPM authorizations. One could easily offload the required parts into a small helper, but that may require ensuring it runs in a trusted environment. Microsoft seems to run the relevant parts as trustlets that are walled off from the rest of the system. That seems sensible to me, but it also means requiring all the infrastructure for execution and signing, and I doubt that is feasible currently. Right now I'll probably go the way of not using the TPM at all. But I am really not an expert for this. So should someone see scenarios where a TPM is actually helpful in this context, I would like to hear about them.
PS: A quick summary of how SDCP works:
* The device has a private ECC key that signs the firmware and ephemeral keys during boot (and is inaccessible afterwards)
* A certificate proves that this key was provisioned in the factory
* The device builds a shared secret s with the host
* The device sends id, HMAC_SHA256(s, "identify" || nonce || id) when the finger "id" was presented.
* The HMAC proves knowledge of the shared secret and authorizes the identification.
I know this is not a TPM doubt, but it's related and some people may have had this issue. Is there some way to make the digest collected through IMA deterministic? I rebooted my system several times, and at the very beginning of system initialization I've noticed that the hash in PCR 10 of the TPM changes.
The number of lines is the same each time, but it seems that the order in which the programs are run always changes. Any ideas for overcoming this issue? I have exactly the same issue as https://superuser.com/questions/1404738/tpm-2-0-hardware-error-da-lockout... TPM2 tools version v1.1.
Tried clearing ownership:
linux-host:~ # tpm2_takeownership -c -L lockpass
ERROR: Clearing Failed! TPM error code: 0x921
Tried clearing dictionary lockout:
linux-host:~ # tpm2_dictionarylockout -c -P lockpass
ERROR: 0x921 Error clearing dictionary lockout.
The error code decode says:
linux-host:~ # tpm2_rc_decode 0x921
description: Error produced by the TPM
format 0 warning code
description: authorizations for objects subject to DA protection are not allowed at this time because the TPM is in DA lockout mode
Can't figure out how to get out of this lockout state. Has someone come across this error before? How to fix it? Thanks. Based on some limited debugging on Windows 10 1809, it appears that Windows does not require the owner auth. Running the application "as administrator" and providing an empty TPM2B_AUTH (with auths.sessionHandle = TPM2_RW_PW) allows me to successfully call functions like... I've searched through the project's issues but didn't find anything on this topic. Our team is porting the Linux implementation of the 'tpm-provider' application interface (wraps tpm2-tss for use with golang) to Windows. On Linux we take ownership of the TPM and specify the owner auth password, which is then used for the tpm2-tss function calls (ex. https://github.com/intel-secl/tpm-provider/blob/64cd53d6fd91b50eb011e1e43...). My understanding is that taking ownership is not needed on Windows, and I've retrieved the "ownerauth" from the Get-Tpm cmdlet. Base64 decoding that value and passing the 20 bytes for owner auth returns 0x9a2 (TPM_RC_BAD_AUTH). What ownerauth value should I pass to tpm2-tss?
Duplicated at https://github.com/tpm2-software/tpm2-tss/issues/1767 I would like to announce tpm2-pkcs11 v1.3.0-RC0, with the following changelog:
1.3.0 - 2020-06-29
* C_CreateObject: Support for CKO_DATA objects only with CKA_PRIVATE set to CK_TRUE. Token defaults to CK_TRUE.
* Fix tests against a simulator that supports RSA 3072 keys
The release can be found here: We are currently discussing deprecation of the esys libgcrypt backend, keeping only openssl and the upcoming mbed-crypto. If you have any thoughts on that topic, please join the discussion at I'd like to highlight the command tcti's inclusion into the TSS: What's really cool: if you have tpm2_send on master post PR 2094, you can use it to run commands on a remote machine. For instance, you can run a tpm command over an SSH tunnel on a remote machine to get the quote. There will be no endianness issues in anything and no worries about how to transmit the data. Another great perk is that if your device node has a too-old version of tpm2-tools, you can just issue a partial update to tpm2_send, or provide some other suitable command. For most devices, something that can read and write a file might be useful; not really sure offhand what that would look like in entirety. tpm2_getrandom -T "cmd:ssh localhost tpm2_send" --hex 4
> -----Original Message-----
> From: Oleksii Moisieiev <Oleksii_Moisieiev(a)epam.com>
> Sent: Thursday, June 18, 2020 1:21 PM
> To: Struk, Tadeusz <tadeusz.struk(a)intel.com>
> Cc: tpm2(a)lists.01.org
> Subject: [tpm2] Re: Sharing TPM 2.0 between containers with access policy
> Hello Tadeusz.
> Thank you for the answer.
> I've done some investigation and found that passing the device /dev/tpmrm0 to the
> containers will do the job. Also the problem with tpm_clear can be solved by
> restricting owner access to the TPM. So each container can use keys in the TPM but
> must talk to the owner if any change is needed.
> I have another question: According to the documentation, the TPM has a unique
> endorsement key, embedded into the device during manufacturing. So each
> module can be identified by this key.
> How can I retrieve this key embedded in the TPM module?
Only the endorsement hierarchy primary seed (EPS) is embedded at manufacturing time. So calls to tpm2_createprimary with the proper inputs will yield the same key every time. Calls to tpm2_createek should create this for you. The call to tpm2_getekcertificate should give you the manufacturer certificate. Details on this process can be found in this spec:
> Best regards,
> From: Tadeusz Struk <tadeusz.struk(a)intel.com>
> Sent: Friday, June 5, 2020 8:16 PM
> To: Oleksii Moisieiev <Oleksii_Moisieiev(a)epam.com>; tpm2(a)lists.01.org
> Subject: Re: [tpm2] Sharing TPM 2.0 between containers with access policy
> On 6/5/20 12:52 AM, Oleksii Moisieiev wrote:
> > Hello all,
> > I have an embedded device, with a Docker-container-based architecture.
> > This device is operated by software installed in separate containers.
> > I would like to share TPM 2.0 access between these containers with the
> > following restrictions:
> > 1) Forbid the Clear TPM command for the containers;
> > 2) Each container should have access only to the set of keys it owns.
> > 3) Each container can create keys, but not overwrite existing keys
> > that are not related to this container.
> > According to the "TCG TSS 2.0 TAB and Resource Manager Specification",
> > the TPM Resource Manager doesn't implement access restrictions right now.
> I think you could run a separate instance of the RM per container to get
> 2 & 3. As for 1, this would need to be prevented at the platform configuration level,
> like in the BIOS or equivalent.
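On the IMA determinism question earlier in the thread: PCR 10 is not a plain hash over the measurement list but an extend chain, so the final value depends on the order in which measurements arrive. A minimal sketch of the extend operation for a SHA-256 PCR bank shows why a nondeterministic boot order yields a different digest each time:

```python
import hashlib

def pcr_extend(pcr, measurement):
    # TPM extend semantics: new PCR = H(old PCR || measurement digest)
    return hashlib.sha256(pcr + measurement).digest()

pcr = bytes(32)  # PCR banks start out all zeros
a = hashlib.sha256(b"program-A").digest()
b = hashlib.sha256(b"program-B").digest()

# Same two measurements, extended in different orders.
order_ab = pcr_extend(pcr_extend(pcr, a), b)
order_ba = pcr_extend(pcr_extend(pcr, b), a)

# Different order -> different final PCR value, even though the set of
# measured programs is identical.
print(order_ab != order_ba)  # True
```

So to get a reproducible PCR 10 value, the measurement order itself has to be made deterministic (or the verifier has to replay the IMA event log entry by entry rather than compare a single final digest).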
OPCFW_CODE
How to view your CCTV cameras online remotely? Sannce. Obviously you can use Remote Desktop or a similar service to connect to any Windows computer and actually see the desktop and do anything that you would do locally, but the PsTools utilities allow you to do many tasks from the command line, or better yet, from a script that you can re-use later. PowerShell Remoting lets you run PowerShell commands or access full PowerShell sessions on remote Windows systems. It's similar to SSH for accessing remote terminals on other operating systems. Re-pairing to Foxtel iQ3 Bluetooth remote - Foxtel Community. As a photographer, you'll want to control your camera remotely on occasion. For example, if you're shooting landscapes by the sea and don't want to get your feet wet, taking a group portrait you're also in, playing around with self-portraits, or making a timelapse, remote control is essential. 16/01/2011 · Learn how to configure your Windows 7 computer to allow remote access, plus how to configure and use Remote Desktop Connection (RDC) to connect to a Windows 7 computer remotely. 5 Ways to Remotely Shutdown a Computer - wikiHow. Local printers connect to a computer via a port, such as a Universal Serial Bus, while remote printers connect through wireless technology or via a print server: a computer, router, or other device. DDNS allows you to remotely connect to your surveillance DVR over the internet if your DVR is connected to a cable or DSL internet connection. It is recommended that you purchase a router that supports DDNS and connect the router to your cable or DSL modem. There are many vendors that provide DDNS service.
Follow the setup instructions in this section to create and configure an account … Hik-connect Setup Guide CCTV HD Authentication: As a user, when one connect to a MySQL server, the identity is determined by the host from which one connect and the user name one specify. Authorization : When one issue requests after connecting, the system grants privileges according to the identity and what one want to do . how to connect to hp laserjet p1102w Use System Center Configuration Manager remote connection profiles to allow your users to remotely connect to work computers when they are not connected to the domain or if their personal computers are connected over the Internet. How long can it take? How to connect MYSQL remotely DigitalOcean - Using psExec to Open a Remote Command Window – System - How to open a connection and maintain the OSS support user - Connect to Windows 8 Remotely Using PowerShell Petri - Raijin User Guide NCI Help - Opus - NCI Confluence Pbs Systems How To Connect Remotely Chrome Remote Desktop lets you connect computers for remote access. Once connected to a remote system, you can view the screen, type, move the mouse, or send a key combination, such as Ctrl-Alt-Del. - You can control an Echo device remotely by choosing it from the Alexa app. You can talk to Alexa from the iOS or Android app; iPhone, iPad, and Android users can interact with Alexa via an app - DDNS allows you to remotely connect to your surveillance DVR over the internet if your DVR is connected to a cable or DSL internet connection. It is recommended that you purchase a router that supports DDNS and connect the router to your cable or DSL modem. There are many vendors that provide DDNS service. Follow the setup instructions in this section to create and configure an account … - Select the Allow Remote Assistance connections to this computer check box, and then select OK. 
Now, search for remote assistance again and select Invite someone to connect to your PC and help you, or offer to help someone else. - When engines are started using the PBS batch system (or other qsub systems, such as SGE). When the controller is started on localhost and the engines are started on remote nodes using ssh . When engines are started using the Windows HPC Server batch system.
Quoted from jawjaw: Why would Stern just turn over its software, that they invested heavily in, to the world, including their competition? So a few fans could tinker with their games, put them on route, and have them malfunction or do things they were not designed to do? That's entirely ridiculous. It's not like the rules code (the thing people are actually interested in improving) is at all useful without a Stern machine. And the backend code hasn't changed much since the 90s. There's nothing fancy or secret about it, so nothing to steal. Besides, tens of thousands of companies invest tons of money in their open source projects. Open sourcing something isn't giving it away; it doesn't work like that. Even if some op was stupid enough to put untested custom code on their route machine, Stern could easily add restrictions like free play to modified code. The whole 'oh no, someone's going to break the game' thing is overblown anyway. The backend has tons of safety checks to make sure nothing gets locked on. Quoted from taylor34: I don't understand why anyone would do that strictly on price. To learn, yes; for fun, yes; for price? That seems ridiculous. You could accomplish so much on your custom game in the time it would take you to create your own board. Not to mention debugging time of that board, doing OS stuff, etc., when all of that has already been done with the P-ROC. If it was a thankless chore to design and create the boards then yeah, I'd totally have just used existing ones (although I would have used OPP, not shelled out for the P-ROC), but for me it isn't. I learned a ton and I enjoyed most aspects of it. All part of the experience. When I show my machine to someone and go 'I built this' and they ask what I mean, I can say 'everything': boards, code, playfield, wiring, etc. Also, I don't like the idea of running a pinball machine on top of Windows/Linux, let alone needing a whole small desktop in there. Slow to boot and too many things to go wrong.
Quoted from rosh: What game did you build? Is there a thread or website about it? Any info on your board set that you made? Picture in my avatar for reference. I grabbed a video at one point: I never publicly announced it or made a thread or anything. I had some posts about it on my blog, but a combination of server and backup drive failure, plus some mysteriously missing other backups, means that blog is lost. It never got any interest before, so I didn't put a lot of time into documenting things, but if someone wants to know anything about it I'm happy to expound. I've gone into detail via a few PMs to people in the past.... I wanted to bring the game to Pintastic the first year, but due to a lot of connector problems (lesson learned: buy a good crimper, use locking housings, and don't mount boards upside down under the playfield) it isn't reliable enough to move. As long as it sits in the corner, though, it's been pretty reliable, so I've been content to let it sit and try to make my next game solid.
Here, I explain what MySQL "views" are and how to use them. MySQL provides us with the ability to create views. A view is defined as a stored query that, when invoked, produces a result set. Some folk refer to views as "virtual tables". Clear as mud? Let's try again. What is a View? A view is a query that you save to the database. You can then run it later simply by calling that view (rather than writing out the query again). The view could consist of a complex query, but it will present the results as though it were a table. Therefore, you can query the view as though it were a table. For example, you could have a complex query that selects data from three different tables. You could either type this complex query out every time you need to run it, or you could save the query as a view. Once it has been saved as a view, you can then run a simple SELECT statement to return the results of the complex query. But of course, you could also write a complex query against the view if need be. Create a View Creating a view is very simple. You simply precede your query with one line of code and run it. The view will immediately be created in your database. To create a view, type the following statement, followed by the query: Replace view_name with whatever name you'd like to use for the view. If we run the following code against the FruitShop database: We now see a view called vFruitInventory listed under Views (you may need to click the Refresh button for the SCHEMAS menu first). It's a good idea to think of a naming convention for your views (as with any other database object) and stick to it. Many developers prefix their view names with vw_ so that it is easier to distinguish views from tables in their queries. However, other developers disagree with this convention and prefer their table and view names to be interchangeable. Querying a View Now we can query the view just like we'd query a table: Of course, we can use a more specific query too.
For example, this one that selects only those records where the inventory is greater than or less than 10: But we can't query columns that aren't referenced in the view (even if they are in the underlying tables that the view queries). For example, we can query the Fruit table like this: But we can't query the above vFruitInventory view like this: This is because the view doesn't return the FruitId column. We specified the exact columns in the view and those are all that are returned. As mentioned, the result set of the view is just like a table and some like to call it a "virtual table". If the "table" doesn't include those columns, you can't query them. Rather than being a limitation, this is actually a feature of views. This feature means that we can grant users access to some columns of a table but not others (via the view). In other words, we can grant a user access to a view without granting that user access to the underlying tables that the view accesses. Some tables might store sensitive information that the user isn't allowed to access. But the same tables might also store non-sensitive information that they need to access. What to do? Create a view! And that view can select only the non-sensitive information from those tables. Modifying a View Here are two different methods to modify your view. Option 1: Use the ALTER VIEW Statement You can modify a view by using the ALTER VIEW statement. Like this: Replace view_name with the name of the view that you'd like to alter. Let's add the Fruit.FruitId field to the view: Now, when we try to return the FruitId field in our queries we will get results. But note that we can't try to access this field as Fruit.FruitId. We can only access it as FruitId. And this is how it should be. After all, the view is a "virtual table" and we have no need to know the structure of the tables that it queries. Option 2: Use CREATE OR REPLACE Note that the view must exist before you run the ALTER VIEW statement. 
If it doesn't exist, you'll receive an error. You can avoid this issue by using a CREATE OR REPLACE statement. This will create the view if it doesn't exist, or replace it if it does. So we could've created the above view like this: And then we could update it by using the same CREATE OR REPLACE statement, but just modifying the definition (for example, adding in the FruitId field). Dropping a View You can drop a view by using the DROP VIEW statement. Like this: The above statement will remove the view called vFruitInventory. Dropping Multiple Views You can drop multiple views using the same DROP VIEW statement. Just separate each view name with a comma. Like this: IF EXISTS Clause You can also use the IF EXISTS clause to prevent an error from occurring if a view doesn't exist:
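The article's inline statements were lost in extraction, so here is a self-contained sketch of the workflow it walks through. The SQL strings follow the standard pattern (`CREATE VIEW view_name AS <query>`, `DROP VIEW IF EXISTS`); the table and view names come from the article's FruitShop example, while the exact column list is an assumption. Python's sqlite3 module is used only so the example runs without a MySQL server; note that sqlite lacks MySQL's `ALTER VIEW` and `CREATE OR REPLACE VIEW`.

```python
# Views demo: sqlite3 stands in for MySQL so the sketch is runnable anywhere.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Underlying table: a surrogate key plus the inventory data.
cur.execute("CREATE TABLE Fruit (FruitId INTEGER PRIMARY KEY, "
            "FruitName TEXT, Inventory INTEGER)")
cur.executemany("INSERT INTO Fruit (FruitName, Inventory) VALUES (?, ?)",
                [("Apple", 12), ("Banana", 5), ("Cherry", 30)])

# CREATE VIEW view_name AS <query>: the saved query becomes a "virtual table".
cur.execute("CREATE VIEW vFruitInventory AS "
            "SELECT FruitName, Inventory FROM Fruit")

# Query the view exactly like a table, WHERE clause and all.
rows = cur.execute("SELECT FruitName FROM vFruitInventory "
                   "WHERE Inventory > 10 ORDER BY FruitName").fetchall()
print(rows)  # [('Apple',), ('Cherry',)]

# Columns not listed in the view are invisible through it: this is the
# access-control feature the article describes.
try:
    cur.execute("SELECT FruitId FROM vFruitInventory")
except sqlite3.OperationalError:
    print("FruitId is not exposed by the view")

# DROP VIEW IF EXISTS avoids an error when the view is absent.
cur.execute("DROP VIEW IF EXISTS vFruitInventory")
```

In MySQL itself you would then modify the view with `ALTER VIEW vFruitInventory AS SELECT ...` or `CREATE OR REPLACE VIEW vFruitInventory AS SELECT ...`, as the article goes on to explain.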
Over the last couple of days on the SSWUG Newsletter there has been some great discussion on SAN solutions and SQL Server. (If you do not get the newsletter, you can sign up at no charge here.) I have included today’s editorial, but I would recommend that you check out the discussion. In my opinion this is one of the more important topics that have been talked about over the last couple of years. Technology, including hardware, keeps getting better and better. But I have been in environments where companies have adopted new technology but have not invested in the training for the staff that was expected to run it. The end result is: who does this responsibility fall on? If it’s a storage device and your database is on it, you cannot just pass the buck and say it’s the SAN admin’s fault. You have to own up and find the source. Jeremy had a great comment that I thought fit in well with mine. I think there is a great need for more DBAs out there to adopt the approach that Brent Ozar took. He looked at the new storage that was going to host his databases, studied it, became an expert, and is now trying to get the rest of us to catch up. When I first talked to Brent I was excited to see that he was talking about the subject of databases on SANs. I have been to many conferences and have listened to a few speakers try to capture this subject. None of them did it as well as Brent. SAN Solutions, SQL Server and Many Misconceptions… From our own Chris Shaw: “I think the comments that you have received to this point are great. There is a huge need for SAN administrators to understand SQL Server, or for SQL Server DBAs to start learning SANs. Either way, in the past few years I have seen this as a huge gap. When I was working with a SAN not too long ago, the SAN administrator would create a RAID and then many LUNs on that RAID. These LUNs would be owned by different servers.
In some cases this would not have been a big deal if the servers had solid busy times and times when there was no activity; then you could put 2 machines on the same RAID no matter the flavor of RAID. This company clearly did not understand that if you have 10 spindles on a server and then you upgrade it to a SAN where it may have 12 spindles, but they have to be shared with another server that used to have 10 spindles… well, what was 20 spindles for 2 servers is now 12 spindles for 2 servers. The end result was performance problems like we had never seen before. We had the SAN vendor come in and give us a configuration recommendation, and when they left all they did was move the LUNs around. The spindles were still shared.“ Jeremy: “This is a great topic. Steven’s comment about using the word spindle has been very effective for me as well. Also, helping to educate the SAN admins on why I request different RAID configurations for different files has been very effective, by explaining that a RAID 1+0 configuration is helpful for logs due to their contiguous writes, versus having data on RAID 5 due to the decreased cost (fewer spindles needed) and the random read/write nature of data files. David’s comment is also very interesting. I’ve been part of many SAN implementations over the years and I’ve only once seen performance degrade when moving to a SAN. This was due to the SAN administrators creating one huge RAID group (RAID 5) and putting Exchange, databases, and file shares all together. At the end of the day, I believe that the responsibility for this to be effective rests on the owner/manager of the database system. Typically that’s the DBA. If a DBA doesn’t understand the topology of storage, be it NAS, SAN, SCSI, Fibre, etc., it is their job to get up to speed on all of it. From a hardware performance standpoint it is nearly always the bottleneck, in my experience. One quick lesson learned (the hard way).
Even the best SAN guys don’t always understand the underpinnings of the technology and the metrics provided by the SAN vendor. The experience that I’m referring to had to do with the SAN monitoring software showing performance capacity at 30-40 percent, meaning that the SAN was reporting that it was only busy that percentage of the time; but from the OS/RDBMS side, I was noticing queue lengths and waits. It took over a week for me to convince them to give me a login to the monitoring software. Once I had it, I was able to find a statistic that explained everything: busy time per disk. On one of the RAID groups, the group was reporting 40% busy, yet each disk was reporting > 80% busy, meaning that the disks were thrashing and killing performance. There are many other stories out there which are similar, and some new ones with NAS. However, at the end of the day, SANs are simply awesome for databases.“
In this video I answer the question: How can I use Nextion displays with boards like the Arduino UNO, the Arduino Mini, or the ESP8266? Download Serial client...

The Uno 1.0 pinout added SDA and SCL pins near the AREF pin, and two other new pins near the RESET pin, including the IOREF, which allows shields to adapt to the voltage provided by the board. In future, shields will be compatible both with boards that use the AVR, which operates at 5V, and with the Arduino Due, which operates at 3.3V.

May 15, 2016: Connecting an LCD to the Arduino UNO. Before wiring the LCD screen to your Arduino UNO or Genuino board, we suggest soldering a pin header strip to the 14 (or 16) pin connector of the LCD screen, as you can see in the image above. To wire your LCD screen to your board, connect the following pins: LCD RS pin to digital pin 12.

This is a 2.8" Arduino touch screen tutorial with the ILI9325 driver. Is this Arduino touch display a good option for your Arduino projects? Keep watching...

The Arduino UNO WiFi Rev.2 is the easiest point of entry to basic IoT with the standard form factor of the UNO family. Whether you are looking at building a sensor network connected to your office or home router, or if you want to create a BLE device sending data to a cellphone, the Arduino UNO WiFi Rev.2 is your one-stop solution for many of the basic IoT application scenarios.

STORE.ARDUINO.CC/UNO-REV3 [pinout diagram; the recoverable details: VIN accepts 6-20 V input to the board, and the maximum current per +3.3V pin is 50 mA]

The Iteaduino Uno is a microcontroller board based on the Arduino UNO. It has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz crystal oscillator, a USB connection, a power jack, an ICSP header, and a reset button.

A CAN shield for the Uno: OBD-II and CAN standard pinout selectable; changeable chip select (CS) pin for the TF card slot; changeable INT pin; screw terminal to easily connect CAN_H and CAN_L; Arduino Uno pin headers; 2 Grove connectors (I2C and UART); SPI interface up to 10 MHz.

With the help of the Arduino Stopwatch you can set the time without making any change in the code, and it actually lets you know when it reaches zero. Connections are between the Arduino and a 16x2 LCD screen: LCD RS pin to digital pin 12. LCD Enable pin to digital pin 11. LCD D4 pin to digital pin 5. LCD D5 pin to digital pin 4. LCD D6 pin ...

LinkIt 7697 supports the core Arduino APIs, including pinMode, digitalRead, and digitalWrite. These APIs allow you to control or read the high/low state of digital pins. However, note that the pin layout is different from the Arduino Uno; please refer to the pinout diagram for detailed pin definitions. Blink Example.

ArduCAM has released an ESP8266-based Arduino board for ArduCAM mini camera modules while keeping the same form factor and pinout as the standard Arduino UNO R3 board. The highlight of this ESP8266 board is that it mates well with the ArduCAM mini 2MP and 5MP camera modules, supports...

Oct 12, 2015: Connect your Arduino Uno to your computer via USB as normal. Open up the Arduino IDE software. (If you don't have it, you can download it here.) Copy and paste the code from down below into your software. Compile the code and you shouldn't get any errors. Project Code.

The Pro Mini 3.3V runs at 8MHz, half the speed of an Arduino Uno. We put a slower resonator on the Mini to guarantee safe operation of the ATmega. That said, don't let the slower speed scare you away from using the Mini; 8MHz is still plenty fast, and the Mini will still be capable of controlling almost any project the Arduino Uno can.

Arduino CNC Shield hardware. Order now! The Arduino CNC shield v3 is for driving a CNC machine, engraving machine, or 3D printer. It can drive up to 4 stepper motors and needs only 2 I/O ports per motor. Specs: GRBL compatible; 4 axes (X, Y, Z, A), where the last can be used as a standalone axis or a copy of X, Y, or Z.

The UNO R3 is the most used and documented board of the whole Arduino family. It is a microcontroller board based on the ATmega328P. It has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz quartz crystal, a USB connection, a power jack, an ICSP header, and a reset button.

Oct 28, 2018: The Arduino Core pinout for the ESP12F is not clear to me: https://goo.gl/6usmUY The ESP12 has 16 pins and the ESP12F has 22. It seems that the ESP12F has specific pins for ISP and 2 more pins (GPIO9/10). I would like to create a DIY board for my project; can I use the extra pins, or can't the Arduino Core use them? Thx.
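The 16x2 LCD wiring above maps directly onto the stock Arduino LiquidCrystal library. A minimal sketch for that hookup follows; RS=12, EN=11, D4=5, and D5=4 come from the text, while the D6=3 and D7=2 assignments are assumed (the source list is cut off) and follow the common hello-world wiring. This targets hardware, so it only runs on an actual board.

```cpp
// 16x2 character LCD on an Arduino Uno, wired as described above.
// NOTE: D6=3 and D7=2 are assumed pins; the source text is truncated.
#include <LiquidCrystal.h>

// LiquidCrystal(rs, enable, d4, d5, d6, d7)
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

void setup() {
  lcd.begin(16, 2);            // 16 columns, 2 rows
  lcd.print("hello, world");   // first row
}

void loop() {
  lcd.setCursor(0, 1);         // column 0, second row
  lcd.print(millis() / 1000);  // seconds since reset
}
```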
enclosure for SATA 2? I'm wondering what is a good enclosure for HDDs. Is a passive aluminum enclosure still okay? How do I know if it will support SATA 2 speeds? If I go with eSATA and a 'Green' drive for storage, I'll want the fastest speed I'm allowed, right? I have a Vantec NexStar 3, currently, with a drive. It gets a bit warm but I think it's okay. Somebody on Newegg says those enclosures don't support Sata II. Huh? Anyway, I hope someone can recommend some. I will want to get one when I buy a drive. I thought if I go with a 'green' drive, I don't need an enclosure with a fan. If I bought a standard HDD at 7200rpm, I might want one with a fan? My HDD in the Vantec I have now is a standard HDD but I think the temps are acceptable....hopefully... :biggrin: Which enclosures do you recommend? You really don't need SATA 3Gbps support in an external enclosure anyways as the drive won't hit those speeds. I've used plenty of the Nexstars w/o any issues. I'm currently using the Nexstar with a regular 500 gig hard drive using esata, and it works great. I keep forgetting it's esata until I go to use a USB2 external ... big speed difference obviously. Okay, sounds good. Now, I have to decide on a drive. Not sure I want to go with a WD EARS drive or not. I guess I was really asking about an enclosure because my drive choices are these: WD15EARS or Samsung F2 EcoGreen (any capacity) or Samsung F3 1TB (a 7200rpm drive) I like this one, although I never used it personally: Antec Veris MX-1 Actively Cooled External 3.5IN Hard Drive Enclosure USB2.0 eSATA Black Reg price is $59.99 tho, could be PMed to around $49.99 I have a Seagate Freeagent 1TB USB that I MIGHT upgrade to a MX1 enclosure, if transfer speeds from USB to eSATA are worth it. The EARS has the bigger cache than the EADS, but a green is a data drive anyway. I picked up a couple of the 2TB in the nicx sale last week, mounted one internal, swapped it out with my 150GB raptor data drive, holds so much more eh. 
The problem with external enclosures is always heat; I'm not sure I'd trust a passive cooling system. For a sealed-type external, I'd want some fannage. What I've had for a few months is one of the dual-drive Thermaltake BlacX's, which can be used over USB or eSATA. However, I rarely use it; internal is much healthier for the drives. As mentioned above, eSATA is the way to go, much faster than the USB interface. And, as an added bonus, when your drive appears toasted, diagnostics and recovery progs run much better when the connection is eSATA. No extra level of translation back and forth through the USB.
Sometimes you have to write code to do some ad-hoc things in order to make programs run in a variety of environments, but as time goes by, it can result in a tragicomical situation where ad-hoc features are built on other ad-hoc features. It is easy to identify that kind of problem, but in many cases, nobody can fix it. I wanted to share my own story here because I had such an experience. I'm writing a linker called lld as part of my work. Linkers are programs that concatenate compiler-generated binary files to create final executables or DLLs. I'd guess that many people don't even know of their existence, but at the end of every build, linkers are always run to generate the final outputs. lld is becoming popular mainly because it is a few times faster than other linkers, which makes overall build times shorter. Some operating systems, including FreeBSD, are trying to switch to lld. Some large-scale programs such as Chromium or Firefox are trying to switch to lld individually, too. For individual programs, compatibility issues are not that problematic because we can fix either the linker or the target program. More difficult compatibility issues are likely to occur when you are adopting lld as part of the standard build system of an operating system, which includes numerous, wide-ranging programs. The issue described here occurred in FreeBSD. If you've ever built a program on Unix, I think you've had the experience of running the "./configure" script. Since Unix has various flavors such as Linux, FreeBSD, macOS, etc., many programs come with a script to gather information about the system environment and create a build file, so that the successive "make" command can build the program accordingly. The script, for example, checks whether or not the "strnlen" function is available in a build environment by creating a source file containing that function call and trying to compile it.
The problem that the people working on FreeBSD found is that if they tried to run configure in an environment in which lld is installed as the standard linker, lld would be determined by configure as if it were an ancient Unix linker like 30 years ago. Further investigation revealed that the configure script runs the linker in the background with the --help option, and determines it as a modern linker only when the displayed help message contains "GNU" or "with BFD". What this means is that only GNU linkers are considered modern in the environment, and all the other linkers are considered terribly outdated. This problem is a bit troubling. GNU linkers are fine because they contain something like "GNU ld 2.23" in their help message, but since we have nothing to do with the GNU project, our help message naturally does not contain "GNU". There were two possible solutions. One was to fix the configure script. However, the configure scripts are generated by a set of tools named autotools, and the last release of autotools was a few years back, so even if we fixed the autotools, it would be hard to expect that an improved version would be released soon and become widely used in the near future. Also, since we cannot update the existing configure scripts that are already generated and distributed as part of other programs, even if we improve autoconf, it would take many years until the problem would be resolved. Therefore, even though this may have been the "right" solution, it was not realistic. The solution we ended up choosing was to add the string "compatible with GNU linkers" to our linker's help message. This string is not too odd for humans to understand, and since it contains the string "GNU", it is also friendly to configure. It is not a beautiful solution. It supports the erroneous assumption rather than correcting it. But it was practical. When I was fixing the problem, I was thinking about the User-Agent string of the web browser. 
HTTP requests sent by browsers contain a browser identification string in the User-Agent field. It has been repeated in history that every time some browser made improvements and started using some new name in User-Agent, other browsers would catch up and add the same string to their User-Agent strings. As a result, all browsers now identify themselves as "Mozilla/5.0". Because a myriad of websites were thinking that non-Mozilla/5.0 requests were coming from ancient browsers like the 90's and sending back very shabby pages, all browsers had no choice other than pretending to be Mozilla/5.0. Both the problem we faced and the solution we adopted were the same as the User-Agent problem of the web browser. If you write a program to deal with other programs that have already spread around the world, these types of compatibility issues tend to arise, and perhaps nobody is able to solve them cleanly. As a result, the browser still includes "Mozilla/5.0" in every request which is almost pointless now, and our linker prints out a slightly strange string in the help message. This sad situation is simultaneously a bit funny to me. I think this kind of workaround is part of the reality that is inevitable in real software engineering. Rui Ueyama — December 2017
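The probe described above can be sketched in a few lines of shell. The help strings here are illustrative stand-ins (lld's banner really does say "compatible with GNU linkers", but the version numbers are made up):

```shell
# Simulate the autoconf-style check: grep the linker's --help output
# for "GNU" or "with BFD" and classify the linker accordingly.
probe() {
  if printf '%s\n' "$1" | grep -Eq 'GNU|with BFD'; then
    echo "modern linker"
  else
    echo "ancient linker"
  fi
}

probe "GNU ld 2.23"                            # prints: modern linker
probe "LLD 4.0 (compatible with GNU linkers)"  # prints: modern linker
probe "SomeLinker 1.0"                         # prints: ancient linker
```

Any linker whose help text lacks those substrings gets the "ancient" treatment, which is exactly the trap lld fell into before adding the compatibility string.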
Acceleration in Relativity. A Critical Introduction Just as the first book came out, and just before the first conference I attended in 2008, there was a meeting of Time and Universe (tau) on the theme of the Unruh effect in Montreal, at the same venue as the spacetime ontology conference. Unruh himself was present. It was immediately obvious that there was a parallel between attempts to build a particle interpretation of standard quantum field theory ‘for’ accelerating observers and attempts to redefine EM radiation for accelerating observers. (In the classic case, the observers have eternal uniform acceleration in flat spacetime, so they are examples of Killing observers.) But when I wrote my first book, I had not heard about the Unruh effect. At first glance, it seemed to me that the thesis of my book would also write off such attempts as physically pointless. The proposed particle interpretation (Rindler particles) depends on a choice of accelerating frame, but according to me, there are no physically natural accelerating frames. (The Unruh effect is still there, of course: when the Unruh-DeWitt detector accelerates uniformly through the standard Minkowski QFT vacuum, it registers, so it no longer functions as a Minkowski particle detector, since there are none in that field state. This is an interesting prediction about the standard QFT vacuum.) So the thesis in my first book would imply likewise that it was a very silly idea to try to attribute this or that point of view to accelerating observers. I was convinced deep down that I must be wrong, given the consensus that seemed to be reflected in that meeting. But there I was in my first ‘public’ appearance, unknown, with no credentials, and Angela Lahee had come from Springer hoping that I would promote my book! Luckily nobody realised the connection so I had time to go home to our peaceful backwater in the Pyrenees and understand the basics of the Unruh effect. 
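For reference, the standard result behind this discussion (a textbook formula, not taken from the text above) is the Unruh temperature: a detector with constant proper acceleration a moving through the Minkowski vacuum responds as if immersed in a thermal bath at

```latex
T_{\mathrm{U}} = \frac{\hbar\, a}{2\pi\, c\, k_{\mathrm{B}}}
```

which works out to roughly 4 x 10^-21 K per m/s^2 of acceleration, hence utterly negligible for everyday accelerations.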
Ten months later, I felt reasonably sure my criticisms would carry over. The main arguments are presented qualitatively in the pdf file that summarises my talk at the Bad Honnef conference. But anyone familiar with the Unruh-DeWitt detector in the context of the Unruh effect will easily see the connection with my remarks above about accelerating detectors. After Bad Honnef, and noting that nobody came to me with any major contradictions about the import of that talk, I set about writing a third book. The aim was twofold: first to provide a straightforward, physically comprehensible mathematical description (no fancy mathematical structures) of all semi-Euclidean frames adapted to accelerating observers in flat spacetime, with a complete classification of all rigid motions and a discussion of their relevance to such frames; and second to criticise what seem to me to be naive interpretations of physical quantities expressed relative to such frames. A lot of other acceleration-related ideas got into the book, including criticism of the notion of Killing observer in general relativity and criticism of the idea that general relativity somehow explains inertia, along with more discussion of the way the existence of self-forces can in fact throw light on the notion of inertia. There was also an in-depth investigation of the clock and ruler hypotheses, as I understand them, in the light of Bell’s famous paper How to Teach Special Relativity, and some comments on Mashhoon’s papers, and in particular, his locality hypothesis, insofar as I understand it. The title I chose was Acceleration in Relativity. A Critical Introduction. It was intended for anybody who had been through a first course in general relativity and understood it. Of the two referees found in extremis by Springer, one considered the subject interesting and the arguments cogent but felt that physicists would not be impressed by it (also that the notation was not modern enough). 
The other rejected it, we may say ‘out of hand’, as nonsense, dwelling heavily on my lack of credentials, which was rather disappointing. It implies that, without credentials, we are condemned always to remain without credentials! I make the whole of this book available here in the original portable document format (1.6 MB). I am currently producing a shorter, sharper version, although not necessarily in any attempt to get it ‘peer reviewed’ by a publisher. If people are interested in it through this website and if I can distribute it even to a handful of such people in that way, it will be just as useful to me, since the sole aim, as always, is to get considered and rational criticism of the ideas. Note that Chap. 10 should be replaced by the pdf available here. As I said above, it contains a down-to-earth physically comprehensible discussion of semi-Euclidean frames (with Euclidean constant time hypersurfaces) adapted to the motion of accelerating observers in flat spacetime, together with a complete discussion of the relevance of rigid motion, a complete classification of rigid motions in flat spacetime, and a few other relevant curiosities relating to frames in curved spacetimes. Apart from that, the main critical thesis is that, without introducing an acceleration symmetry into our theories of non-gravitational physics, there is little point trying to imagine what accelerating observers will think about their observations. All that matters is the way accelerating detectors will interact with fields, and this our theories tell us perfectly well without ever mentioning observers. So not to mince words, this implies that a considerable part of what Unruh theorists do is a pure waste of time from the point of view of physics, although the mathematics is quite wonderful, like all mathematics.
Computer Type: Laptop
System Manufacturer/Model Number: Dell Inspiron 7559
OS: Windows 10 Edu
CPU: i5
Memory: 16 GB
Graphics Card: Intel integrated graphics and an Nvidia GeForce GTX 960M with DP 1.2
Sound Card: RealTek, but I use Bluetooth to redirect to Alexa.
Monitor(s) Displays: native plus external ACER KA270H 27"
Screen Resolution: 1920x1080 (each)
Keyboard: GarageMouse KVM and a cheap USB device made in China for Amazon
Mouse: GarageMouse KVM and a cheap IR device made in China for Gateway
Hard Drives: 256 GB SSD plus numerous WD Red or Purple multi-TB USB drives and WD Passports. Used to buy a lot of Seagates, but tossed them all the second time I got unrecoverable disc corruption.
Internet Speed: 50 Mbps down (allegedly; depends who tests).
Browser: Chrome mostly, some IE, rarely Edge or Firefox.
Antivirus: Defender plus Malwarebytes Premium plus Kaspersky
Other Info: Win 10 Edu on Dell Inspiron with external monitor and drives, sitting next to a Toshiba notebook running Win 7 Pro with a similar external monitor and drives. Navigation across them using Microsoft Mouse without Borders. Running both Ethernet AND Wi-Fi: Ethernet out to the ISP, and Wi-Fi for the GarageMouse KVM and access to devices and shares on the Wi-Fi.
Windows 10 Forums is an independent web site and has not been authorized, sponsored, or otherwise approved by Microsoft Corporation.
DNS AND ROUTER SETTINGS FOR ADSL

Author: Mega Byte
My home router will not set up a DNS service correctly. I wish to set up a web server using Apache, and my router is not allowing outside connections even though it has port forwarding. But the router I have seen can be used for this: it fully supports No-IP.com, port forwarding, DNS, NAT, all sorts. It's great. I am going to save up and buy the DSL-502T.
User type: Standard User
Date: July 3, 2006, 10:30 a.m.

Author: Uber_deathworld
Query: Why do you need a DNS server if you have a router, which means you're on broadband or ADSL? You should check your settings on the router and configure the DNS server to what you want it to be. Just remember that your DNS server would be for the LAN, not the WAN, as your ISP does your external DNS. Although if you can't get any incoming data, there's one easy solution: follow the data. If you can get the Internet, then you can accept incoming connections; maybe the port required is closed. I don't know; I can't really say much without having a look at it myself, which I would do for free of course, but this is a company website so I am not offering my services, otherwise I might get banned from these forums for making them lose one customer. Sorry megabyte.
User type: Standard User
Date: July 7, 2006, 8:53 p.m.

Author: robvdl
I don't think he should need a DNS server either; to my knowledge, all that should really be required is port forwarding to be enabled in the router. However, I have not yet set up a web server behind a router with port forwarding myself, only one that connects directly to the internet via a static IP, so I am not entirely sure how to get it to work, not without physically having access to the router and server and doing some research beforehand anyway, which takes time.
User type: Administrator
Date: July 8, 2006, 2:22 a.m.
Mega Byte appears to be using No-IP.com because he doesn't have a static IP address with his ADSL. I am not quite sure how they work this, but I'd say that he will most likely need to install a program on his server that informs No-IP.com whenever his IP address changes. Other than that, he could check whether the router, or even the server itself, is firewalling off port 80 (HTTP is normally on port 80); there's a small possibility that this could be the problem. Then there's the Apache config file to check as well, which might need to be configured for this specific setup. In Linux this should be located in /etc/httpd/conf/httpd.conf (depending on the distro), and in Windows it's normally located in your Apache install directory. There's detailed information on setting up Apache's config file on the Apache website. I can't really help much further than this, other than to give a few tips here and there, because going through a tutorial or troubleshooting the router settings, not to mention the research involved, will take a lot of time. However, if you or anyone else wants to try and help Mega Byte through this issue on the forum, that is perfectly fine with us; we won't ban you or anything. We have no problems with members asking computer questions in the forum; that is a choice members have. We aren't obligated to answer any of those questions, though; if members would like to help each other, that's perfectly fine with us.
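For reference, the Apache side of such a setup is small. A minimal httpd.conf sketch of the relevant directives, using the era's Apache 2.0 access-control syntax; the hostname and document-root path below are made-up examples, and the No-IP hostname would be whatever Mega Byte actually registered:

```apache
# Listen for plain HTTP on port 80 - the port the router must forward.
Listen 80

# Hypothetical No-IP hostname; substitute the one actually registered.
ServerName megabyte-example.no-ip.com

# Serve files from this directory (example path).
DocumentRoot "/var/www/html"
<Directory "/var/www/html">
    Order allow,deny
    Allow from all
</Directory>
```

With this in place, the only remaining pieces are the router's port-forwarding rule (external port 80 to the server's LAN IP) and the No-IP update client keeping the hostname pointed at the changing WAN address.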
Learn here how NumeRe can help you

Integrated documentation and keyword search
The integrated documentation and the keyword search explain functions in detail and help you quickly when you need help. Examples of the use of commands offer support for implementing your tasks quickly. If you get stuck because you can't remember the name of a function, try the "Tell me, what do you want to do" search function in the toolbar.

Numerical mathematics parser
Mathematical expressions are no problem for the parser. It offers you the necessary flexibility to fulfill your tasks. The parser is inherently vectorial, so you can calculate vectorial expressions without additional code. NumeRe also offers you a large set of predefined physical and mathematical constants.

Support for a wide range of data formats
Your data is partly in text files, partly in Microsoft Excel®? No problem for NumeRe. The following data formats are supported:
NumeRe data file (*.ndat)
Text files (*.dat and *.txt)
CSV (comma and semicolon separated)
MS-Excel® (*.xls and *.xlsx)
OpenDocument spreadsheets (*.ods)
JCAMP-DX (*.jcm, *.jdx and *.dx)
IGOR Binary Waves (*.ibw)

Data analysis functions
NumeRe supports you in any kind of data analysis. The classics are statistical functions like mean and standard deviation. But much more advanced analyses like histograms, and of course any form of self-developed algorithm, are possible as well. There are no limits for you here.

Nonlinear curve fitting in 1D and 2D (nonlinear regression)
You have measurement data and want to check whether your physical model fits? The integrated Levenberg-Marquardt algorithm can fit functions in 1D and 2D to your data via parameters. As long as your model is expressible by numerical functions, you are not limited in your model. Even better, you don't even have to declare the parameters - but you can if you want.
In addition, NumeRe offers you the possibility to formulate additional conditions for the parameters to be fitted, or to examine the model for fit minima (as a so-called chi² map).

FFT and FWT algorithms
Fourier transforms have become a standard feature of any good data analysis. NumeRe offers you these functions as well, and even goes one step further by providing you with wavelet transforms. Both algorithms can of course also be applied inversely.

Graphical plotting in up to three dimensions
There are almost no limits to the graphical representation of your data and functions. NumeRe offers over 10 different basic plotting styles in one, two and three dimensions, which you can further modify. Activate a grid or a comprehensive box, switch to logarithmic scaling, use light and transparency effects. Different color scales are available for 2D and 3D plots, but you can also define a color scale yourself. If your desired plotting style is not directly available, you can combine several styles using the compose mode. And don't worry: NumeRe will take care of arranging the foreground and background in the right order.

Matrix operations
If vectorial expressions are no longer sufficient, or if you want to multiply two matrices together, you will find what you are looking for in matop mode. In this mode you have additional matrix functions like invert(), det(), eigenvects(), eigenvals() and diagonalize() at your disposal, which, besides matrix multiplication by means of the **-operator, handle linear algebra quickly and efficiently.

Automation and programming
You can of course use NumeRe interactively, but more complex problems will require many steps that you would otherwise have to enter again and again. For such problems NumeRe offers you the possibility to write scripts in which you can store the individual steps line by line. Control flow blocks like if...else...endif or for...endfor give your code additional flexibility and structure.
The integrated editor supports you when writing scripts with syntax, bracket and block highlighting to show you the structure of your code quickly and clearly. It can also automatically format your code to further increase readability. A static code analyzer can check your code for potential errors and mistakes as you type, and give you suggestions on coding style.

There will come a time when a simple script is no longer sufficient to accomplish your tasks. Then procedures offer the next level of abstraction, with local variables and recursive calls. Fixed namespaces make dependencies clear and make quick swapping of procedures or whole namespaces trivial and obvious to everyone. With the code extraction feature of the editor you can quickly and easily move whole blocks of scripts or procedures into new procedures, so that even long code sequences quickly become clear again. If you want to share your procedures with colleagues or friends, you can use the package creator, which can detect dependencies and integrate all procedures into a common install script that the recipients can quickly and easily integrate into their installation.

Integrated version control system (source control - SCM)
You know the situation: you edit some code, close the program and realize afterwards that you made a mistake. Then you want to revert the change, and you just can't remember what exactly you changed. By default, NumeRe creates a version history of all files you edit in the editor. The files must be in one of the five default paths of NumeRe. For all other files NumeRe creates a *.backup file where you can find the last version. You can find the version history of the files in the context menu. From here you can restore older versions or directly create a *.diff file to track your changes.

Event-based graphical user interfaces
The use of commands and procedures can fulfill almost all your wishes. But sometimes you want things to be more comfortable or more intuitive, don't you? The event-based graphical interfaces you can create with layout scripts (*.nlyt) in only a few lines simplify access to your solutions and make them much more accessible for people who are less familiar with programming. And best of all: you can integrate layout scripts into packages and even link them directly in the packages menu via GUI plugins, creating a look and feel as if the solution came directly from us. If you are wondering now whether you need to customize your solution for this, we can reassure you: you only need the layout script and an event handler procedure (where a single procedure can be sufficient), which then points to your solutions like a normal NumeRe procedure.
I/O, functional programming, and Java programming

Hi: We are using Java for a multi-threaded application. We found a bottleneck at Java I/O. Does functional programming, Scala for example, have better I/O throughput? We will have many-core CPUs, so the business logic could be handled very fast, but I/O would be a bottleneck. Are there any good solutions?

Have you tried NIO? Please refer to this as well: http://stackoverflow.com/q/1605332/931607

Have you found out WHY you have a bottleneck at I/O? Your hard drive is unlikely to magically become faster if you switch programming language.

@jalf good point. I assumed that the code does something inefficient with IO (e.g., blocking threads waiting for it and not handling other business logic). The OP should clarify.

Since Scala runs on the Java Virtual Machine, and (under the hood) uses the Java API for I/O, switching to Scala is unlikely to offer better performance than well-written Java code. As for solutions, your description of the problem is far too sketchy to recommend particular ones.

As has been pointed out, the OP's question is way too vague, and thus this sweeping generalisation may actually be quite wrong. A change in the programming model (as others have suggested, for instance to an async NIO model) may be a win - or it may not.

That's why I qualified with well-written Java code (which includes correctly using an API appropriate for the task), and said that Scala was unlikely to offer better performance.

Are you using, or have you tried, Java NIO (non-blocking)? Developers report up to 300% performance increases. Java NIO FileChannel versus FileOutputStream performance / usefulness (please refer to this as well).

Usually when people complain that Java IO is slow, it is what they are doing with the IO which is slow, not the IO itself. E.g. BufferedReader reading lines of text (which is relatively slow) can read 90 MB/s with a decent CPU/HDD.
You can make it much faster with memory-mapped files, but unless your disk drive can handle it, it won't make much real difference. There are things you can do to improve IO performance, but you quickly find that the way to get faster IO is to improve the hardware. If you are using a hard drive which can sustain a 100 MB/s read speed and 120 IOPS, you are going to be limited by these factors, and replacing the drive with an SSD which does 500 MB/s and 80,000 IOPS is going to be faster. Similarly, if you are using a 100 Mb/s network, you might only get 12 MB/s; on a 1 Gb/s network you might get 110 MB/s; and on a 10 Gig-E network you might be lucky to get 1 GB/s.

If you are performing many tiny I/O operations, then coalescing them into one large I/O operation could greatly speed up your code. Functional programming techniques tend to make data collection and conversion operations easier to write (e.g. you can store items for pending output in a list, and use map to apply an item-to-text or item-to-binary converter to them). Otherwise, no, functional programming techniques don't overcome inherently slow channels.

If raw I/O speed is the limit, in Java and elsewhere, and you have enough hardware threads available, you should have one top-priority thread for each independent I/O channel, and have it perform only I/O (no data conversion, nothing). That will maximize your I/O rate, and then you can use the other threads to do conversions and business logic and such.

One question is whether you have unlimited time to develop your application or not. If you have unlimited time, then the Java and Scala programs will have the same performance, since you can write Scala programs that will produce exactly the same bytecode as Java. But if you have unlimited time, why not develop in C (or assembler)? You'd get better performance. Another question is how sophisticated your IO code is.
If it is something quite trivial, then Scala will probably not provide much benefit, as there is not enough "meat" to utilize its features. I think if you have limited time and a complex IO codebase, then a Scala-based solution may be faster. The reason is that Scala opens the door to many idioms that in Java are just too laborious to write, so people avoid them and pay the price later. For example, executing a calculation over a collection of data in parallel is done in Java with ForkJoinPool, which you have to create, then create a class wrapping the calculation, break it up for each item and submit it to the pool. In Scala: collection.par.map(calculation). Writing this is much faster than in Java, so you just do it and have spare time to tackle other issues.

From personal experience, I have a related story. I read in a blog article that BuildR, a Ruby-based build tool, was two times faster than Maven for a simple build. Considering that Ruby is about 20 times slower than Java, I was surprised. So I profiled Maven. It turned out it parsed the same XML file approximately 1000 times. Now of course, with careful design, they could have reduced that to just once. But I guess the reason they did not is that the straightforward approach in Java led to a design too complex to change afterwards. With BuildR, the design was simpler and the performance better. In Scala, you get the feeling of programming in a dynamic language while still being on par with Java in terms of performance.

UPDATE: Thinking about it more, there are some areas in Scala which will give greater performance than Java (again, assuming the IO bottleneck is because of the code that wraps the IO operations, not the reading/writing of bytes):
* Lazy arguments and values - can defer spending CPU cycles until they are actually required
* Specialization - allows you to tell the compiler to create copies of generic data structures for the native types, thus avoiding boxing, unboxing and casting.
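To illustrate the coalescing advice above, here is a minimal sketch (not from the thread; file names are arbitrary) that writes the same 1024 bytes two ways with java.nio: once as 1024 one-byte channel writes, and once as a single batched write. The contents are identical, but the batched version issues one write call instead of a thousand.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class CoalescedWrites {
    // Write n small records one channel write at a time: simple, but each
    // write() call pays the full channel/OS overhead.
    static void writeTiny(Path p, int n) throws IOException {
        try (FileChannel ch = FileChannel.open(p,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.TRUNCATE_EXISTING)) {
            for (int i = 0; i < n; i++) {
                ch.write(ByteBuffer.wrap(new byte[] {(byte) 'x'}));
            }
        }
    }

    // Coalesce the same records into one buffer and issue a single write.
    static void writeBatched(Path p, int n) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(n);
        for (int i = 0; i < n; i++) {
            buf.put((byte) 'x');
        }
        buf.flip(); // switch the buffer from filling to draining
        try (FileChannel ch = FileChannel.open(p,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.TRUNCATE_EXISTING)) {
            ch.write(buf);
        }
    }

    public static void main(String[] args) throws IOException {
        Path a = Path.of("tiny.bin");
        Path b = Path.of("batched.bin");
        writeTiny(a, 1024);
        writeBatched(b, 1024);
        // Identical contents, very different numbers of write calls.
        System.out.println(Files.size(a) == Files.size(b)); // prints "true"
    }
}
```

The same idea applies to BufferedOutputStream over FileOutputStream: the buffer exists precisely to turn many tiny writes into a few large ones before they hit the OS.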
Does an HTTP 200 response imply that there must be a response body?

I'm attempting to decipher a Swagger specification document and use it to generate code. It contains the following endpoint definition:

/feature:
  get:
    summary: Returns all features
    operationId: getAllFeatures
    tags:
      - feature
    responses:
      '200':
        description: 'Features retrieved successfully'
      '400':
        $ref: '#/responses/BadRequest'

Based on the endpoint summary and the 200 response description, it's pretty clear to me that this endpoint was intended to return a response body that contains an array or collection of "feature", even though the response is not defined in the spec. Let's suppose that I'm right, and the spec author just forgot to add it. What then should I make of this endpoint:

/features:
  put:
    summary: Updates an existing feature
    operationId: updateFeature
    parameters:
      - name: body
        in: body
        description: 'Feature to be updated'
        required: true
        schema:
          $ref: '#/definitions/Feature'
    tags:
      - feature
    responses:
      '200':
        description: 'Feature updated'

This one is ambiguous to me. I've seen some implementations of update endpoints that return the updated object. Others I've seen return nothing in the body. My questions are these:

1. Does an HTTP 200 response imply there must be a response body? I can't tell whether the HTTP specification (https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html) requires this or just states that it could be there.
2. Would it be less confusing in this scenario for the spec author to have used HTTP 204 to expressly indicate that there is no response body? (I can answer this - yes - but are there reasons to use HTTP 200 instead of 204 in this case? Or was it just that the author didn't know about success response codes other than HTTP 200?)
3. If the answer to #1 is no, and the answer to #2 is yes: why was HTTP defined in this manner?

200 OK can return an empty body with Content-Length: 0. 204 No Content has a more specific purpose than many people realize.
Quote from the current spec:

The 204 response allows a server to indicate that the action has been successfully applied to the target resource, while implying that the user agent does not need to traverse away from its current "document view" (if any). The server assumes that the user agent will provide some indication of the success to its user, in accord with its own interface, and apply any new or updated metadata in the response to its active representation.

Basically this is saying: if, for example, an HTML form is submitted and the server responds with 204, it can signal a browser not to refresh the current page to a new location or redirect anywhere else. It can, for example, facilitate a 'save' action without forcing the browser to redirect/switch to a new URL. Also see 205 for a similar action, but with different behavior. Browsers (as far as I know) don't actually implement this behavior, but a REST/Hypermedia/HATEOAS client could.

The current spec also states the more common use, which is 200 without a response body, but if you go all the way back to the HTTP/1.0 spec, this is the entire section. Notice that it only mentions this behavior, and says nothing about 204 being just a substitute for 200 minus a body:

The server has fulfilled the request but there is no new information to send back. If the client is a user agent, it should not change its document view from that which caused the request to be generated. This response is primarily intended to allow input for scripts or other actions to take place without causing a change to the user agent's active document view. The response may include new metainformation in the form of entity headers, which should apply to the document currently in the user agent's active view.

So the key here is that this signals how a hypermedia client should behave. Removing that, I would agree there's not a lot of reason to use 204. It's become a convention that I don't think has a strong purpose.
Sidenote: don't refer to RFC 2616 unless you're into internet archeology. See #2.

I believe browsers do implement the no-page-refresh behaviour of 204. It was made use of in the "frame buster buster".

@Alohci that is very interesting. If true, I'm also curious whether 205 works.
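The "200 with an empty body is legal" point is easy to demonstrate end to end. A minimal sketch using the JDK's built-in HttpServer and HttpClient (Java 11+); the `/ok` and `/nocontent` paths are made-up examples, and `sendResponseHeaders(code, -1)` is the JDK's way of saying "no response body":

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EmptyBodyDemo {
    // Start a throwaway server, hit both endpoints, return the status codes.
    static int[] probe() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        // 200 with an explicitly empty body: perfectly legal HTTP.
        server.createContext("/ok", ex -> {
            ex.sendResponseHeaders(200, -1); // -1 = no response body
            ex.close();
        });
        // 204: success, and a hypermedia client should keep its current "view".
        server.createContext("/nocontent", ex -> {
            ex.sendResponseHeaders(204, -1);
            ex.close();
        });
        server.start();
        int port = server.getAddress().getPort();

        HttpClient client = HttpClient.newHttpClient();
        String[] paths = {"/ok", "/nocontent"};
        int[] codes = new int[paths.length];
        for (int i = 0; i < paths.length; i++) {
            HttpResponse<String> r = client.send(
                    HttpRequest.newBuilder(
                            URI.create("http://localhost:" + port + paths[i])).build(),
                    HttpResponse.BodyHandlers.ofString());
            codes[i] = r.statusCode(); // body is empty in both cases
        }
        server.stop(0);
        return codes;
    }

    public static void main(String[] args) throws Exception {
        int[] codes = probe();
        System.out.println(codes[0] + " and " + codes[1] + ", both with empty bodies");
    }
}
```

Both requests succeed with zero-length bodies; the only difference the wire carries is the status code itself, which is exactly the hypermedia signal the answer above describes.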
SQL Server Master Data Services Performance Tuning Guidelines

Master Data Services (MDS) is an MDM (Master Data Management) system from Microsoft and ships as part of the SQL Server distribution. Most of the computational tasks are done on the SQL Server side, and hence SQL Server performance tuning is important for an optimal MDS deployment. In this post I am going to talk about SQL Server Master Data Services performance tuning guidelines.

You have two deployment options for Master Data Services:
- Single-server deployment, with IIS and SQL Server on the same box.
- Two-tier architecture, with IIS and SQL Server on different servers.

Any kind of production deployment should consider the two-tier approach. The IIS tier is fairly lightweight and can easily service hundreds of users with 4 GB of RAM and a 2-core processor running on a virtual machine (VM). The SQL Server, on the other hand, needs to be capable enough to handle most of the transactions and processing for MDS.

Before even considering MDS, you need to keep in mind the following points:
- MDS is designed for slowly changing, fairly static data. For highly transactional data, MDS should not be considered.
- Anything more than 50k distinct changes will cause performance degradation.
- Large changes should be performed in batches rather than individually.
- A proof of concept is important for large deployments.

Microsoft has different capacity models based on the number of members in an entity and the number of attributes in that entity. Let's summarize them below:

Medium Capacity Models:
- Entities with >500k members
- No. of attributes in entities >100
- Domain-based attributes >20
- Up to 15 business rules
- >=5 user concurrency
- Hardware with 12-16 GB memory, at least a dual CPU, and high-performance storage is recommended.
Large Capacity Models:
- Entities with >10,000,000 (10 million) members
- Attribute members >1,000,000 (1 million), with 100 attributes each
- Domain-based attributes >30
- Up to, or more than, 15 business rules
- >=5 user concurrency
- Hardware with at least 24 GB memory, dual-socket quad-core CPUs, and high-performance storage is recommended.

For capacities beyond what is mentioned above, complete performance testing and a proof of concept are highly recommended.

Microsoft also recommends creating indexes on the member tables for non-domain attributes, especially those with a large number of members:

USE [MDS_MA]
GO
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [mdm].[tbl_2_6_EN] ([Version_ID],[Status_ID],[uda_6_115])
GO

The above step needs to be performed manually. A good number of performance improvements have been implemented in MDS for SQL Server 2016, which I will talk about in a different post.
M: What startups are solving for anxiety/depression on a biochemical level? - newtothebiz
I'm looking for companies that are trying innovative ways to deal with human depression and/or anxiety using more scientific/biochemical methods (rather than meditation / happy thoughts etc.)

R: vskarine
You can take a look at some startups that sell nootropics. But after going through many natural supplements and prescribed medications, I realized that everyone is different and responds differently. I personally get high blood pressure within days of taking certain nootropics. Certain other ones caused me to be more explosive and agitated. Some natural herbs caused shortness of breath within an hour, etc. All these things are very individual, so I would not bet on any startup with a magic pill to cure everyone. It took me a while to find something that works for me. Maybe there should be a startup that guides you through these experiments to find what works for you individually.

R: newtothebiz
I'm looking for something more directed, less trial and error: some form of sensing and cohesive testable theories (less statistical).

R: vskarine
But that sort of is my point: it's all statistical. That's why they have medical trials for years before the FDA approves new drugs. We sort of know what is happening in the brain - it's lacking one neurotransmitter or the other - and all the drugs/supplements basically either provide it or pretend to be one. Another approach I've seen is to stimulate certain hormones, but again it's individual, because some people might have an excess of one hormone and a lack of another. Relaxation and meditation are basically natural ways to balance certain hormones (reduce cortisol in particular). Or sun exposure gives you vitamin D (which is technically a hormone), and it has been studied to help with depression, but some people need more of it than others, etc. I recommend picking up a book on nutrition before jumping into this space; one of the easier reads is the Ultramind Solution by Dr. Hyman. Hope this helps.
Stamina damage is still absurd. There's realistically no counter to this... we can sit here nerfing the stun baton, stamina numbers... whatever, but this isn't engaging in the slightest for the receiving player. Either stun functionality needs to be overhauled, or armor needs to have high resistance to stamina-based damage. There's absolutely no reason why non-lethals should be the preferred takedown method against heavily armed terrorists. Let non-lethals be used for civilians and use lethals on actual killers, to give both sides engaging gameplay that lasts longer than a few seconds. Why should non-lethals simply bypass all armor and plans entirely? It's no wonder antagonists resort to the easiest, meta-gamey methods of crippling the station when anyone on the crew already does the same to them!

https://cdn.discordapp.com/attachments/557322456013209647/1172267006729400380/godsbestnukie.mp4?ex=655fb1ba&is=654d3cba&hm=3ef0a578197588669595cde6236cad27fae8803d126ec148510aa91d7a7a5de3&

mfw the CDN downloads the video instead of playing it; here's a link to general on Discord where it embeds: https://discord.com/channels/310555209753690112/675078881425752124/1172405307817861211

I'm not really in the mood to have this discussion for the billionth time, but I do want to add that I think rubbers specifically are super overtuned (in reference to the clip above). MFW a billion-dollar soldier is defeated by rubber balls.

This one is gonna be long, so if you want to avoid tactics and strategy and gamesense, skip to the place I highlighted. Also, holy shit, an actual real scenario as opposed to theoreticals on theoreticals. Looking at the video, they got shot like 9 times during the first "cycle". Looking at the .yml for baseBulletRubber, it deals 22 stamina damage, so 5 connected hits is enough to stamcrit anyone. The detective after that kept shooting them, so I count 13 shots total from what I assume is an MK58.
The nukie was also most likely injured before or during the firefight, as shown by the moment where the nukie uses both a freedom and an EMP implant despite being on low HP and surrounded by 4 players, essentially being marked for death. From a tactical and strategic perspective this fight is even more of a mess. The nukie in question went into 3 armed secoffs without being on stims or meth, while carrying the duffel bag reducing his speed; speed is key to fighting secoffs or armed crew of any kind. The nukie then went through a 1x1 hallway that expands into a space that could hold at least 3 people shooting at the same time with little to no crossfire. All of this while going in with an e-sword and e-shield, both of which aren't exactly ideal for fighting guns when you're not on your own terms. The nukie could be described as the "attacker" side here, as they were going into sec as opposed to having sec come to them. This means they can't effectively use corners or relocate, because they also use a melee build requiring them to be in melee range to trade damage; neither can they cause crossfire or bait shots. The nukie also avoided fleeing from a 4:1 fight despite not being prepared for one. I'm assuming they had atmos access, but they could also have gone through the disposals in atmos, or emagged into space. This isn't everything, but I don't want to go too much into theoreticals or assumptions for this clip: the nukie did not play well, they picked a terrible fight on terrible conditions and got stomped very fast on the way by rubbers, which I'll get to later in more detail and with more relevance to the discussion. Sec played this one well enough: they hit clean shots, avoided crossfire, kept together as a group, and had possible backup options and plans. Did they play well enough to be rewarded with a nukie kill?
Perhaps. Assuming the first 5 shots are all that matters, because after that the nukie goes into stamcrit, they might've been rewarded too greatly on a gun-balance level, because guns that can "kill" a nukie with an e-shield and e-sword in 5 or fewer shots are very few and should stay few.

VVV part to skip to VVV

Sec was rewarded for their good gameplay by getting a nukie kill, which is fine, but what isn't fine is the sample window for good gameplay: five shots in the span of two seconds to decide the outcome of a fight with what is commonly described as a dangerous and prepared foe is an insanely fast TTK, which also applies to only one party, as sec doesn't get punished so harshly for missing a clip, especially in a group. I personally think that such a fast TTK sucks; it's not very exciting to play with or engaging to play against. Getting killed by 5 rubbers is a very fast TTK considering the fire rate of a pistol. Even though the nukie played poorly in the above scenario, they got punished too harshly, and rubbers can of course still do the same provided both parties play well and one lands 5 bullets. What do I propose? Make armor resistant to rubbers. It makes logical sense, it makes gameplay sense. You want to stun someone in military-grade armor? Shoot them with a disabler, which is worse than rubbers, or go into close quarters and trade hits (high risk, high reward), or opt for more consistent damage with lethals.

stun damage discussion number 848283334

stun damage discussion number 848283334
rain you don't get it, we have an actual real practical scenario to go off, this shit is a gold mine

stun damage discussion number 848283334
rain you don't get it, we have an actual real practical scenario to go off, this shit is a gold mine
someone said this on the last stun damage discussion

top secret military-grade hardsuit vs 5 high-speed rubber balls, who would win

Yeah, as it stands right now.
If you want to go loud as a syndi, you need to preemptively make or steal stun recovery drugs. Or potentially sabotage gravity, since it's rather hard to cuff someone who is sliding around. Or, the most popular option, sit in space with a jetpack and slowly space everything. There's more to this, but it really does force players into very specific, often cheesy strats like grille camping and such. Having armor isn't very meaningful since most guns have a one-second time-to-stun, or even faster than that.

A common tactic is to top-load rubber bullets: if you lose half your stamina you are slowed massively, making it much easier to land lethal shots, or to just go for the full stun and cuff. And even if you don't get cuffed, you'll still drop whatever is in your hand. Getting hit by one beanbag slows your movement speed by more than half for 3 seconds, and landing any 3 rubber bullets will do the same. And since the only way to restore stamina is a very short-duration stim, there's no real effective way to deal with rubber rounds. Funnily enough, the two dedicated stun items (the baton and disabler) have the slowest stun speeds in the game.

gets shot with 5 metal bullets
owie, good thing i have armor
gets shot with 5 rubber bullets
keels over in pain, physically unable to stand up, has a heart attack, eyes pop out of head, entire family tree feels the pain of a thousand suns

Just based off of the gameplay in the vid, asperger-sind had a pretty appropriate overview of it. I do agree that, based on the numbers, the stamina damage per bullet is quite high, and I'm not opposed to lowering it a bit. Of course, it should be easier to stamcrit than to kill, or else shooting an armored target with rubber bullets is going to kill them before their stamina is depleted. But as a condemnation of the stamina combat balance as a whole? This feels kinda weak.
The clear solution is to just give regular bullets the same stam damage as rubber bullets (joke).

I don't see why the time to stun and the time to kill with bullets can't be the same: make the stam damage equal to the pierce damage, then let armor block both. Rubber bullets would still be good against unarmored targets and identical to regular bullets for time to stun or kill, only you'd use them when you'd rather stun than kill. I suppose the issue with that is the slowdown at half stamina, but I'm no expert on this. It still seems silly that the response to heavily armored targets is rubber over lead.

So I'm gonna just go on a tangent here, but... we could have material "hardness" or "conductiveness" that determines what damage set a weapon deals. Armor is harder, and stops all but the most penetrative weapons (think bullets, fire axes, etc). Things that don't penetrate could then do lesser damage, perhaps of a different type (shot with a bullet = piercing, shot with rubber = blunt). A hardsuit is both hard and insulating, so it would not conduct an electric disabler round, for instance.

the best anti-nukie gun is mapped into maints, been an issue since beanbags existed.
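The hardness/conductiveness tangent above can be sketched as a simple lookup. Everything here (property names, thresholds, numbers) is made up to illustrate the idea, not taken from any actual game code.

```python
# Toy model: armor material properties decide which damage set a hit resolves to.
ARMOR = {"hardsuit": {"hardness": 0.8, "insulated": True}}

def resolve_hit(weapon, armor):
    """Pick a damage type and multiplier from armor material properties."""
    if weapon["kind"] == "disabler" and armor["insulated"]:
        return ("stamina", 0.0)               # insulation blocks electric stuns
    if weapon["penetration"] >= armor["hardness"]:
        return ("piercing", 1.0)              # hard rounds punch through
    # soft rounds bounce: reduced damage of a different type
    return ("blunt", round(1.0 - armor["hardness"], 2))

bullet = {"kind": "ballistic", "penetration": 0.9}
rubber = {"kind": "ballistic", "penetration": 0.3}
disabler = {"kind": "disabler", "penetration": 0.0}

suit = ARMOR["hardsuit"]
print(resolve_hit(bullet, suit))    # ('piercing', 1.0)
print(resolve_hit(rubber, suit))    # ('blunt', 0.2)
print(resolve_hit(disabler, suit))  # ('stamina', 0.0)
```

The point of the sketch: with hardness in the mix, rubber against a hardsuit stops being the best answer, exactly as the tangent suggests.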
Let's try to fix that! In Internet Services Manager, under the 'Home Directory' tab, I have checked that the .aspx extension is correctly mapped to C:\WINNT\Microsoft.NET\Framework\v1.1.4322\aspnet_isapi.dll, and I note that the verbs GET, HEAD, POST and DEBUG are permitted. I have restarted IIS by running iisreset.exe. I have also checked the project's web.config file, and the debug attribute is set to true. What on earth could have happened to my machine or environment to cause this issue?

The error I was getting was: "Error while trying to run project: Unable to start program 'path-to-solution-exe-file'." But it was installed when the problem started. If enough solutions get posted to the comments of this entry, then I'll move it all over to the Wiki.

Also, under 'Configuration Properties -> Debugging', the Active Solution Configuration for all assemblies in my project is set to Debug. Now I'm getting the message 'Error while trying to run project: Not Implemented' for all of them. The platform should be "Any CPU" by default. When I reload the InstallShield project, I get the error again.
There was no advice on how to go about attaching an appropriate IIS process or worker process, so I have been unable to check this. Another helpful article: http://stackoverflow.com/questions/7621258/solution-for-visual-studio-shows-message-error-while-trying-tu-run-project-af

One suggestion is to repair Visual Studio: right-click Visual Studio 2010 and select Change/Remove from the Uninstall Programs tool in the Windows Control Panel, then click Repair in the Visual Studio maintenance window when it loads.

A few things I haven't implemented (because the instructions were vague) include an MSDN article's helpful suggestion to "Start the application without debugging. (From the Debug menu, choose Start Without Debugging.)"

This was really painful to track down; I finally noticed that my solution was at times changing the startup project to my InstallShield LE project. I can build my code and run without debugging fine, but when I try to debug, I get that idiotic error message.

Another possibility: your compilation target is x64. 0xc000007b generally indicates a 32-bit/64-bit mismatch; see http://stackoverflow.com/questions/5864520/error-while-trying-to-run-project-unable-to-start-program-cannot-find-the-file

It just doesn't work by pressing the "Run" button in VS2010.
"Run setup to install or repair the debugger." I've tried everything I could find on the web, and nothing worked. I previously had Vista Ultimate 32-bit, and after a few tweaks everything worked; I was able to debug old projects before I did a complete rebuild.

I have URLScan installed, so I have modified urlscan.ini to include:

[AllowVerbs]
DEBUG

However, I also found out that I must restart VS2010 before reloading the project.

If you don't debug under a domain account, you may get "you do not have permissions to debug the server"; when I use a domain account, there is no problem. I have seen this behaviour on multiple computers, so I am pretty sure it is not just a one-off installation problem. Troubleshooting the Visual Studio "Error While Trying To Run Project" message needs the same method.

Another suggestion was to "wait for a request from an external application." BTW, I think that at some point this started because of a mod that I made to machine.config. :)

The account used for debugging will be either the IWAM or IUSR account.
If the above doesn't work, make sure anonymous access is turned on, but set the user and password to a user that exists in the Debugger Users group. After doing this you're essentially using the big hammer of "give IIS rights to EVERYTHING", and the debugger should be happy.

Also under the 'Home Directory' tab, via the 'Configuration' button and the 'App Debugging' tab, I have checked 'Enable ASP Server Side Debugging'. To enable debugging you also need to set 'debug = true' in the web.config file.

Another fix that worked for someone: go to Project -> Properties -> Configuration Properties and, in the right pane, edit 'Output Directory' from '.\Debug\' to '.\Debug\crv'.

A couple of days ago I signed my assemblies to test ClickOnce deployment. The message was: Unable to start program 'C:\Users\some user\Downloads\project name\.\Debug\fil_name.exe'. The system cannot find the file specified.

Also, in Internet Services Manager under the Directory Security tab, I have checked that the Authentication Methods Anonymous Access and Integrated Windows Authentication are selected. However, since the error message has zero detail in it, I have no clue as to what is actually wrong.
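For reference, the debug attribute that several of the replies mention lives in the compilation element of web.config; the standard minimal shape of that setting is:

```xml
<configuration>
  <system.web>
    <!-- debug="true" compiles pages with symbols so the debugger can attach;
         turn it off in production, since debug builds are slower -->
    <compilation debug="true" />
  </system.web>
</configuration>
```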
Upcoming Network update (1st of August)

UPDATE HAS BEEN COMPLETED!

Hello again! As you might have noticed from the huge banner above this line, there will be a network update soon. This update will include new plugins, new worlds and new minigames. In the text below you can read all about the next update, as I'll explain nearly everything.

Both the Factions and Survival world will get a reset. This means that everything you've built, mined, raided, stolen, etc. will be gone. Why, you ask? Well, there are a few reasons:
- The worlds are getting too big in file size. Some maps are up to 10GB of storage, which is a lot.
- There are builds almost everywhere. There is nearly no free space left to build for new players.
- Every player has done every possible thing there is to do in the world. The fun in playing is gone.
- The worlds are ugly. Everywhere you look there are broken buildings, holes in the floor, floating trees etc. This just gives the world a bad look.

So, you kind of understand us now, right? Okay, good. Both worlds will get a new world (new seed, both starting in plains), but only Survival gets a new spawn.

All the mcMMO skills you've earned over the past months will be deleted. This also has a few reasons, but the main one is that there will be a new updated version of mcMMO which doesn't support the old file type. Also, playing in new worlds with maxed-out mcMMO stats is just not fun for players. The new plugin will have a lot of bugfixes, so some old problems will be gone.

Per-world money: A new feature will be a per-world money balance. Currently the same balance is shared across all worlds, but this will change with the update. Every player will start with $400 in each world, and this balance cannot be shared across different worlds. No need to look for a bypass, as it is really impossible. Money you earn by voting will also get added to the balance of the world you are in, so teleport to the world of your choice first before voting!
VIPs, Donators and people who bought money addons will still get their money, just divided across all worlds.

KitPVP will get 4 new kits (don't remember the names :p). Not much to say about them; they are just awesome, but won't be better than the already existing kits.

As seen on all big servers: crate keys. With special keys, you can open crates (chests) which will give you a random reward. There will be 4 keys: the Survival crate key, Factions crate key, SkyBlock crate key and the Lucky crate key. The Survival, Factions and SkyBlock keys will give you items that are useful in that world, and Lucky keys will give you big rewards, like diamond blocks, a lot of XP, lots of $$$, etc. The chests will be at the spawns of each world. A holographic display will be hovering above them, saying something like 'Crate key chests'. Please note that we are not familiar with this plugin, so if the rewards are kind of lame, please forgive us!

The many, many times requested 'minigame' will finally be added to the Minigames server: Parkour. There will be 3 stages: easy, medium and hard, each with different levels. Completing them will give you gametokens*.

Gametokens and Minigame ranks: After the update, you can earn gametokens and ranks on the minigames server. Each rank requires a specific amount of gametokens before you get that rank. Gametokens can be earned by killing people in KitPVP and SkyWars, getting a high killstreak in KitPVP and completing parkour maps. More ways will be added soon, as the plugin is still in Beta mode. (custom plugin)

New items will be added to the donation store, including but not limited to:
- SkyBlock biomes
- Minigame tokens
- Crate keys
- Minigame kits

There will also be a 20% discount* on the 1st of August. From the 2nd till the 7th the discount will be lowered to 15%.
* total price must be at least €10,-

More RAM, permissions improvements and other small things: During the update we'll add 2GB more RAM to our build server to handle more plugins. The permission plugin has been improved and will now use less memory. Soon new minigames will be added, including but not limited to:
- OITQ (One In The Quiver, custom made)
- Splegg or Spleef (custom made)

Now, when is the update?!: The update will be on the 1st of August (this Saturday) from ~10 AM CEST till about ~12 PM / 1 PM CEST. During that time you can still join the lobby, just not the minigames and build servers.

If you have any questions regarding the update, just post them below. I will try to reply to all of them. Thanks for reading, now have a fun time on the server!
- Fabian, NLGameVideosNL, Network Owner

"* total price must be at least €10,-" The number is bugged :P

Have you done any tests with the keys plugin? Can you explain it a little more? I mean the "chests": do they spawn randomly in worlds, do they have a dedicated space (like the old casino), or are the crates in chat so you open them with commands?

Thanks for reporting the small bug, that's what copy and paste does to HTML characters :p I've also added some more details about the chest locations in the post.

Example: I like desert, so I would like to build in a desert, so there should be a teleporter to the nearest desert biome!! This would be great to scatter players all around the building server; the building server is always so cramped near spawn just because of that! It's hard to start the game and go build on really far away land... Anyway, it's an idea I had, just playing the game, actually... I want to congratulate all the staff, because this server is the best I've played so far! Awesome job, people XD

On another note, I think we should just pull the trigger and do the reset and update as soon as possible.
For people like me who had small amounts of rage over hearing of this update/reset, as well as newcomers, we (speaking for myself and like-minded individuals) just want to get going. Survival is in a 4-day limbo just waiting to DO something. I know there are reasons for waiting and logistical issues, but an early update would not go unwelcomed. On yet another sidenote: sorry if I flipped out a little. Let's all go into this new page with a stone pick on our shoulders and a 2-litre of Mountain Dew by our desk.

Teleporting people to a specific biome nearby would have the same effect as people building all around the spawn. Within days the biomes would be filled with buildings.

You will get keys by voting, but it's a random chance whether you'll actually get one, just like with all the other items. Lucky keys will be very rare (<10%). You can also buy them ingame and, soon, in the server store. The server store item will contain mostly Lucky keys, but also keys for every world.

But... I don't understand "total price must be at least €10". I am VipDiamond; if I want to upgrade to VipEmerald, won't I be able to?

I totally and 100% realize this, I only speak of wishful thinking. Also, as hivemind says, that would make exploration useless, and that's one of the big points of Minecraft. Just play the game like we all did before, exploring it ourselves and claiming parts when you find a nice biome / area you want to build on.

Come to think of it, you guys are right! lol. I just wanted to add something interesting, but the way it all is, is fine also. Really fun server btw XD But... I have a suggestion for hive actually... @hivemind, why don't you build your new huge house in the ocean? Not a lot of people build there and you can easily expand to the sides; I don't want anyone trolling your house as I see happening... You're awesome XD
Welcome to Digital Raconteurs! There is a huge potential for video games as a storytelling art that we're only beginning to fully explore. Gaming allows for a non-linear and interactive method of storytelling that is impossible in any other medium. Developers who leverage this potential effectively can create emotional connections and elicit visceral responses from the player on par with, and sometimes beyond, anything film, literature, or graphic novels can offer. This aspect of video games is often overlooked by the casual or non-gamer, and there is a real lack of intelligent discussion on how video games tell stories. In a high school English class, you might discuss the symbolism of the fish in Hemingway's The Old Man and the Sea, or the conch in Golding's Lord of the Flies. A college film class may discuss how the color palette in Star Wars contrasts the cold, unfeeling Empire with the earthy, ragtag Rebellion, or how the physical camerawork in Hitchcock's Vertigo conveys the character's disorientation far more effectively than any acting could. There aren't many places, however, where discussion turns to how Half-Life's use of the first-person perspective involves the player on a more personal level than cutscenes do, or how Eternal Darkness's fourth-wall-breaking sanity glitches were jarring and added to the experience. Here at Digital Raconteurs, we want to talk about how storytelling in games differs from other media, and why we like it. I want to clarify what I mean by "storytelling within the medium of gaming". Just because a game tells a great story doesn't mean it uses the medium to do it. Video games have been described as interactive movies, and for some games this is an apt description. Games like the Final Fantasy series use cutscenes to tell the story with gameplay in between, often with greatly differing visual styles, and they convey a story very effectively.
However, they’re not using gaming as a storytelling medium, they’re using animation, and having gameplay when they’re not telling the story. This breaks immersion, and feels like we’re being told the story, rather than experiencing it. Games like the Call of Duty series, on the other hand, have elaborate set pieces and interactive environments, but they’re only there to be visually stunning and provide challenges to the player. These types of games are using the medium to make gameplay fun, but not to tell a story. Most of the story is still played out in detached dialogue or cutscenes, and could have just as easily been a big screen blockbuster rather than an interactive experience. Games like Bioshock, however, use the video game medium very effectively. How the player interacts with the game determines how much or how little of the story is told, the environments change and react to the player, the world presented feels large and believable, and the player feels personally connected. All of these games are great, and all have things that make their storytelling work well, and things that break the immersion. Digital Raconteurs hopes to capture what makes a video game stand out from other media, and how it can better tell a story. I’ll try to highlight at least one game a month, and discuss why it works and why it doesn’t. I’ll also try to post theory articles that discuss different aspects of game design, and how they pertain to storytelling in particular. I look forward to hearing from you, the community, and building a place for great discussion.
Data annotation is sometimes described as a form of feature engineering: labeling provides the signal for each class in the dataset, and good labels reduce what the learning algorithm has to figure out on its own before model-side techniques like bagging, boosting and regularization can do their work. Training data is labeled data used to teach AI models, while test data is labeled data held back to evaluate the model. You should use human-powered data annotation services to get high-quality training data for AI-oriented projects. Here we will discuss the main types of data annotation.

1. Image Annotation
Image annotation is the task of annotating an image with label information, so that a machine learning algorithm can learn the labels from the image. It is also known as image labeling or image tagging. Image annotation is a time-consuming task that requires a lot of manual work: it involves creating bounding boxes and segmentation masks for each class in the image. You can also annotate images using tools and libraries that work with machine learning frameworks. Image annotation is often used to create training datasets, which are then used to build AI-enabled systems like self-driving cars, skin cancer detection tools, etc. Common annotation methods include bounding boxes and segmentation masks; classical feature descriptors such as co-occurrence matrices and local binary patterns (LBP) are then sometimes computed on the annotated regions.

2. Video Annotation
Video annotation is the task of labeling sections or clips in a video to classify, detect or identify desired objects frame by frame. Video annotation can be done online using video annotation tools or offline using video editing software. It is possible to annotate the entire video or only certain sections of it, depending upon the task at hand.
Video annotation also uses techniques like bounding boxes or semantic segmentation. Some computer vision tasks, such as localization and object tracking, require the entire video to be labeled. Video labeling can also be done offline by first creating a list of objects of interest and then marking the video segments containing these objects. You can teach your model to understand video inputs, detect objects, and decide what objects are present in the video.

3. Audio Annotation
Audio annotation labels sections or clips in an audio recording to classify, detect or identify desired objects. This requires an algorithm to detect and then extract features from the audio. Some of the most popular techniques used for audio annotation include voice activity detection (VAD), acoustic modeling, etc. The process of detecting features in an audio file is known as feature extraction, and it uses techniques like the FFT. After feature extraction, there are various ways to label audio depending upon the task at hand; it is possible to label the entire recording or only certain parts of it.

Speech-to-text engines are interfaces that process audio collected as utterances, time-stamped and categorized across more than 180 languages and dialects, and generate human-readable text from speech. When a speech-to-text engine is used, the audio file containing the utterances is first analyzed, and then the clips containing the desired objects are labeled. Speech recognition must be done before the text can be generated.

4. Sensor Annotation
A sensor is a device that measures data: thermometers, pressure sensors, etc. Annotating data coming directly from sensors involves some additional work.
For example, if you have a thermometer and want to know the temperature of a room over time, you might record a reading every two hours; this captured stream then becomes the data to annotate. Various data sources, including LiDAR and point cloud annotation (PCA), are used for sensor annotation.

5. Text Annotation
Text-based natural language processing builds on labeled text for each class or category, which model-side techniques like bagging, boosting and regularization can then exploit. It is possible to automate parts of text annotation using machine learning. Common text annotation tasks include text classification, sentiment analysis, topic modeling, etc., and a wide array of languages can be supported.

6. Automated data annotation vs. human annotations
Human annotators often get tired and lose focus on the annotation task, so it is essential to have a reliable and easy-to-use annotation tool. Manual annotation makes the process both time-consuming and expensive, whereas automated annotation is easy to use and requires little human intervention, saving time and money.

Data labeling is an essential step in a supervised machine learning pipeline. Machine learning algorithms learn from data: the training data they're given teaches them what objects are present in the dataset, where they are located, and so on. The model performs better on new data when the training data is labeled accurately by class.
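To make the bounding-box idea from the image annotation section concrete, here is a minimal sketch of what one annotated image record might look like. The layout loosely follows the common COCO convention of [x, y, width, height] boxes; the file name, labels and coordinates are invented for illustration.

```python
# One annotated image: a label plus an [x, y, width, height] box per object.
annotation = {
    "image": "street_0001.jpg",
    "objects": [
        {"label": "car",        "bbox": [34, 120, 200, 90]},
        {"label": "pedestrian", "bbox": [310, 95, 40, 110]},
    ],
}

def bbox_area(bbox):
    """Area of an [x, y, width, height] bounding box."""
    _, _, w, h = bbox
    return w * h

# A quick sanity check annotators often run: box sizes per label.
areas = {obj["label"]: bbox_area(obj["bbox"]) for obj in annotation["objects"]}
print(areas)  # {'car': 18000, 'pedestrian': 4400}
```

Records like this, one per image, are exactly what "creating training datasets" means in practice: the model never sees the drawing tool, only these label-and-coordinates structures.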
"""
Created: 2019-04-01
@author: Christopher Albert <albert@alumni.tugraz.at>

Supposes that test_arrays_compile has already been run to generate
the CFFI API interface as a Python extension module.
"""
import gc
import os
import subprocess
import tracemalloc
from shutil import copy

import pytest
import numpy as np

from fffi import FortranModule

m = 3
n = 2


@pytest.fixture(scope='module')
def tmp(tmp_path_factory):
    return tmp_path_factory.mktemp('arrays')


@pytest.fixture(scope='module')
def refout(tmp):
    os.chdir(tmp)
    out = subprocess.check_output('./test_arrays.x')
    return out.replace(b' ', b'').split(b'\n\n\n')


@pytest.fixture(scope='module')
def refvec(refout):
    # Reference output for 1D vector
    return np.fromstring(refout[0], sep='\n')


@pytest.fixture(scope='module')
def refarr(refout):
    # Reference output for 2D array
    refarrspl = refout[1].split(b'\n\n')  # Fortran column outputs
    ret = np.empty((m, n))
    for kcol in range(n):
        ret[:, kcol] = np.fromstring(refarrspl[kcol], sep='\n')
    return ret


@pytest.fixture(scope='module')
def mod_arrays(tmp):
    # Copy sources from the working directory and build
    cwd = os.path.dirname(__file__)
    copy(os.path.join(cwd, 'Makefile'), tmp)
    copy(os.path.join(cwd, 'mod_arrays.f90'), tmp)
    copy(os.path.join(cwd, 'test_arrays.f90'), tmp)
    os.mkdir(os.path.join(tmp, 'static'))
    os.mkdir(os.path.join(tmp, 'shared'))
    os.chdir(tmp)
    os.system('make')

    fort_mod = FortranModule('test_arrays', 'mod_arrays', path=tmp)
    fort_mod.fdef("""
      subroutine test_vector(vec)
        double precision, dimension(:) :: vec
      end subroutine

      subroutine test_array_2d(arr)
        double precision, dimension(:,:) :: arr
      end subroutine
    """)
    fort_mod.compile()

    # Recreate the module to check if it works independently now
    fort_mod = FortranModule('test_arrays', 'mod_arrays', path=tmp)
    fort_mod.load()
    return fort_mod


def test_vector(mod_arrays, refvec):
    """
    Allocate vector in numpy, apply Fortran routine
    """
    vec = np.ones(15)
    print(vec)
    mod_arrays.test_vector(vec)
    print(vec)
    np.testing.assert_almost_equal(vec, refvec)


def test_array_2d(mod_arrays, refarr):
    """
    Allocate 2D array in numpy, apply Fortran routine
    """
    arr = np.ones((m, n), order='F')  # correct array order
    mod_arrays.test_array_2d(arr)
    np.testing.assert_almost_equal(arr, refarr)


# def test_array_2d_wrongorder(mod_arrays):
#     """
#     Allocate 2D array in numpy in the wrong order, apply Fortran routine,
#     and check that the correct exception is thrown
#     """
#     arr = np.ones((m, n), order='C')  # incorrect array order
#     with pytest.raises(TypeError, match='needs Fortran order'):
#         mod_arrays.test_array_2d(arr)


def test_array_2d_multi(mod_arrays, refarr):
    """
    Allocate 2D array in numpy, apply Fortran routine first 10 times,
    then 1000 times. Check for memory leaks via tracemalloc
    """
    tracemalloc.start()
    snapshot1 = tracemalloc.take_snapshot()
    arr = np.ones((m, n), order='F')  # correct array order
    for _ in range(10):
        arr[:, :] = 1.0
        mod_arrays.test_array_2d(arr)
        np.testing.assert_almost_equal(arr, refarr)
    gc.collect()
    snapshot2 = tracemalloc.take_snapshot()
    stats = snapshot2.compare_to(snapshot1, 'filename')
    statsum = sum(stat.count_diff for stat in stats)

    snapshot1 = tracemalloc.take_snapshot()
    for _ in range(1000):
        arr[:, :] = 1.0
        mod_arrays.test_array_2d(arr)
        np.testing.assert_almost_equal(arr, refarr)
    gc.collect()
    snapshot2 = tracemalloc.take_snapshot()
    stats = snapshot2.compare_to(snapshot1, 'filename')
    # 100x more calls must not allocate meaningfully more than 10x calls did
    assert sum(stat.count_diff for stat in stats) <= statsum + 16
The Mech Touch – Chapter 3266 – Sacrifices

The Slug Rangers were hit especially hard! The subsequent blows against the Gauss Baron's layered defenses were so abrupt that Venerable Leiva hardly paid any attention to what had happened to the Dark Zephyr.

He suddenly remembered that he had an easy way of finding out whether a member of the Larkinson Clan was still alive.

Venerable Leiva and the Slug Rangers had already failed by letting the Dark Zephyr get this close. It was no surprise that she didn't have any adequate answers at her disposal. He then began to push his mech upwards and carve its radiant blades right into the ceiling!

"Venerable Tusa, please respond! What is the Dark Zephyr's state?! We have lost connection to your expert mech's data feeds and cannot assess its current status. Please respond!"

It looked as though it had evaded the enormous trap that had engulfed several of its illusionary duplicates!

"The Lemogo Distat is injured!"

That was a mistake. Another issue was that a lot of blades wore down rapidly if used in such a crude fashion.

Despite this quick response, a lot of dwarves were devastated. The Gauss Baron was one of the three most powerful guardians of the dwarven army fleet. Her powerful gauss cannons and her extremely valuable fire support were key to suppressing strong threats like the Amaranto, sieging hardy defensive ships like the Graveyard and wrecking plenty of important enemy mechs such as the Transcendent Punishers and the Everlasting Redemptions.

"Venerable Tusa, please respond!
What is the Dark Zephyr's condition?! We have lost connection to your expert mech's data feeds and cannot determine its current status. Please respond!"

"Venerable Leiva! The hostile expert mech hasn't retreated. It is carving its way into the deck below you! Pull back immediately!"

The Gauss Baron still had several countermeasures in reserve. That was the beauty of piloting a large and fat mech. There was so much space and power available that Venerable Leiva still had at least three emergency measures at her disposal that could repel any enemy mech that believed her machine was weak at close range.

The only cost was that it took a lot out of Venerable Tusa. His earlier exertions had already taxed his will, and he was depleting the remainder of his mental strength at an alarmingly high rate!

"Oh, hell…"

But various careful observers weren't entirely content with this major result. The Gauss Baron had undoubtedly self-destructed, but what had happened to the Dark Zephyr, which had been a stone's throw from its target?

The explosive trap that had devastated plenty of escort mechs had also dealt a lot of damage to the hull of the Lemogo Distat. The harsh rectangular lines revealed ample openings for the Dark Zephyr to carve through the hull with rapid, repeated attacks of its extremely sharp blades.

Though injured on her side, the damage to her capabilities was relatively limited. With all of her essential equipment and ship components still in working condition, the bruised yet unbroken dwarven fleet carrier spun around until her gaping wound was no longer exposed to the human enemies at the front.

Nitaa stepped forward and passed the Larkinson Mandate to him.
He let his armored gauntlet keep hold of the relic and tried to immerse himself in the Larkinson Network.

Without the time to move or turn her heavy mech around, she made the one decision that would still allow her to contribute to the battle.

The Dark Zephyr speedily rushed to the back of the Gauss Baron and cut the powerful but slow-turning cannons into junk before digging its blades through the vulnerable rear armor of the heavy artillery mech with only slightly greater resistance!

Ves grew more and more nervous as nothing came back. The Spirit of Bentheim should at least have been able to pick up some signs. Even from this distance and even with the heavy interference in the surrounding space, the Dark Zephyr should have been able to show signs of life… but only if it was functional enough.

Now, these design choices had come back to bite the Gauss Baron from behind, literally in this case! Many dwarven mechs, particularly those from the Slug Rangers, momentarily faltered.

Venerable Tusa smirked. In fact, that was indeed the case. As he leveraged the optimal mech components of his expert mech and channeled the power of Arnold, he sent every iteration of the Dark Zephyr through the enemy mech blockade in multiple directions.

“Hahaha! I’m never letting you in!
Come inside if you dare!”

From an outside perspective, a massive blast engulfed the side of the Lemogo Distat where the Gauss Baron’s bunker was located!
OPCFW_CODE
[Openmcl-devel] Updated Contrib
plkrueger at comcast.net
Tue May 14 12:33:14 PDT 2013

I finally checked in a completely new revision of my contrib:
.../ccl/cocoa-ide/krueger/InterfaceProjects/...
All code requires OSX 10.7 or higher and CCL 1.9 or higher.

Synopsis of features:

- UserInterfaceTutorial.pdf is an extensive description of how to construct Cocoa user interfaces without having to resort to Xcode or InterfaceBuilder. It makes heavy use of Cocoa's layout constraint functionality.
- CCL Cocoa Developer Tools Tutorial.pdf explains how to install and use application build/test code within the CCL IDE to construct apps that you can also run within the IDE for testing before creating stand-alone applications.
- Lisp-KVO Reference & Tutorial.pdf explains functionality that lets you make slots in standard Lisp classes be KVO compliant and supports binding between Objective-C view properties and those slots.
- LispController Reference.pdf describes how to use the lisp-controller class as a lisp-friendly controller to manage view objects like NSTableView and NSOutlineView.
- Support for running Lisp apps under the CCL IDE, including swapping of the main menu
- Tools for creating stand-alone Lisp apps
- Support for binding between Objective-C objects and Lisp slots
- Extensive Lisp/Objective-C data conversion support via a coerce-obj method
- Conversion of standard-instance objects to a format that can be archived to disk and restored correctly
- Initialize-instance :after methods for 20 common Objective-C classes that let you call make-instance for them using typical lisp keyword initialization syntax with all necessary data conversion done automatically for you. As a design principle, no default behavior is modified by these methods unless explicitly requested via a keyword argument.
These are self-documenting via a window that can be invoked from the CCL Tools menu which shows and lets you search class hierarchies and see permitted initialization keywords, acceptable keyword values, and allowed binding targets.
- Lisp interface for Apple's layout constraint functionality including the addition of several common layout idioms to make window design easier. See Appendix B in UserInterfaceTutorial.pdf for a complete description.
- Special purpose utility classes and functionality:
  - An N-dimensional sparse associative array
  - A Lisp attributed string stream class
  - Date manipulation and formatting utilities
  - Bundle interface code
  - NIB interface code
  - Lisp interface to Cocoa "undo" functionality
  - Lisp-document class which provides support for saving, loading, undo'ing, and printing documents defined with lisp data slots
  - Lisp-window-controller class which supports window "loading" using a lisp function rather than loading from a NIB file
  - Lisp interface to the Cocoa notification functionality
  - A thread-safe queue implementation
- Custom view classes and view-supporting classes:
  - combo-box source class
  - Support for acting as a data source for NSTableView and NSOutlineView objects
  - Menu and menuitem creation functions for both common and custom items
  - Lisp interface to NSOpenPanel and NSSavePanel
  - Lisp button class that lets buttons call lisp methods
  - An organized-box-view that arranges subviews as directed
  - A resizable-box-view that dynamically rearranges its content views into a reasonable array as the box size changes
  - A radio-button box view which calls lisp methods when buttons are selected
  - A scrolled text view
  - A scrolled text view which can act as a Lisp output stream
  - Labeled text field
  - A form-view which is an aligned set of text fields with labels

If you run into any problems, please contact me directly and I'll try to address them. Any suggestions or other comments are welcomed.
OPCFW_CODE
Hackintosh can't boot: all kexts assertion failed

Worked late on my Yosemite Zone Hackintosh build I've been using for a year now. I peacefully shut it down and went to sleep. The next morning as I woke up, my Hackintosh didn't want to boot up, so I booted in verbose mode (my bootloader is Chimera) to see what's going on. The system did a fsck at the beginning, returning "The volume appears to be OK", after which every single Apple kext prints out an "assertion failed" message which leaves my system hanging. My specs are as follows:

ASUS H61M-K ( https://www.asus.com/Motherboards/H61MK/specifications/ )
Intel Celeron G1620 ( https://ark.intel.com/products/71073/Intel-Celeron-Processor-G1620-2M-Cache-2_70-GHz )
AMD Radeon HD6570 ( http://www.amd.com/en-gb/products/graphics/desktop/6000/6570 )
12GB of DDR3 RAM at 1333MHz
500GB HDD as only internal storage device, partition table (GPT) as follows: EFI Partition (200MB), OSX Partition (100GB), Recovery HD (650MB), Windows 10 Partition (80GB), MSR, Data Partition (250GB FAT32), Ubuntu 16.04 Partition (50 GB)

Happens consistently on every reboot, so I can easily reproduce it. Things I've tried:

- the -f flag
- the kext-dev-mode=1 flag
- toggling GraphicsEnabler
- combinations of the three above
- trying to boot from GRUB (though this probably fails because my OSX is installed in BIOS mode even though it's a GPT disk, while Windows and Ubuntu are running in UEFI mode)
- deleting VoodooHDA.kext, as I've had minor troubles with it a while back

None of these helped at all. I do have access to single-user mode though. I've also noticed a strange line of output during verbose boot, something along the lines of BSD root: major 0, minor 7. Assuming this means disk0s7, does this mean my OSX is trying to boot off the wrong partition (the Ubuntu one) instead, explaining why it fails on all kexts (because they aren't there)? My OSX partition is located at disk0s2.
I feel like this has something to do with the issue. I'm just about ready to reinstall OSX entirely; I've already backed up all of my data and am ready to wipe the partition if needed, though it would be nice to see if the problem is fixable on its own. Anyone got ideas?

Theos Theopsis posted a topic in Hackintosh Mavericks

Hello everyone, I intend to buy an Acer Aspire AO756 with a Celeron 1007U Ivy Bridge CPU and install this distro. For the last three days and nights, I have been searching for info about the Intel HD Graphics, to find out whether or not it is fully compatible with OS X, but with no results. Some say that it runs an Intel HD 2500, which from what I read on other forums is compatible (at least with ML); some say that it runs an Intel HD (simple, just like that) and it is not going to work with QE/CI. On the specs of this CPU, they said the graphics card is a HD 2500 or is based on it, something like this... anyway, from what I've understood, it is not a HD 2000. So is QE/CI working or not for this specific graphics card? Because if it's not, I'm going to look for another netbook. Thank you!

LE: I've just realized that I have posted this topic in the wrong section. I only saw - Read this before you post - after I had posted, so my mistake. Please move this topic to the right section, if there is one, or delete it. Thank you and my apologies!

Hey, here are the details of my build. I have a Lenovo Z500 laptop. The config is as follows:

Intel i5-3230M 2.59GHz processor
Intel HD4000 + nVidia GT635M
6GB DDR3 RAM
1TB HDD
Atheros 9285 Wifi + Bluetooth

I had a lot of problems with the Mavericks App Store + Unibeast installation method, and also with the Niresh Mountain Lion version, due to the crappy EFI loader of my laptop. After a lot of work and experimentation I finally managed to get the setup working flawlessly.

What doesn't work:

Wifi - The notorious Atheros 9285! I tried almost 10+ different kexts but nothing works!
- Using an Asus N10 USB adapter now
Trackpad - it worked in the setup but stopped once it rebooted and booted from the drive (dunno y), but I really don't mind coz I always use a USB mouse.
nVidia graphics card - it's a known problem with the GT635M. But the Intel HD4000 works pretty fine, being the native graphics processor.

Here's what I did:

Bios Settings:
Intel Virtual Technology: Enabled
Graphic Device: UMA Graphic

1. Installed Niresh Mavericks from my USB with Win32DiskUtility
2. During installation, using the Customize option, just selected 'Enable Battery Percentage', leaving the rest of the settings at their defaults.
3. After installation, when I rebooted, I was getting the boot0 error. I installed the Chameleon bootloader and solved the issue.
4. As for the graphics, I was getting 64MB only. So I installed the kexts (AppleIntelFramebufferCapri.kext and AppleIntelSNBGraphicsFB.kext) using KextUtility. And without restarting, in the Chameleon Wizard, enabled the following options with values:
   a. GraphicsEnabler
   b. Intel Capri FB = 3
   c. Inject Intel ig = 3
5. Voila! After restart I had 1024MB of graphics!
6. I changed the SMBios to MacBookPro 9,2 as it's the closest to my system!

Edit: I forgot to mention I needed to enable USBLegacy=Yes to use the USB 2.0 ports.

My GeekBench Score (64-Bit):
Single Core: 2420
Multi-Core: 5123

Cheers!
OPCFW_CODE
comparepdf is used to compare two PDF files. The default comparison mode is text mode, where the text of each corresponding pair of pages is compared. As soon as a difference is detected the program terminates with a message (unless -v=0 is set) and an indicative return code.

The options are -ct or --compare=text (the default) for text mode comparisons, or -ca or --compare=appearance for visual comparisons (useful if diagrams or other images have changed); and -v=1 or --verbose=1 for reporting differences (and saying nothing for matching files): use -v=0 for no reporting or -v=2 for reporting both different and matching files. (For a GUI tool for showing the detailed differences between PDF files, see the home page.)

Home page: http://www.qtrac.eu/comparepdf.html

Compiling and Installing comparepdf

Prerequisites: A C++ compiler, the Qt 4 libraries (I test with Qt 4.7 and Qt 4.6. Earlier Qt's may work, although Qt 4.4 and 4.5 will at least need a compiler with tr1 support), and the Poppler libraries (at least version 0.14.0 and including Poppler's C++ and Qt 4 headers). Linux and BSD users should be able to get everything through their package management system---and some distros already include comparepdf so you don't even have to build it. Mac OS X users can get a compiler by installing Xcode; you'll need to get Qt and Poppler separately.

1. Unpack the archive file, comparepdf-XXX.tar.gz
2. Change directory to comparepdf-XXX
3. Run qmake  # On some systems, e.g., Fedora or Ubuntu, run qmake-qt4
   On Mac OS X use: qmake -spec macx-g++
4. Run make
5. Copy or soft-link the comparepdf executable to somewhere on your PATH

This program was written by Mark Summerfield. Copyright (c) 2011-12 Qtrac Ltd. All rights reserved. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License (in file gpl-2.0.txt) for more details.
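Because comparepdf signals its result through the return code, it is easy to use in scripts. The sketch below shows the exit-code pattern; the `comparepdf` shell function here is a stub standing in for the real binary (so the snippet is self-contained), and the file names are placeholders.

```shell
# Stub standing in for the real comparepdf binary: like the real tool,
# it returns 0 when the PDFs match and non-zero when they differ.
# Remove this function to run against the actual installed comparepdf.
comparepdf() { return 1; }

if comparepdf -ca old.pdf new.pdf; then
    result="match"
else
    result="differ"
fi
echo "PDFs $result"
```

With the stub removed, the same `if` works against the real executable; combining it with -v=0 suppresses the program's own message so the script relies purely on the return code.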
OPCFW_CODE
When I woke up this morning, I was pondering how to produce a custom Statement report for a client. They want a statement that shows outstanding receivables balances, but also shows original amounts from a SOP invoice, such as gross amount, markdown amount, and net amount. And they also want to see the total amount of payments applied to each invoice. And they want to see any 'adjustments' made to the SOP invoice by means of other adjusting invoices or returns. Oh, the outstanding balances also need to be in aging columns on the statement. And did I mention that they don't actually want transaction level detail? The data needs to be summarized by a special transaction grouping, where multiple invoices may be included in the group. Oh, and the data on the report needs to also be provided to the customer as a CSV file. Because the report and CSV file need to be automatically e-mailed to the customer. It makes business sense, but it's a mind bending exercise trying to figure out how to get all of that information onto a single report. There is a slightly complex custom SQL view. Then there is a Crystal Report with some wacky formulas and running totals and groups. Then I have to automatically generate the CSV files. Then there is Liaison Messenger EDD for distributing the reports and files via e-mail. It's a handful. After working on that for a while, I checked on the status of a test EFT transaction that a client sent from GP. Thankfully the payment went through okay, despite the several bugs in the GP 2010 CCD+ ACH file formats. Then at 10am I deployed some changes to a custom PO Export application for another customer. The trading partner that is receiving the PO files has some interesting limitations with their custom system, so the client and I are having to reverse engineer their system behavior to figure out how to send new POs, changes to PO lines, partial line quantity cancellations, and then full line cancellations. 
It looks like we may have to send PO line quantity updates net of any receipts that have occurred. So if they originally had quantity of 20 on the PO, changed the quantity to 15, but have already received 9, but then cancel the line, I may need to send a cancellation for quantity of 6. Make sense? Fun stuff. Then back to the custom Statement for 90 minutes. Then at noon I had a call with another client that is having two GP issues. I developed a moderately complex custom order import application that automatically creates SOP orders and purchase orders for inventory items, non-inventory items, and drop ship items, all simultaneously. It seems that SOP/POP linking doesn't work properly with these imported SOP orders and purchase orders in GP 2010 for some reason, so there may be an issue with eConnect 2010. To help save them time, I'm going to add the ability to automatically link certain SOP lines with the PO lines. Unfortunately, eConnect does not allow you to link a SOP line item with an existing PO or PO line, so that has to be developed from scratch. And they are also using another small customization that I wrote for them that isn't playing well with their Nodus CCA credit card processing module, so I need to make some changes there as well. Then another call to discuss some new requirements for the PO Export I just updated. Then back to the custom Statement report. And next up, I have a call with another client to assist with a GP 2010 SP3 upgrade, since an eConnect GL JE import that I developed is getting an error due to an SP1 bug. So what should have been a very simple integration deployment has turned into a GP SP upgrade before we can even resume testing of the integration. After that, I'll be back on the custom Statement report, even though I still have to work on a custom eConnect Project Accounting Misc Log import for another client. And then there is the occasional Dynamics GP Land blog post that I need to think about and write. 
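The "net of receipts" arithmetic from the PO example above is simple enough to sketch in a few lines. The function name is hypothetical and not part of the actual export application; it just captures the rule that the cancellation quantity sent to the trading partner is the still-open remainder of the line.

```python
def net_cancellation_qty(current_po_qty: int, received_qty: int) -> int:
    """Quantity to cancel: the open (not-yet-received) remainder of the PO line."""
    return max(current_po_qty - received_qty, 0)

# PO originally 20, changed to 15, 9 already received -> cancel the open 6
print(net_cancellation_qty(15, 9))  # prints 6
```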
This is all just an example to point out a few themes of consulting. 1. The breadth and depth of knowledge required to be a competent, full service Dynamics GP consultant is staggering. I think we take it for granted, but really, when you think about everything from debits and credits to SQL queries, to business process, to accounting controls, to all of the different modules, to product support, to project management, it requires a pretty huge pile of knowledge and skills to take care of your customers. And any one consultant typically only handles certain realms, such as application consultant vs. technical consultant. I used to feel a little self conscious about our billing rates, and definitely understand if a client gasps or growls at the hourly fee, but given the knowledge we're being asked to provide at a moment's notice (and the constant investment that requires), I don't think we're being too unreasonable. 2. Task switching is very expensive. I read a news blurb years ago about a study that tested people's ability to handle interruptions when they were performing a task that required focus and concentration. I believe that it found that on average, people required about 15 minutes to recover from an interruption and get back to the task. UPDATE: It was a NYTimes.com article from 2008, titled "Fighting a War Against Distraction". And my recollection was incorrect--apparently it can take up to 30 minutes to recover from a distraction. Great article, with many other points about focus and distraction in the workplace. I can definitely relate, as I often have a hard time getting back into that custom Statement report, remembering where I was at and what I needed to do next. And just the mental process of switching tasks feels like I have to clear my 'memory buffer' from the prior task and fill it back up with the new task at hand. Anyway, it's a busy day and my GP upgrade call is starting, so that's all my brain can handle for now. 
Steve Endow is a Dynamics GP Certified Trainer and Dynamics GP Certified IT Professional in Los Angeles. He is also the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.
OPCFW_CODE
Help on calculating credit card interest

Below is a summary of my current statement, Apr 21 2019 - May 2019: The APR is 25.24%. I had a promotional 0% APR till Apr 20. [google drive link] I have tried my best to calculate the interest mentioned but I simply cannot get it right. I know the balance subject to the interest rate was $1217.95. Can someone help me figure out how this $1217.95 was calculated OR how the interest summed up to $25.26 based on the daily transactions.

How did you arrive at the amount $1217.95 as the balance subject to interest?

@perennial_noob: It's probably printed on his statement, "Average Daily Balance subject to Interest"

Now that your 0% is done, 25% APR is pretty high by today's standards unless you have low credit scores (deemed higher risk) or made a late payment, exceeded your credit limit, etc. Pay off the balance ASAP or get a better card if your credit will allow that.

From your question it seems like you are attempting to calculate the total interest based on one cut-off amount, whereas credit cards calculate the interest on a daily basis. They arrive at the DPR (Daily Periodic Rate) by dividing your Annual Percentage Rate (APR) by either 360 or 365. So in your case it is 0.070% or 0.069%, depending on how your card company calculated it. This is used to find the interest on the previous unpaid balance. Remember, also, that there is a cutoff (usually mentioned by the company, for example, midnight of a date at Eastern Time). So it may look like you paid on a given day (based on your transaction time in your time zone) but it reflects as a payment on the following day if it is after the deadline specified. For example, if at the end of the first day after your 0% APR expired your balance is $1000, then assuming 365, your interest for that day is going to be $0.69. If you maintain this same balance until the next billing cycle, the interest you'll end up paying will be (assuming 30 days in that month) $20.7 (and odd cents).
If on the 10th day you spent another $1000 then your interest for the first 10 days is the same but your interest for the last 20 days each day will be $1.38 and your total interest will be $34.5 So on a day to day basis you accumulate interest which gets summed up on the day the statement is finalized. That is how they arrived at $25.XX In the same excel sheet you could create another column that calculates the DPR and add it up. You may be off but that again depends on when the cutoff starts and ends. To calculate your average daily balance, you have to calculate the balance on your account for each day in the billing period. Then add all these up and divide by the number of days in the billing period. Bear in mind that if you don't have any changes to your balance one day, that day still "counts". The balance for that day will be the same as the balance for the previous day. So in your example above, your balance for April 17 will be the same as your balance for April 16, April 21 is the same as April 20, etc. You also have to know exactly which days are included in the billing period. This should be printed on your statement. A credit card bill is typically for about 30 days, sometimes a day or two more or less depending on how they do their schedule, so you should have about 30 numbers to average together.
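The daily-accrual arithmetic described above can be checked numerically. This sketch assumes a 365-day divisor and a 30-day cycle (both assumptions; the exact figure depends on the issuer's day-count convention, cycle length, and rounding), using the average daily balance from the statement:

```python
apr = 0.2524                 # 25.24% APR
avg_daily_balance = 1217.95  # "balance subject to interest" from the statement
days_in_cycle = 30           # assumed cycle length

daily_rate = apr / 365       # daily periodic rate, ~0.069%
interest = avg_daily_balance * daily_rate * days_in_cycle
print(f"{interest:.2f}")     # ~25.27, within rounding of the statement's $25.26
```

The small gap versus the statement's $25.26 is exactly the kind of day-count/rounding difference the answers above mention.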
STACK_EXCHANGE
Jim Manley wrote: When Oracle Java 7 for the Pi is reportedly released this Fall, it will include full GPU support, and that should make running applications like ImageJ significantly faster.

Is this actually true, Oracle is willing to do that amount of development to support a platform like Raspberry Pi? I'm quite amazed.

Oracle has been working on this pretty much since demos early in 2012, albeit apparently part-time by some internal champions of the Pi and ARM, at least until earlier this year. First-class support of the Pi has been on their roadmap for close to a year (announced at JavaOne in 2012, IIRC), although it has been under the umbrella of greatly-improved support for ARM in general. So, they're not doing this solely for the Pi - the benefits will apply equally to any ARM-Linux platform such as a Cubieboard, BeagleBone, etc. According to the roadmap, Java 8 (coming sometime in the first half of 2014, IIRC) will include full ARM-Linux (including Pi) support the day it's released to the public.

I know a lot of people hold a great deal of hate and discontent for Oracle, but one has to remember that it's a huge company that got that way in large part by acquiring a lot of outstanding smaller companies (and in some cases, not small at all, just smaller relative to Oracle's core database business, e.g., Sun, Siebel, BEA Systems, Hyperion, PeopleSoft, etc.). The people working on Java for ARM are, of course, former Sun employees, some of whom have been at work on Java since the very early days. One of the things Sun and Oracle had in common was their software supporting as many platforms as possible, which led to a lot of internal friction between the hardware and software folks within Sun, and that probably partly led to its acquisition by Oracle. So, this shouldn't be as much of a surprise as it might otherwise seem.
Oracle, and the Java division in particular, have a long and deep history of association with educational use, including very liberal licensing in that sector, generally for free for educators and students, in particular, and even pre-investment small startups. The sister of a former coworker of mine was (and may still be) the head of licensing at Oracle and they have offered a wide range of licensing options even to corporate customers when they've been asked. This has been useful where cash has been strapped, e.g., growing startups demonstrating promise of strong future profits. Sometimes going with Oracle exclusively is part of the deal, in other cases they've accepted preferred stock and/or warrants in exchange for licensing breaks, etc. As the MBAs like to say, "Everything is negotiable." Having said all of that, I haven't heard from the folks working on the ARM-Linux ports since earlier this year and things may have changed, but they haven't announced anything of which I'm aware. They do monitor this forum at least occasionally, and I hope we hear from them sooner, rather than later, that Java 7 on the Pi will, in fact, be here Real Soon Now. EDIT: I forgot to mention that there is a developer preview of JDK 8 that runs very well on Raspbian (full armhf implementation last updated on July 24, 2013) that's available at: You will want the Linux ARMv6/7 VFP, HardFP ABI gzipped tarball that becomes accessible when you agree to the licensing terms. The JavaFX Demos and Samples Downloads are available in zipped form below the JDK downloads on the same page. There are some useful notes toward the bottom of the page at: https://wiki.openjdk.java.net/display/O ... spberry+Pi
OPCFW_CODE
Combining Probability Density Functions

I am trying to predict the outcome of a random variable x, which is a real-valued number. In some cases I can observe another variable y1, which should approximate x. I model y1 as a Gaussian distribution with mean of 0 and an empirically estimated standard deviation, and use the probability density function of that Gaussian to predict x. In some cases I also have a second observation y2, which I can model similarly as a Gaussian. What is the appropriate way to combine y1 and y2 into an estimate of x? Should I add the distributions and model x as a mixture of Gaussians, or should I multiply them? Or, should I try both and pick the answer that maximizes the probability of sampling an independent set of data?

What do you mean by "In some cases I can observe ..."? Does that mean that neither y1 nor y2 can be observed all the time (i.e. for all the x)?

@James And what do you mean when you call y2 a "prediction"? Isn't it functioning here as a predictor?

@steffen - that's correct, sometimes I have estimates of x independent of any y's.

Do you have observed values of x at least sometimes matched with observed values of y1 and/or y2?

@Firefeather - Yes I do. I'd like to know what's strictly right versus what's pragmatic to do - maybe modeling these as Gaussians isn't right, so adding or multiplying is the theoretically right answer but not practical.

@James, why wouldn't a regression model work, then?

I can tell you what I would do as a machine learner ;):

1. Create two models $M_1, M_2$ for x using $y_1, y_2$ respectively
2. A prediction for x is calculated as the average of the predictions of $M_1$ and $M_2$

A "model" can be either a Gaussian distribution (as described, if you know that x (a) has a Gaussian distribution and (b) that the function from x to y is nearly the identity) or anything else, e.g. a simple linear regression model (if you are not sure whether there are additional factors).
BUT: If x is indeed a bimodal distribution formed from the two Gaussians y, then the suggested approach would not make any sense. In this case I'd try a more generic approach, i.e. EM, to see whether one or two Gaussians are more appropriate.

It is quite hard to give a general answer to this question, since it is not exactly clear what determines whether y1 or y2 can be observed. It could be that for both y, x is missing completely at random (in this case y1 and y2 were just different random samples), but on the other hand it could be that for a certain fraction of x only y1 can be observed, but not y2, and vice versa for another fraction of x.
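On the multiply-vs-mix question: when y1 and y2 are treated as independent noisy observations of a single underlying x, multiplying the two Gaussian likelihoods yields the standard precision-weighted (inverse-variance) fusion, whereas adding (mixing) is only appropriate when x is genuinely bimodal. A minimal sketch of the product rule, under the independence assumption:

```python
def fuse_gaussians(m1, s1, m2, s2):
    """Combine two independent Gaussian observations of the same quantity.

    Returns the mean and standard deviation of the (renormalized) product
    density, i.e. inverse-variance weighting. Only appropriate when both
    observations really measure the same underlying x.
    """
    w1, w2 = 1.0 / s1**2, 1.0 / s2**2   # precisions
    var = 1.0 / (w1 + w2)               # fused variance is always smaller
    mean = (w1 * m1 + w2 * m2) * var    # precision-weighted mean
    return mean, var**0.5

# Equal uncertainties -> plain average; unequal -> pulled toward the sharper one
print(fuse_gaussians(0.0, 1.0, 2.0, 1.0))   # fused mean is 1.0
print(fuse_gaussians(0.0, 1.0, 2.0, 0.5))
```

Note this matches the averaging suggestion above in the equal-variance case, but weights the more confident observation more heavily otherwise.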
STACK_EXCHANGE
For Amazon S3 request rates, what's the difference between prefixes and nested folders? How many prefixes can I have in an S3 bucket? Last updated: 2021-07-23 For Amazon Simple Storage Service (Amazon S3) request rates, what's the difference between prefixes and nested folders? How many prefixes can I have in an S3 bucket? A prefix is the complete path in front of the object name, which includes the bucket name. For example, if an object (123.txt) is stored as BucketName/Project/WordFiles/123.txt, the prefix is “BucketName/Project/WordFiles/”. If the 123.txt file is saved in a bucket without a specified path, the prefix value is "BucketName/". A partitioned prefix in a bucket can support 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second. There is no limit to the number of prefixes you can have in a bucket. Note: In Amazon S3, there are no partitions for keys or objects. Partitions exist only at the prefix level, and not at the object level. For more information about using prefixes in Amazon S3, see Organizing objects using prefixes. A folder is the value between the two "/" characters. For example, if a file is stored as BucketName/Project/WordFiles/123.txt, the file path indicates that there is a folder ("Project") and subfolder ("WordFiles"). Both "Project" and "WordFiles" are considered to be folders. If the 123.txt file is saved in a bucket without a specified path, then no folders are used to store the file. In Amazon S3, folders are used to group objects and organize files. Unlike a traditional file system, Amazon S3 doesn't use hierarchy to organize its objects and files. For the sake of organizational simplicity, Amazon S3 console supports the folder concept as a means of grouping objects. Note: The folder structure might not indicate any partitioned prefixes that support request rates. Difference between prefixes and folders The difference between a prefix and a folder is the significance of the "/" character. 
For folders, the "/" character signifies a subfolder or object name. For prefixes, "/" is just another character; it does not indicate a partition placement. Additionally, you can create a prefix programmatically, using either the AWS Command Line Interface (AWS CLI) or the AWS SDKs. When you create a prefix using these methods, Amazon S3 doesn't treat the prefix as an object, nor does it hold any size. For more information about the difference between folders and prefixes, see Organizing objects in the Amazon S3 console using folders.
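Since the prefix and folder definitions above are pure string conventions, a short sketch can make them concrete. The helper below is illustrative only (not part of any AWS SDK) and follows the article's convention of including the bucket name in the prefix:

```python
def prefix_and_folders(bucket: str, key: str):
    """Derive the prefix (including the bucket name, as defined above)
    and the folder components for an S3 object key."""
    parts = key.split("/")
    folders = parts[:-1]        # everything before the object name
    prefix = bucket + "/" + "/".join(folders)
    if folders:
        prefix += "/"
    return prefix, folders

print(prefix_and_folders("BucketName", "Project/WordFiles/123.txt"))
# ('BucketName/Project/WordFiles/', ['Project', 'WordFiles'])
print(prefix_and_folders("BucketName", "123.txt"))
# ('BucketName/', [])
```

The two calls reproduce the article's examples: a key under Project/WordFiles/ and a key saved with no path at all.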
OPCFW_CODE
Firebird 2.5 ‘DEFINE GENERATOR failed’ seemingly due to reaching database generator limit, but actual amount nowhere near that limit I’m using Firebird 2.5 with FlameRobin and ran into a strange issue yesterday when creating a simple sequence / generator with the following SQL: CREATE GENERATOR MY_GEN_NAME_HERE; This gave the following error message: Error: *** IBPP::SQLException *** Context: Statement::Execute( CREATE GENERATOR MY_GEN_NAME_HERE) Message: isc_dsql_execute2 failed SQL Message : -607 This operation is not defined for system tables. Engine Code : 335544351 Engine Message : unsuccessful metadata update DEFINE GENERATOR failed arithmetic exception, numeric overflow, or string truncation numeric value is out of range At trigger 'RDB$TRIGGER_6' According to the Firebird FAQ this means that the maximum number of generators in the database has been reached. The database only contains ~250 actual generators however, and according to the manual there should be 32767 available. The FAQ suggests that a backup and restore will fix the issue, and this did indeed work, but ideally I’d like to understand why it happened so I can prevent it next time. I’m aware that even failed generator creations can increment the counter, so I believe this must be the problem. It’s highly unlikely to be ‘manual’ failed generator creation statements as the database is not in production use yet, and there are only two of us working with it for development. I think it must be something attempting to create generators programmatically therefore, although nothing we've written should be doing this as far as I can see. I can’t rule out the industry ERP system we’re using with the database, and we have raised it with the supplier, but I’d be highly surprised if it’s that either. Has anyone run into this issue before, is there anything else which can affect the generator counter? Are you regularly creating and dropping generators in your database? I have updated my answer. 
The behaviour suggests that you might have been using an ODS 11.1 (Firebird 2.1) or earlier database under Firebird 2.5. A sequence (generator) has a 'slot' on the generator data page(s) that stores its current value. This slot number (RDB$GENERATOR_ID) is assigned when the generator is created (using an internal sequence). When you drop a sequence, its slot is not freed: the slot numbers only increase, until the maximum number of slots have been assigned (and possibly dropped). In Firebird 2.1 and earlier, this would be the end: having created (and dropped) 32767 sequences would mean you could no longer create sequences. So, if your application is creating (and dropping) a lot of sequences, you will eventually run out of slots, even if you only have 250 'live' sequences. The only way to reclaim those slots is by backing up and restoring the database. During the restore, the sequences are created anew (with the start value from the backup) and get a new slot assigned. These slots are assigned contiguously, so previously existing gaps disappear, and you will then have unassigned slots available. However, this was changed in Firebird 2.5 with CORE-1544: Firebird will now automatically recycle unused slots. This change only works with ODS 11.2 or higher databases (ODS = On-Disk Structure). ODS 11.2 is the on-disk structure for databases created with Firebird 2.5; Firebird 2.5 can also read earlier on-disk structures. If you get this error, then probably your database is (was) still ODS 11.1 (the Firebird 2.1 on-disk structure) or earlier. Upgrading the ODS of a database is a matter of backing up and restoring the database. Given you already did this, I assume your database is now ODS 11.2, and the error should no longer occur (unless you actually have 32767 sequences in your database). Interesting. Does this also affect Firebird 3.0? @pilcrow I'm not aware that this changed, so yes. @pilcrow Actually, after doing some testing, I might be wrong about the details.
The slots do seem to be recycled (even in Firebird 2.5). I will need to do some more investigation on this. @pilcrow With some testing, this seems to occur with ODS 11.1 (FB 2.1) and earlier databases, but no longer with ODS 11.2 (FB 2.5) and higher. It should not affect FB3, because FB3 is intentionally incompatible with ODS 11.1 or any other ODS below 12.0. @pilcrow Thanks @Arioch ‘The. That is evident in the updated answer, but it was not clear in the original answer before Mark Rotteveel investigated thoroughly.
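As a quick sanity check of whether dropped sequences have consumed slots, the assigned slot numbers can be inspected directly in the RDB$GENERATORS system table (this query is a sketch; column naming is per the Firebird system tables, but exact results depend on the Firebird version and ODS):

```sql
-- Compare the number of live generators with the highest assigned slot.
-- A highest_slot far above live_generators suggests slots were consumed
-- by sequences that were created and later dropped.
SELECT COUNT(*)              AS live_generators,
       MAX(RDB$GENERATOR_ID) AS highest_slot
FROM RDB$GENERATORS;
```

If `highest_slot` is close to 32767 while `live_generators` is small, the slot counter has been exhausted by create-and-drop churn, which matches the failure described in the question.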
STACK_EXCHANGE
An invalid slug error means that the parameter following check50 or submit50 is incorrect. Go back to the problem spec and see what it's supposed to look like; I'd guess a missing slash or something. If this answers your question, please click on the check mark to accept. Let's keep up on forum maintenance. ;-) You're correct that the submission process changed in late July. All of your old submissions can be seen here: http://legacy.cs50.me and new submissions here: http://submit.cs50.io Both groups of submissions are used for grading, so you should see the progress here: http://cs50.me/cs50w Are you using your actual GitHub password or your access token's code? If you have enabled two-factor authentication with GitHub and you haven't created an access token, you will need to create one for CS50, as GitHub no longer allows submit50 to connect using your regular sign-in credentials. You can follow this link to learn how to do this: Creating a ... In the course IDE there are several useful tools pre-installed for students, as you probably already know, so it is more than recommended to use the CS50 IDE; but if you still want to use your Linux subsystem, I think it is possible to install these tools. I hope you can handle Linux well; here is a link where you can find more information: I noticed this is a duplicate of a question that you have already posted here, but I can't flag it as a duplicate, so I'll repost my answer here in case anyone chances upon this one instead: This is due to the new automatic GitHub authentication introduced in VSCode version 1.45. The temporary fix for this is to disable Git: GitHub Authentication in your VSCode ... You've hard-coded the 8 into your loop regardless of the actual height requested. Also, once you submit, you can go to your submit.cs50.io page and, on your submission, click the check50 and style50 buttons to see the tests that were run. The new submission system went into effect a couple of days ago (July 29, 2019).
You can submit your final project directly on GitHub. You are only required to submit a README.md file describing your project, so the easiest way is to do that directly on github.com. First, go to http://cs50.me/cs50x and make sure you are set up for the CS50x course in the new ... Given that check50 and submit50 are installed on the host machine, the CLI tools can be accessed via CLion's Embedded Local Terminal. The official documentation for cli50 and submit50 will help you with the installation.
OPCFW_CODE
Service case suggestion with Customer Decision Hub

The real value from Big Data and analytics comes when every customer conversation delivers exactly the right message, the right offer, and the right level of service, to both give the customer a great experience and maximize the customer's value to the organization. With Pega's AI-powered Next Best Action, business experts develop decision strategies that combine predictive analytics, adaptive analytics, and traditional business rules. Pega Customer Service™ can leverage Pega Customer Decision Hub™ (CDH) to receive next-best-action suggestions for both offers and service cases. This allows the business to determine when cases should be recommended to customers. Next best actions are displayed in the Interaction Portal during an interaction with a customer. The next best action section displays the results contained in the D_NBAContainer data page. The D_NBAContainer data page uses an activity named GetNBAContainerResults to request the next best actions from the CDH and then processes the results.

Pega Customer Service configuration

Configuration setup can be broken down into two parts: configuration needed for customer service and configuration needed by CDH. The first configuration you need to update is in App Studio. In the behavior settings for your application, you need to enable Customer Decision Hub and specify the URL of the CDH server. If this is not selected, then the Next best action section is populated only with actions that are the result of using Intent When rules. In Dev Studio, there are a few things that you can configure. The GetNBAContainerResults activity is responsible for querying the CDH and then processing the results. You should not need to update the activity, but you may want to update the containers, map additional context data used by CDH, or map actions returned from CDH to case types. You can perform all these options from Dev Studio on the Customer Decision Hub configuration page.
In the following example, you can see that the Interaction Goal is one of the many pieces of information that is sent to the CDH as part of the request. You may want to send additional information to CDH so that CDH can evaluate the current context and suggest an appropriate action. A data transform is used to set the context; you can see this data transform by clicking the Map Additional Interaction Context link on the Customer Decision Hub configuration page. Clicking the link opens the data transform, from which you can make any changes you need and save the updated data transform to a new ruleset. Another configuration option: you may have various containers running in CDH and want to access them in Pega Customer Service. Out of the box, customer service is configured to use the Next Best Actions container, which retrieves all actions from CDH. If you want to change that behavior, add your container to the map value list. You access the list from the Customer Decision Hub configuration page by clicking the Update Containers link. You should then see the following map value pair. The most common configuration change is to map actions received from the CDH to a case type. Here, you update a decision table that maps the incoming ActionID from CDH to a case type in Pega Customer Service. Each value in the Return column is mapped to a result that defines the class name for the case type, among other values used by customer service.

Customer Decision Hub configuration

CDH configuration is typically handled by a decisioning architect. To display the next best actions within Pega Customer Service, you must configure the Next-best-action widget. For more information on how to configure the Next best action widget, see Configuring Pega Next-Best-Action Designer for Pega Customer Service. After you have configured the widget, you create actions that represent the service cases you want recommended in Pega Customer Service.
When you create an action, the short description is used as the label for the action. The label is the value a customer service agent sees in the Interaction Portal as the name of the suggested action. The ActionID is automatically generated from the short description, and it must match the value in the decision table in Pega Customer Service. In the following example, the short description is Open New Account; this is the label that a customer service representative sees in the Interaction Portal in the Next-best-action widget. The ActionID is OpenNewAccount; this is the value used in the decision table in Pega Customer Service.
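Pega implements the mapping above as a decision table rule rather than code, but the lookup it performs can be sketched in plain Python. The case-type class names and the second ActionID below are invented for illustration; only OpenNewAccount comes from the example in the text.

```python
# Hypothetical sketch of the decision-table lookup described above:
# CDH returns an ActionID, and Pega Customer Service maps it to the
# class name of the case type to suggest. Names are illustrative only.
ACTION_TO_CASE_TYPE = {
    "OpenNewAccount": "MyCo-CS-Work-OpenAccount",    # invented class name
    "DisputeTransaction": "MyCo-CS-Work-Dispute",    # invented ActionID/class
}

def case_type_for(action_id):
    """Return the mapped case-type class, or None if the action is unmapped."""
    return ACTION_TO_CASE_TYPE.get(action_id)

print(case_type_for("OpenNewAccount"))  # -> MyCo-CS-Work-OpenAccount
print(case_type_for("UnknownAction"))   # -> None
```

An unmapped ActionID simply produces no case suggestion, which mirrors why the ActionID generated from the short description must match the decision table exactly.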
OPCFW_CODE
Pocket Hunting Dimension – Chapter 1254: One Month Later, Pitiful Life

They had been very lucky; on just the first day they had obtained a lot of loot. Over the past month, they had gone into the Pocket Hunting Dimension whenever they could. It was a bone tiger that was thousands of meters long. It lifted its claw and felt the life of those revolting insects vanish. In that case, hunting would be very practical. "Let's check out the drops," Lu Ze said. As a result, they had been to several locations. Lu Ze and the girls awakened. This map was helpful to her as well. Alice grinned. "They're weaker, after all." "Why is the sky dark?" At this point, another Life Healing Divine Art formed in his hand and instantly covered the gray pig. "That huge bone claw looks like a death monster." Soon the pain dissipated, and Lu Ze said, "Okay, split things up, cultivate." They would be able to learn the Death God Art. The beasts either had the Poison God Art, Wood God Art, or Life God Art. They didn't expect that a domain-level Life God Art had such a powerful countering effect against the Death God Art. "Probably." Lu Ze flew over. A few hours later, Lu Ze and the girls were secretly watching four black goat-like beasts. They had sharp claws. Two of them were level-2 cosmic realm states and two were level-1 cosmic realm states. They were perfect for them. However, their abilities were the same. Lu Ze, Nangong Jing, and Qiuyue Hesha didn't dare to cultivate. Lu Li took a deep breath. "… I didn't expect to encounter such a powerful beast!"
OPCFW_CODE
Why use two 0.1 µF capacitors in parallel?

Below is the schematic of the Pololu A4988 driver board. There are three capacitors on the motor power input. I can understand why the engineers decided to use a separate 4.7 µF in a larger package (1206, while 0.1 µF are either 0805 or 0402): physically small capacitors have lower ESR, and they effectively filter out higher frequencies (correct me if I'm wrong). But what is the reason to use two 0.1 µF capacitors in parallel? Can they be replaced by a single 0.22 µF in order to save PCB space and cost? Image of the board; it can be clearly seen that all capacitors are ceramic:

Typically a BOM optimization. Is that C6 0.22 uF an electrolytic? Otherwise it’s a mystery. @winny no, all capacitors are ceramic. I've added a photo of the board.

tl;dr Because of a badly drawn schematic. They didn't follow our guidelines for drawing schematics. C2 is used to decouple pin 28 and C3 is used to decouple pin 22 (or vice versa). See also the Allegro data sheet for the A4988 driver; it shows the capacitors as C7 and C9.

+1 for this answer. But excuse me, because I'm new here, but it feels like an addendum to the answer is that the schematic the OP posted is poorly laid out. I would have never guessed that's what they meant, but I can see how it's technically correct. Given that C2 and C3 are meant to decouple pins, and you cannot derive their function by just looking at the schematic, is it fair to say the schematic is poorly laid out? I see how it's technically correct, but given just their schematic I would have no idea where they meant for C2 and C3 to be placed on the board. Well, when I first looked, it was apparent to me that C2 and C3 are very likely to decouple pins 22 and 28. I guess experience of laying out PCBs for microcontrollers is an important thing; on many circuits I’ve come across, all the “chip” decouplers have been drawn in a separate block so as not to clutter the more useful IO connectivity of the diagram @foreverska.
@foreverska The OP's diagram is a schematic. It shows a logical layout of the circuit, not the physical layout. Since C1, C2, C3, and C6 are "obviously" power supply smoothing, they are drawn so they don't clutter up anything else. The drawing on the left of Fig 5 is the physical layout of a circuit board, and the circuit on the right is arranged to make it easy to compare with the board layout. Compare the way the outputs OUT1A, 1B, 2A, 2B are shown on the different layouts, for example. The OP's diagram shows they all go to the same connector; Fig 5 shows their physical position around the chip. @alephzero I get that schematics show logical layout. It would be a stretch to call me a hobby EE, but when I've laid out schematics I too put decoupling caps in their own part of the sheet; I generally flag the pin, though, and throw that flag and 3.3 on one side of the cap and gnd on the other. It takes up more space but it's more descriptive. Interesting to know this is an artistic difference. @foreverska It's fairly typical to have power supply decoupling capacitors grouped into one location in the schematics. The fact that they are grouped there indicates they are power supply decoupling caps. In large designs, it's possible to have an entire page of the schematics, or even more than one page, devoted entirely to power supply decoupling capacitors. While the schematics here could have a note indicating the purpose of these caps, personally, I found the fact that they are decoupling caps to be instantly recognizable from multiple indicators. @alephzero: that kind of justification / explanation of different styles / priorities in drawing a schematic would make a good answer, or could be added to this answer. That's exactly the kind of thing that's well known to people in the field but that you don't get from just knowing physics and how to read a schematic, and that clears up the mystery of why you'd draw it that way. While I respect the answers so far, I'm with Peter Cordes.
I'm familiar with decoupling caps and the practice of putting them in their own part of the sheet. But I think it should be recognized that the purpose of C2 and C3 is not apparent from the schematic alone. The answer even includes a schematic which puts the caps "near" their pins without explicitly mentioning the "artistic" difference. If the schematic consistently showed the logical layout and none of the physical layout, then the clockwise indicator label on the pot is certainly an exception to such a rule. The bottom line, dudes: always refer to what the data sheet might say or recommend. Note that decoupling capacitors are placed close to chip power pins to minimize the antenna effect of the power traces, both for noise generated within the chip and emitted out, and for received noise potentially injected in, as well as noise propagation to/from other devices connected to the same trace. It is normal to use one decoupling capacitor for each power pin. Now you could place these capacitors all over the place in the schematic, but it is easier (and has become a sort of convention) to place them all together somewhere in a corner of the schematic where they do not interfere with the rest of the circuit. Here is another example I have cut and pasted a bit: At the top you see three capacitors connected to 3V3 next to four capacitors connected to VDD_core. Below that you see the CPU, which has three 3V3 input pins (shown at the top), and I have pasted in the bottom of the chip where you find another four input pins which need to be connected to VDD_core. Thus each power input pin is matched up with one decoupling capacitor. In this case the CPU has an internal supply (a linear regulator) which provides the VDD_core power: the VDDOUT pin. Because it is the output of an LDO, it has a separate, bigger, 4.7 µF decoupling capacitor.
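Whether two 0.1 µF parts could be replaced by one 0.22 µF hinges on more than total capacitance, but a quick ideal-capacitor check makes the point: capacitance-wise the two options are nearly interchangeable, which shows the real reason for two parts is placement (one per supply pin), not the total value. The 1 MHz test frequency below is an arbitrary assumption for illustration.

```python
import math

def cap_impedance(c_farads, f_hz):
    """Magnitude of an ideal capacitor's impedance: |Z| = 1 / (2*pi*f*C)."""
    return 1.0 / (2 * math.pi * f_hz * c_farads)

f = 1e6  # assumed 1 MHz noise frequency, purely illustrative

# Two 0.1 uF in parallel behave (ideally) as one 0.2 uF.
z_two_parallel = cap_impedance(2 * 0.1e-6, f)
z_single = cap_impedance(0.22e-6, f)

print(f"two 0.1 uF in parallel: {z_two_parallel:.3f} ohm")
print(f"single 0.22 uF:         {z_single:.3f} ohm")
```

The ideal impedances differ by only ~10%, so the single part would be fine if bulk capacitance were the goal; what this model deliberately omits is that each physical capacitor must sit next to its own power pin, and that paralleling real parts also halves their ESR and ESL.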
STACK_EXCHANGE
A few days ago Aaron Patterson wrote an interesting article about composition vs inheritance with Ruby. He says that when inheriting our classes directly from Ruby’s core objects such as Array, our public API for that object becomes too large and difficult to maintain. Consider a powerful object like String, which has 164 public methods: once our library is released, we have to maintain all that API surface. It isn’t worth the trouble, probably because we just wanted to pick a few methods from it. It’s better to compose an object that hides all the complexity derived from String, and to expose only the wanted behaviors. I was already aware of these issues, but that article was a reminder to fix my OSS projects. For this reason I refactored Lotus::Utils::LoadPaths. It used to inherit from Array (169 methods), but after breaking the inheritance structure, I discovered that I only needed 2 methods. However, there are some hidden corners that are worth sharing.

A characteristic that I want for LoadPaths is the ability to add paths to it. After the refactoring, for the sake of consistency, I decided to name this method after #push, and to mimic its behavior. The initial implementation of this method was:

    it 'returns self so multiple operations can be performed' do
      paths = Lotus::Utils::LoadPaths.new
      paths.push('..').
            push('../..')

      paths.must_include '..'
      paths.must_include '../..'
    end

    class Lotus::Utils::LoadPaths
      # ...
      def push(*paths)
        @paths.push(*paths)
      end
    end

When we use this Ruby method, the returning value is the array itself, because the language’s designers wanted to make chainable calls possible.
If we look at the implementation of our method, the implicit returning value was @paths (instead of self), so the subsequent invocations were directly manipulating the inner array. The test above was passing because arrays are referenced by their memory address, so the changes that happened on the outside (after the accidental escape) were also visible to the wrapping object (LoadPaths). Because our main goal is to encapsulate that object, we want to prevent situations like this.

    it 'returns self so multiple operations can be performed' do
      paths = Lotus::Utils::LoadPaths.new

      returning = paths.push('.')
      returning.must_be_same_as(paths)

      paths.push('..').
            push('../..')

      paths.must_include '.'
      paths.must_include '..'
      paths.must_include '../..'
    end

    class Lotus::Utils::LoadPaths
      # ...
      def push(*paths)
        @paths.push(*paths)
        self
      end
    end

Dup and Clone

LoadPaths is used by other Lotus libraries, such as Lotus::View. This framework can be “duplicated” with the goal of easing a microservices architecture, where a developer can define MyApp::Api::View as a “copy” of Lotus::View, so that the copies can independently coexist in the same Ruby process. In other words, the configurations of one “copy” shouldn’t be propagated to the others. When LoadPaths was inheriting from Array, a simple call to #dup was enough to get a fresh, decoupled copy of the same data. Now the object is duplicated, but not the variables that it encapsulates (@paths):

    paths1 = Lotus::Utils::LoadPaths.new
    paths2 = paths1.dup

    paths2.push '..'
    paths1.include?('..') # => true, which is an unwanted result

The reason for this failure is the same as the information escaping problem: we’re referencing the same array. Ruby has a special method callback that is designed for cases like this.

    class Lotus::Utils::LoadPaths
      # ...
      def initialize_copy(original)
        @paths = original.instance_variable_get(:@paths).dup
      end
    end

When paths1.dup is called, the @paths instance variable will also be duplicated, and we can safely change paths2 without affecting paths1.
A similar problem arises with freezing: Lotus::View needs to freeze its configurations after the application is loaded. This immutability prevents accidental changes that may lead to software defects. When we try to alter the state of a frozen object, Ruby raises a RuntimeError, but this wasn’t the case for LoadPaths:

    paths = Lotus::Utils::LoadPaths.new
    paths.freeze

    paths.frozen? # => true
    paths.push '.' # => It wasn't raising RuntimeError

This had an easy fix:

    class Lotus::Utils::LoadPaths
      # ...
      def freeze
        super
        @paths.freeze
      end
    end

Composition should be preferred over inheritance, but beware of the unexpected behaviors. I discovered these problems in a matter of minutes, because the client code of this object (Lotus::View) has some integration tests that assert all these features, without assuming anything about the underlying objects. For instance, it checks one by one all the attributes of a configuration after its duplication, without trusting that they can safely duplicate themselves. This double layered testing strategy is fundamental for me while building Lotus.
OPCFW_CODE
Hi all. I'm in need of an unusual IC: a multiplexer that behaves like a changeover switch in each channel. Let me explain: in an ordinary mux/demux, the values of the select lines cause one and only one channel to be connected to the input line at a given time. Meanwhile, all other channels are connected to nothing at all. What I need is for all channels that are not connected to the input line to be connected to ground, or to a second input line that I can connect to ground. Is there any such beast? I'm working on a prototype of a really cool tactile interface using Arduino, but I've come to realize that I can't pull it off without two dozen or so of these special MUXes (and I can't think of a way to get the behavior I need out of the ordinary 16-channel MUXes I have). Eternally grateful for any tips.

What type of multiplexer chips do you currently have?

I'm currently using the 16-Channel Analog/Digital Multiplexer/Demultiplexer, CD74HC4067, on a breakout board from SparkFun.

Can't you just put pull-down resistors on all the outputs? That way the outputs that are not connected will see ground through a small resistor instead.

Hi MikMo. I'd be very happy to find out I've overlooked something simple like that. Mind you, I don't have a lot of practical experience with electronics yet, but I don't think this will work for my particular setup. Here's why: the output lines are sensors, but only when they're connected to the input line (which is connected to the cathode by way of a connection to an Arduino input which effectively tells whether current is flowing through the input line). When the selected output is connected to the input line, the sensor can either be in contact with a grounded object or not, in the manner of a simple switch. When it does contact a grounded object, current flows from the cathode into ground and the Arduino registers a "touch".
Meanwhile, however (and this is the real trick), the rest of the sensors/output lines need to function as connections to ground which can be touched. Is that too much information? The reason I don't think the pull-down resistors would work is that when any given output line is selected, current would flow from the input line / cathode into ground through the resistor, causing a short circuit which would register falsely as a "touch". Thanks for the input (no pun intended). Let me know if I've misunderstood something -- it's entirely possible.

What you want is a 4053; this is three multiplexers that act as changeover switches. http://www.datasheetcatalog.org/datasheets/208/109138_DS.pdf With that you can build up any sort of switching arrangement you need.

Hi Grumpy_Mike. Many thanks for this suggestion! So this is basically three changeover switches in one, where each switch has one input, two outputs, and one control line to toggle between them. I can build my "magical mux" by using an ordinary 16-channel mux to control 16 of these switches. Unfortunately, this also increases the number of ICs I need by a factor of 6. For 700 sensors, that's 282 ICs (48 muxes plus 234 4053's). This is the closest I've gotten to a solution, though. Thanks again for the tip.

700 sensors :o If you have 700 sensors you are going to need a lot of chips no matter what you do. What sort of sensors are these? Is there any other way you could read them in?

700 sensors, indeed! Quite an undertaking for my first DIY electronics project, I know. I would settle for 256 sensors if I had to (in which case I would need only a two-tier instead of a three-tier hierarchy of muxes), but that's about the lower limit. It's a bit like charlieplexing a display, I guess, in that I have a 2D grid of sensors which I plan to sample one after another. I could do 256 sensors with just 17 "magic muxes" and half a dozen resistors and diodes.
As it is, I see a lot of "electric glue" in my future :o

To be of use we need:
- The datasheet of the sensor
- The datasheet of the chips you want to use
- A schematic showing at least how you think you want it wired
- Your code

Hi Richard, mrmeval. Thanks for taking an interest. Here's a diagram which may make things a little clearer. Disclaimer: I'm a novice at circuit schematics, and I've just learned to use EAGLE! But hopefully you can see what's going on. Four digital output pins of the Arduino control the multiplexer, while an analog input pin (actually I'm using a digital one for now, and it works fine) reads from the mux by way of a pull-up configuration. Diodes to protect the Arduino are omitted. Now, the tricky part: each pair of outputs from the mux is a potential switch -- that is, any output line may "touch" any other output line in a detectable way: each output must be connected to the input line while it is being sampled, and connected to ground when it is not (if unselected outputs float, there is no potential for a switch). At any given moment, current may flow from the cathode through the selected output and into ground via one or more of the unselected outputs. If the selected output does not "touch" any of the other outputs, no current flows. I know this is pretty weird, but therein lies the challenge! The "magic mux" is like an ordinary 16-channel mux except that all unselected outputs are grounded. I can build one using a 4067 and multiple 4053's (which Grumpy_Mike mentioned earlier), although the amount of circuitry involved is prohibitive. mrmeval, my Arduino code is here: http://bit.ly/9BwlBn As of now, the device registers a touch in any of the sensors (which are just the raw output lines) when the line touches an external, grounded object. However, the trick is to detect when the sensors are touching each other. Thanks for the suggestion, Richard: parallel muxes are a really good idea!
Just to get some more EAGLE practice, here’s what that might look like. Of course, the number of sampling operations grows with the square of the number of sensors, but based on what I’ve tried so far with single muxes, it seems likely that this approach would support 256 sensors at acceptable sampling rates. It might even scale up to my 700 sensors and beyond. I think I can live with twice the number of chips I had imagined. Thank you, noble forum members. I am wiser now, and closer to a great Arduino creation.

I think you are making things much more complicated than you need to here. From the sound of it you don’t need analogue multiplexers at all, but digital multiplexers / data selectors. Here an unselected output is normally a logic 1, but you can get types that revert to a logic 0 when not selected. That first diagram with only one multiplexer will not do anything, as you can never detect any interaction between the two inputs. Have you looked at a keyboard scanning matrix? That looks like the sort of thing you want to do.

Grumpy_Mike, I'm looking into keyboard scanning matrices. Perhaps that will work for me. I wonder if the patent is an issue. You're right that there's no reason I need an analog multiplexer, although I didn't realize it would make a difference. Are you saying that unselected outputs of a digital mux don't "float", but have a definite value of logical 0? I think I need to do some reading to know whether that means I could differentiate a floating selected output from one connected to logical 0 through another (unselected) sensor. Richard, no, this isn't for a musical instrument, directly. I really only need one bit of information for each sensor. Meanwhile, I've tried sampling my sensors with parallel muxes, and it works, although timing does seem to be an issue. For 256 sensors (simulated by looping over my 16), I've only achieved a small fraction of the sampling rate I need. Perhaps some (more) clever optimization will come to the rescue.
Are you saying that unselected outputs of a digital mux don't "float", but have a definite value of logical 0?

Yes, or a value of 1, depending on the type of chip.

whether that means I could differentiate a floating selected output from one connected to logical 0 through another (unselected) sensor.

No, you can't differentiate between these cases; most of the time you don't need to. It really does depend on what the sensors are. It could be that shift registers are a better way to read in a lot of digital sensors.

I wonder if the patent is an issue.

I would doubt it; you are not going to make millions of these and sell them, are you?
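The parallel-mux sampling discussed in the thread (select each line as the driven line, then read every other line) can be sketched abstractly; this is not the poster's actual Arduino code, just a simulation of the scan logic, and it shows why the operation count grows with the square of the sensor count.

```python
# Hypothetical sketch of the pairwise scan described in the thread:
# for N sensors, select each line in turn as the "driven" line (mux 1)
# and read every other line (mux 2), so the number of sampling
# operations is N*(N-1), i.e. roughly N^2.

def scan_pairs(n_sensors, touching):
    """Return (detected contacts, number of sampling operations).

    `touching` is a set of frozenset pairs of sensor indices in contact.
    """
    contacts, ops = [], 0
    for drive in range(n_sensors):          # mux 1 selects the driven line
        for sense in range(n_sensors):      # mux 2 selects the sensed line
            if sense == drive:
                continue
            ops += 1
            if frozenset((drive, sense)) in touching:
                contacts.append((drive, sense))
    return contacts, ops

touching = {frozenset((0, 5))}              # pretend sensors 0 and 5 touch
contacts, ops = scan_pairs(16, touching)
print(contacts, ops)                        # 16 * 15 = 240 operations
```

Each contact is detected twice (once per direction), so restricting the inner loop to `sense > drive` would halve the operation count; that kind of optimization is exactly what the sampling-rate concern in the thread points toward.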
OPCFW_CODE
Young Master Damien's Pet, Chapter 514: The List (Part 2)

"I don't believe this kind of information would be freely available to all the witches out there who have access to the secret room," Penny replied to the question he had asked. She spent a few more seconds thinking about it, and her lips parted. "Oh…"

"What is it?"

"It wasn't from these books or in the church. I read the list about the black witches' ritual to unbind the magic when I was little. My mother was carrying the list with her, and she must have believed she had hidden it well until I found it while we were cleaning the house. It was the first time she wiped my memory." She hadn't understood it before, but now she did; she finally understood why her mother had panicked and gone so far as to wipe her very memory of how to read and write.

It was so that Penny would never be able to utter what she had read to her father. If she had left Penny without meddling with her thoughts, there was a chance she would have blurted it out to her father, who would have finally found out that his wife was not an ordinary human but a black witch.

She wished she could remember the rest of the details in the parchment, but she couldn't right now. Her mother was part of it, thought Penny to herself.

"Damien, do you think there was something in the teacup the other day? Like the memory spells my mother used to erase my memory?"

"It isn't anything new for witches to do it. My guess is that they are using spells and making people drink, so that it makes one forget or rids one's thoughts of them."

"Mhmm. I am glad you turned the tea into water."

"You knew," Penny smiled, getting a smile back in return. "How did you know?"

Damien let go of her and leaned forward to pick up the filled glass of water. He dropped in a red pill that began to sink, releasing red wisps, turning the colorless water bright red and in time thickening it. He took a sip from the glass.

"All these things that are happening, the deaths by massacre, the burning of witches, the killing of children… it is all to get back the power that was taken and discarded from the black witches. They want it so desperately that they won't stop until they have found a way to have it, and they are already on the path to it."

"If my mother had the parchment to bring back the black magic, it means she has been involved in all these events since the very beginning." Penny's hands clutched tightly together. "If only I could remember what else was written in that parchment."

"How large was the parchment?" he asked, and she went back to thinking as the piercing pain returned, hurting her head.

"My mother is involved too," she reminded him. "She has been on the front lines, so I don't think it is possible that it is her."

"You are right," Penny agreed. "Could you bring the books of Lady Isabelle up here so that I could look through them? It has been bothering me for a while, where I found out about the ritual."

After Damien had brought the books into the room, Penny went through them, flipping pages and trying to remember where it had been, but she couldn't find it. "Do you think it was somewhere in the cathedral back in Bonelake?" Damien asked after she had stopped searching the books.

Penny's lips pursed as she tried to remember, only to feel her thoughts going in closed circles. "The swamp with the little children," she said, and saw a dark expression settle on his face. Neither of them had forgotten how they had found the little children there, in the forbidden forest where no one went. She still remembered the pain in the parents' voices when they cried for their dead children.
OPCFW_CODE
This is a work in progress.

The actual location of the content that the Storage Manager will manage will be referred to here as $ROOT. $ROOT is installation-specific, so the Storage Manager should be configurable for the storage to be rooted at any path. For the IU instances, $ROOT must not reside on the same filesystem as the system root. IU VMs are limited to 40G for a system volume, and they cannot be grown. In these cases, a new filesystem under /srv/amp would be the preferred path.

For efficiency, the content managed by the Storage Manager must reside on a single POSIX-style filesystem. This provides several advantages:
- The filesystem can be exported to other local machines with the same semantics
- Rename operations are constant-time
- Link operations are constant-time
- Ownership and permissions are well-understood
- It avoids out-of-disk issues when moving large files around

The directory structure rooted at $ROOT breaks the storage into three areas: the data directory, the incoming directory, and the $WORKING hierarchy.

The data directory is the storage for all of the files managed by the Storage Manager. This would include master files, intermediate files, etc. The structure within the data directory is implementation-dependent, but for efficiency it would ideally include a directory hashing mechanism of some sort.

The incoming directory is effectively a dropbox for injecting data into the Storage Manager at the user's request (either directly or via a mechanism outside the scope of AMP). Using a mechanism that is TBD, the Storage Manager will be made aware of new files in this directory and move them into the $ROOT/data hierarchy for management. The structure within this directory is TBD, but it could be something like the following (or completely different):
- A flat namespace where all files go, regardless of ownership, collection, etc. That data would need to come from another source.
- Per-collection directories, where files placed into a collection directory will trigger the file's association with that collection
- Per-user directories, working similarly to the above

The $WORKING hierarchy is where the MGM Adapters will store their work-in-progress files, state information, or whatever else they need. Each directory at the top level of $WORKING corresponds to a job execution that is currently in progress. In the example above, two jobs (job-0001 and job-0002) are currently running. The naming is implementation-dependent. Within each job directory, each node in the workflow will get a separate directory that a specific MGM Adapter instance is free to use in whatever manner necessary: temporary files, downloads from S3 storage, output of local MGMs, or whatever. Like the job directories, the node directory naming is up to the implementation.

Files appearing in the $ROOT/incoming tree will be rename()d into the $ROOT/data tree upon successful ingest.

Passing files for MGM Adapter Input

The Storage Manager will pass absolute path names of input files (residing within $ROOT/data) to MGM Adapters for processing. The MGM Adapters can read the files, send the data to an S3 bucket for processing, or whatever, directly from the stored location, without adding an additional transfer. Additionally, since the $ROOT/data space is available to all MGM Adapters, only one copy of the data exists (unless it is copied by an MGM Adapter).

Capturing MGM Adapter Output

When an MGM Adapter has created new output, the path within the $WORKING tree is passed to the Storage Manager for ingest. The Storage Manager will ingest the output file in roughly the same manner as ingesting a new master: by moving the file into $ROOT/data when it is correct.

The Storage Manager may set up the basic $ROOT directory tree if one doesn't already exist at the $ROOT location.

Starting / Restarting

It is implementation-dependent whether or not the $WORKING tree is cleared on startup.
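The rename-based ingest described above can be sketched in a few lines. The area names (data, incoming, working) and the file name are illustrative, and Python's os.replace() stands in for rename(2):

```python
import os
import tempfile

def setup_root(root):
    """Create the basic $ROOT tree if it doesn't already exist (illustrative names)."""
    for area in ("data", "incoming", "working"):
        os.makedirs(os.path.join(root, area), exist_ok=True)

def ingest(root, name):
    """Move a file from $ROOT/incoming into $ROOT/data.

    os.replace() is rename(2) under the hood: constant-time and atomic,
    provided both paths are on the same POSIX filesystem. This is exactly
    why the design requires all managed content on a single filesystem.
    """
    src = os.path.join(root, "incoming", name)
    dst = os.path.join(root, "data", name)
    os.replace(src, dst)
    return dst

# Demo: drop a file into the incoming dropbox and ingest it.
root = tempfile.mkdtemp()
setup_root(root)
with open(os.path.join(root, "incoming", "master.wav"), "w") as f:
    f.write("fake master file")
ingest(root, "master.wav")
print(os.path.exists(os.path.join(root, "data", "master.wav")))  # True
```

Across filesystems, os.replace() would degrade into a copy-and-delete, losing both the atomicity and the constant-time guarantee.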
$WORKING tree maintenance

When a job is complete and the output data has been ingested into the Storage Manager, an AMP component should erase the job directory in $WORKING.

Disk Space Management

It is assumed that the system administrator will monitor the overall disk usage and add disk as necessary for continued operation. However, it is the responsibility of AMP components to clean up transient data to maintain a small disk usage profile. Additionally, if there is an operation that AMP can reasonably foresee will exhaust the allocated disk space, it is probably a good idea to inform the user (and admins) and abort the request. Specifically, if someone wants to upload a 500G file and only 400G is actually available on the disk, aborting the transaction early is far preferable to truncating the data.

There are three components that need direct access to this data (each of which corresponds to a top-level $ROOT directory):
- A source (outside the scope of AMP) will need to provide file data that will need to be managed by the Storage Manager
- The Storage Manager itself
- MGM Adapters (shims) will need access to the stored data and will provide file data results

For the pilot, the same system user can be used for all components, but for a production system there should be more protection against malicious or accidental modification. Using separate system users, all belonging to a common group, is one method that can be used to isolate file access. Specifically, only the system user that created a file (via an ingest or as MGM Adapter output) can write to it; the other system users cannot write to the file, but they can all read it since they share a common group.
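The abort-early rule for foreseeable disk exhaustion can be sketched with a preflight check. The function name, the OSError choice, and the reserve-headroom parameter are hypothetical, not part of any AMP interface:

```python
import shutil

def preflight_upload(path, incoming_size, reserve=1 * 1024**3):
    """Refuse an upload that cannot fit on the target filesystem.

    Rejecting a 500G upload when only 400G is free is far preferable to
    truncating the data partway through. `reserve` keeps some headroom
    for other writers (1 GiB here, an arbitrary illustrative value).
    """
    free = shutil.disk_usage(path).free
    if incoming_size + reserve > free:
        raise OSError(
            f"refusing upload of {incoming_size} bytes: "
            f"only {free} bytes free at {path}"
        )

# A tiny file with no reserve requirement should always pass:
preflight_upload(".", 1024, reserve=0)
print("ok")
```

The same check could run before large intermediate files are generated, not just on ingest.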
OPCFW_CODE
MIXING & EFFECTS

Audio Input/Output Routing

This section covers internal Mixer audio routing functions. Internal audio sources (instruments loaded in the Channel Rack) are routed to the Mixer insert tracks with the Channel settings FX selector, as shown below:

NOTE: In the example above Mixer Track 2 is selected, and so the Track Send switches, External audio Input / Output options and Mixer Track Properties change to show unique settings for Mixer track 2. Usually the only Mixer track with an external audio output is the Master Mixer track.

Some points about internal and external audio routing:

- Routing instruments to Mixer tracks - See Routing Instrument Channels to Mixer Tracks.
- Internal sends and sidechain sends - See Internal Mixer Track Routing & Sidechaining.
- External routing - Audio sent to and received from your audio device's input and output jacks is set by the External audio IN / OUT selectors (as shown above). NOTE: each Mixer track has its own input and output options. For example, if you have an audio device with 16 microphone inputs, then you have the option of setting 16 unique Mixer tracks to receive each of these audio device inputs. It is even possible to set two or more Mixer tracks to receive the same input or send to the same output. When '(none)' is shown, no external audio device input or output is selected for the selected track. By default, the Master Mixer track 'Output' is routed to the audio device's main outputs (usually the main/front Left/Right channels). However, by routing another Mixer track (other than the Master) to other audio device outputs (e.g. the rear channels of a 5.1 surround-sound card), you can create a separate sub-mix for band headphone monitoring or studio monitoring.
- Parallel internal / external routing - It is worth noting that the Mixer track External audio IN / OUT can function in parallel with the internal routing functions, so that any track can receive external and internal audio sources simultaneously OR output to the Master Mixer track and any other available audio device output.

Mixer routing is described in more detail below:

How to Route Audio

- Mixer tracks - can take an input audio signal from one or more internal Instrument Channels (by setting the Mixer Track Selector to a given track) OR one external source (External audio IN). After processing the input they can pass the audio to another location, usually the Master track (M), but it can be another Mixer track (as shown by the cables) or directly to an audio device output (External audio OUT).
- To send audio from one insert track to another - Select the source track you wish to send FROM, then Left-click the send control switch on the destination track you want to send TO. A send-level knob will appear on the destination track that can be used to change the level of signal passed to the destination. By default the SOURCE track will maintain routing to the Master track. If you don't want to create a parallel path to the Master Mixer track, select the source track and disable the send switch on the Master by Left-clicking it.
- External Inputs - Most ASIO drivers provide inputs (External audio IN) for microphone, line-in, etc. You can route these inputs to any or all Mixer tracks. Note that the ASIO inputs will not replace, but will mix together with, any input audio the track receives from other sources (instruments, other tracks, etc.).
- External Outputs - Most ASIO drivers provide outputs (External audio OUT). You can route any Mixer track to any of these outputs.
- Surround sound 5.1 or 7.1 - You can set a group of Mixer tracks to output (External audio OUT) to the individual channels of a 5.1 surround system for surround-sound mixing.
A Surround sound template is available in the File menu > New from template > Utility > Surround panner. After opening this template, select audio device outputs using the Output menu to match the pre-named Mixer tracks.
- FL Studio as VST - Additional outputs also appear when using the FL Studio multi-output VSTi connection.

Limitations of routing:
- If you are using the 'Primary Sound Driver' audio driver, only one track at a time can output to the primary output (usually this is the Master track). This limitation does not apply if you use an ASIO driver; then any Mixer track may be routed to any available output.
- FL Studio will disable routing options that would create a feedback loop (for example, trying to route a track to itself).
- Use the Send Level knobs to adjust the amount of signal sent from an insert track to the send tracks.
- The specially named 'Current' track can only receive audio from the currently selected track. Its main purpose is to hold an Edison plugin, ready to record any selected track's audio, OR visualization plugins such as WaveCandy.

Prepare for Recording

The record arm switch prepares a track for audio recording (to a *.wav file). The input and/or any internal audio routed to that track will be recorded.

Mixer reference diagram

See the main Mixer page for a full description. NOTE: Most controls are automatable (Right-click and select 'Create automation clip').
OPCFW_CODE
First part: Running a module locally for prediction

Deep Learning is nowadays at the forefront of Artificial Intelligence, shaping tools that are used to achieve very high levels of accuracy in many different research fields. Training a Deep Learning model is a complex and computationally intensive task, requiring the user to have a full setup involving certain hardware, the adequate drivers, dedicated software, and enough memory and storage resources. Very often the Deep Learning practitioner is not a computing expert, and wants all of this technology to be as accessible and transparent as possible, so as to be able to just focus on creating a new model or applying a prebuilt one to some data. With the DEEP-HybridDataCloud solutions you will be able to start working from the very first moment!

The DEEP-HybridDataCloud project offers a framework for all users, not just a few experts, enabling the transparent training, sharing and serving of Deep Learning models both locally and on hybrid cloud systems. The DEEP Open Catalog (https://marketplace.deep-hybrid-datacloud.eu/, also known as the "marketplace") provides the universal point of entry to all services offered by DEEP. It offers several options for users of all levels to get acquainted with DEEP:

- Basic users can browse the DEEP Open Catalog, download a certain model and apply it to some local or remote data for inference/prediction.
- Intermediate users can also browse the DEEP Open Catalog, download a model and do some training using their own data, easily changing the parameters of the training.
- Advanced users can do all of the above. In addition, they will work on more complex tasks that include larger amounts of data.

The DEEP-HybridDataCloud solution is based on Docker containers that already package all the tools needed to deploy and run the Deep Learning models in the most transparent way.
No need to worry about compatibility problems: everything has already been tested and encapsulated so that the user has a fully working model in just a few minutes. To make things even easier, we have developed an API allowing the user to interact with the model directly from the web browser. It is possible to perform inference, train, or check the model metadata with a simple click! Let's see how all this works!

In this post we will show how to download and use one of the available models from the DEEP Open Catalog on our local machine. These instructions assume the user is running on Linux, but the Docker containers can run on any platform.

First we browse the catalog and click on the model we are interested in among the many that are already in place. Once we click on the model of our choice we will see something similar to this:

In this case we have selected a module classifying plant images according to their species, using a convolutional neural network architecture developed in Tensorflow. Under the name of each of the modules in the DEEP Open Catalog we find some useful links:

- A link to the GitHub repository including the model source code
- A link to the Docker Hub repository of the Docker image containing all the needed software, configured and ready to use
- In case this is a pretrained model, a link to the original dataset used for the training.

Before starting we need to have either docker or udocker installed on our computer. We will be using udocker, since it allows running Docker containers without requiring root privileges. To install udocker you can follow these very simple instructions:

git clone https://github.com/indigo-dc/udocker
cd udocker
pip install .
We can now just follow the instructions on the right part of the module page and type the following commands:

udocker pull deephdc/deep-oc-plants-classification-tf
udocker run -p 5000:5000 deephdc/deep-oc-plants-classification-tf

This will download (pull) the Docker container from Docker Hub and run it on our local machine. The run command includes the option -p 5000:5000, which maps port 5000 in the container to port 5000 on our local machine. We now have the DEEP API running on our localhost! You can go to your preferred web browser and enter localhost:5000 in the address bar. This will open the DEEP as a Service API endpoint. It looks like this:

As you can see in the image, different methods can be chosen. You can either return the list of loaded models (in this case we are just running the plant classification example) or the metadata of your models. You can also do some prediction on some plant image of your interest, or even train the classification neural network on a completely new dataset, all directly from your web browser.

Let's now try out the prediction method. We can either use a local file or the URL of some online plant image to perform the classification. For this example we will use a locally stored image. We click on Select File and browse our file system for the image we are interested in. In this case we will use the image of a rose. If you want to reproduce this example you can find the image here. Now that we have selected the image we can click on Execute.

The first time we perform a prediction with a given model the process takes a little while, since the Tensorflow environment must be initialized. After that, predictions will be extremely quick (less than one second in many cases). The prediction for our rose gives us the following output:

The result shows us the 5 most probable species. The most probable one is Rosa chinensis, with a probability of 80%. Our module has predicted correctly!
Together with the prediction we can find a link pointing to Wikipedia to check the species. The output is given in JSON format, which can be very easily integrated with any other application that needs to access the results.

In this example we have seen how to use one of the DEEP-HybridDataCloud modules running a Deep Learning model in just a few simple steps on our local machine. If you want more detail, you can find the full documentation here. In the next posts we will see how to train a model using the DEEP API and how to run on a cloud system. Stay tuned!
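Because the output is plain JSON, consuming it from another program takes only a few lines. The field names below are made up for illustration, since the real response schema is not shown in this post:

```python
import json

# A response shaped like the plant-classifier output described above; the
# exact keys ("predictions", "label", "probability", "info") are assumptions
# for this sketch, not the real DEEP API schema.
response = json.loads("""
{
  "predictions": [
    {"label": "Rosa chinensis", "probability": 0.80,
     "info": "https://en.wikipedia.org/wiki/Rosa_chinensis"},
    {"label": "Rosa gallica", "probability": 0.07,
     "info": "https://en.wikipedia.org/wiki/Rosa_gallica"}
  ]
}
""")

# Pick the most probable species out of the returned list:
best = max(response["predictions"], key=lambda p: p["probability"])
print(best["label"])  # Rosa chinensis
```

Any language with a JSON parser could do the same, which is the point of returning structured output rather than formatted text.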
OPCFW_CODE
This is a rather good note-taking method; it is worth learning and applying to improve your ability to remember.

It amazes me how much school has changed since I graduated. One change is the Cornell Notes method. Despite being one of the most popular systems around, I'm told there isn't an online paper supplier. Rather than pulling out a ruler or calling college bookstores, I thought this would be an opportunity to show you how to create a Word template for Cornell notes. (Check the Resources section for the Cornell Notes template for Microsoft Word.) If you're not familiar with Cornell Notes and their benefits, take 5 minutes to watch this video presented by a teacher.

Word templates are a special type of file designed for reuse. Templates provide the structure plus other items such as AutoText entries and macros. They are the basis from which Word documents are created, whether it's a new blank document or a sales letter. In fact, Word starts by opening a blank page based on an auto-start macro in the normal.dotm template. Microsoft comes with many pre-built templates and groups them by function. You may have other templates that add-in tools or programs have created. You see this interface when you select File | New. You'll see a listing of your available templates, with My Templates on the top row.

How to Create the Cornell Notes Template

If you've not seen the Cornell Note-taking system, it divides an 8.5″ x 11″ page into three sections: Cue Column (1), Note-taking Column (2) and Summary (3). Depending on your preferences, some people like to have the note-taking area (2) lined like notepaper. For our template, we'll add the lines.

Setting the Template Page Dimensions
- Open a new Word document.
- From the Page Layout tab, select Margins.
- Click Custom Margins… from the bottom of the drop-down.
- In the Page Setup dialog enter 0 for Bottom, Left and Right margins. For Top, use 1″.
- Click OK. (If you get a message saying your margins are outside the printable area, click Fix and then OK.)
- Press your Enter key once.
- Pressing Enter once leaves room in case you ever want to add leading text like a class name.

Creating the Table
- From the Insert tab, select Table.
- From the Insert Table menu, select Insert Table…
- On the Insert Table dialog, enter 2 for columns and 34 for rows.
- Click OK. You should now see your table.
- Right-click in any table cell in column 1. From the menu, click Select and then Column. This should turn column 1 blue.
- Right-click again and select Merge Cells.

You should now have 2 equal-width table columns. The first column will not have any lines.

Setting Table Column Widths and Row Height

In the initial example, you can see that the 2 columns are differing widths, so we need to define those. You may also adjust the column widths and row height to your desired settings.
- Right-click column 1 and select Table Properties…
- Click the Column tab, and enter 2.4 for the Preferred width.
- Click the Next Column button >>.
- Enter 6.0 for Column 2's Preferred width.
- Click the Row tab.
- Click the check box for Specify height and type 0.25.
- In the Row height is field, select Exactly.
- Click OK.

If you go to Print Preview, you will see the cell lines in the Cue column (1) do not display and you have a summary area (3) at the bottom. I intentionally added 1″ before the table as it makes it easier if you need to adjust the position or add a description.

Saving the Template
- From the File menu, select Save As.
- In the Save As dialog, navigate to your Templates folder. This will vary based on your profile. As an example, mine is:
The Microsoft Community has several posts on template locations. Alternatively, you can right-click an existing icon in your My Templates area and look at the file location.
- At the bottom of the dialog, type Cornell Notes as your File name.
- Change the Save as type to Document Template (*.dotx).
- Click Save.

Using the Cornell Note-taking Template
- From the File menu, select New.
- Click the icon for My Templates on the top row.
- Click the Cornell Notes template.
- Click OK.

Your document will open and you can make further changes. For example, some people may want to adjust the top area to type the class name and date. That's why I added the paragraph break before the table. Other people put their name there in case the notes are lost. Finally, print out how many copies you'll need and head to class.

Cornell Notes Template for Microsoft Word
OPCFW_CODE
1px Transparent Border around image after resize in Photoshop

I'm not sure what setting has caused this, but when I resize an image via Image > Image Size, the resized image gets a semi-transparent 1px border! There seem to be no obvious settings that would cause something like this. It doesn't seem to do it 100% of the time; I notice it more when I'm cutting up a design and pasting images into new documents. Here I have recreated it with a simple 2-layer image:

This is an artifact of the resampling method. If you take a 500px square image of red (no other layers, and where the red layer is NOT locked as a background layer) and reduce it to 100px, the transparency is there when using bicubic resampling, but does not happen when using nearest neighbor. I tried this with noise-filled layers and it still occurs, but it is a lot less noticeable. It's too bad that nearest neighbor makes the image look like total garbage.

It seems to me that this is a bug in the implementation of the resampling algorithm rather than an artifact inherent to the resampling method. I could be mistaken, but if I were to write my own resize algorithm, the new pixels would sample only from old pixels, and if none of them have any transparency, none should be introduced by the samples. I almost feel as if PS is attempting to sample from beyond the image in edge cases, and getting 0 for opacity/alpha, but that's just total speculation on my part.

+1 for actually giving an explanation instead of just a work-around.

I was running into a similar problem but just found a solution, in case anyone finds this useful. In short, Layers seem to sample from outside the canvas when resized, therefore introducing transparency to the border pixels, but Background Layers don't suffer from this artifact. You can convert a single Layer to a Background Layer by selecting it and going to Layer > New > Background from Layer, or you can convert multiple Layers into a Background Layer by going to Layer > Flatten Image.
Here's what my layer panel looks like before:

And here's what it looks like after:

Now I can resize this image and save it out without introducing any transparency.

Months of this hell; THANK YOU!

This is the best solution in my opinion. Thanks!!

Duplicate the resized layer and merge it down. Repeat. This will remove the half-transparent edges of opaque layers. I agree it's a pain in the ass though. And while you do that, record it as an action for later use. You never know when you're going to need it again. :)

I have had this a few times. Simply add a layer at the back of the document in a single colour like black, or a colour that matches the edges of your work, and it should be all sorted!

If the photo is important, I combine two images: 1, an image downsampled using the bilinear method, and 2 (on top), the same image reduced using Bicubic Sharper. That gives me a better photo, with only the edge pixels from the lesser reduction method. (First downsample with the bilinear method and copy the result, then go to History and return to the full-sized image, then downsample using Bicubic Sharper. To finish, paste the copied first image below the second. Then flatten the image or go to File > Save for Web.) And yes, sometimes the result is worth the effort.

The best solution I have found for this is to use the old Bilinear sampling method when I need to resize and avoid the 1px semi-transparent border. You can find it under Image > Image Size > Resample: (select Bilinear). It doesn't resample quite as nicely as the Bicubic method, but I find it's good enough and it does solve the problem.

Try this workaround: say you want a target resolution of 200×100; resize to ~202×102 instead, and manually remove the semi-transparent border using the Single Row/Single Column Marquee Tool.

Be sure your image is flattened before resizing; the only layer should be the background layer!
I don't know why it works, but I found that converting the item in question to a smart object first and then doing the resize (you can rasterize the layer afterwards if you want) will not convert semi-transparent pixels to grey but preserve the transparency instead. Try it out!

My solution isn't that good, but I just duplicate the layer a bunch of times until the borders aren't transparent anymore. Simple, but it does the trick.

So the same as this older answer higher up.
I use multiple versions of ARCHICAD so I can train and support users on all recent versions. When I start up ARCHICAD 21 on my new MacBook Pro (purchased in March), it says that it is out of date, and suggests that I download a hotfix. However when I...

This past week, ARCHICAD USER featured the work of Peter Twohy (2e Architects) in our monthly webinar: https://www.youtube.com/watch?v=6Gq8nVRUeQo Peter does some beautiful design work in ARCHICAD, and uses Twinmotion to create stunning imagery.

I'm experimenting with some library part scripting, and would like to access the values of a Property (assigned to an instance of an object or door or window) inside the GDL script. I don't know if any of the Request statements can pick up this info....

Thursday November 21 at 1 pm PST (US Pacific Time) Free ARCHICAD USER training webinar 10 Cool ARCHICAD Tricks You Can Use Info/Registration: https://archicaduser.com I'm going to share a number of my favorite "tricks" to help you get more mileage ou...

Are there any landscape architecture firms that use ARCHICAD for their entire workflow? If so, I'd love to get references so I can get some insight into how well it's working out for them. BACKGROUND: I have been contacted by a 17 person firm that is...

Minh - Thank you for passing along the reference to the MacWorld article. The Terminal command allowed me to change the security setting so now I can update ARCHICAD 21 on my MacBook Pro running Catalina. That is a relief! Much appreciated. Eric

Gerry - I appreciate that you're trying to figure out a way to help me out. I did add another parameter to the object, and was able to add it to the interface (as a quick test I put it into the Description list) so it could be edited for each instanc...

Thanks Gerry and Joachim for answering this question so I don't spin my wheels testing things. It's frustrating that this capability (for an object to access Properties of an instance) is only available in Labels and Zones.
The specific application i...

Thanks for mentioning me Karl. I am rather busy, but occasionally do one-on-one sessions via GoToMeeting screensharing. I also have a weekly ARCHICAD Coaching Program webinar in which I answer questions from attendees, also via GoToWebinar. For more ...
from utils.file_utils import saveJson


def pre_process_wiki_db(wiki_path, output_path):
    dic = {}
    with open(wiki_path, 'r', encoding='utf-8') as file:
        for line in file:
            r1 = line.replace("\n", "").split('|')
            dic[r1[0]] = [int(x) for x in r1[1].split(',')]
    saveJson(dic, output_path)


def pre_process_wiki_db_wordcount():  # temporarily unused
    dic = {}
    with open("../dataset/wiki_db_wordcount.txt", 'r', encoding='utf-8') as file:
        for line in file:
            r1 = line.replace("\n", "").split(' ')
            dic[r1[0]] = int(r1[1])
    saveJson(dic, "../dataset/wiki_db_wordcount_j.json")
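The per-line format handled by pre_process_wiki_db is a key, a `|` separator, and comma-separated integer IDs. A minimal sketch of that parsing, with made-up sample values:

```python
def parse_wiki_line(line):
    """Parse one 'key|1,2,3' line into ('key', [1, 2, 3])."""
    key, ids = line.rstrip("\n").split("|")
    return key, [int(x) for x in ids.split(",")]

# hypothetical input line
key, ids = parse_wiki_line("apple|1,2,3\n")
```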
<?php

namespace Rubix\ML\AnomalyDetectors;

use Rubix\ML\Learner;
use Rubix\ML\Persistable;
use Rubix\ML\Datasets\Dataset;
use Rubix\ML\Other\Helpers\Stats;
use Rubix\ML\Other\Helpers\DataType;
use Rubix\ML\Other\Specifications\DatasetIsCompatibleWithEstimator;
use InvalidArgumentException;
use RuntimeException;

/**
 * Robust Z Score
 *
 * A quick *global* anomaly detector that uses a robust Z score to score and
 * detect outliers within a dataset. The modified Z score consists of taking
 * the median and median absolute deviation (MAD) instead of the mean and
 * standard deviation (*standard* Z score), thus making the statistic more
 * robust to training sets that may already contain outliers. Outliers can be
 * flagged in one of two ways. First, their average Z score can be above the
 * user-defined tolerance level, or an individual feature's score could be
 * above the threshold (*hard* limit).
 *
 * References:
 * [1] P. J. Rousseeuw et al. (2017). Anomaly Detection by Robust Statistics.
 *
 * @category Machine Learning
 * @package Rubix/ML
 * @author Andrew DalPino
 */
class RobustZScore implements Learner, Persistable
{
    const LAMBDA = 0.6745;

    /**
     * The average z score to tolerate before a sample is considered an outlier.
     *
     * @var float
     */
    protected $tolerance;

    /**
     * The threshold z score of an individual feature to consider the entire
     * sample an outlier.
     *
     * @var float
     */
    protected $threshold;

    /**
     * The median of each feature column in the training set.
     *
     * @var array|null
     */
    protected $medians;

    /**
     * The median absolute deviation of each feature column.
     *
     * @var array|null
     */
    protected $mads;

    /**
     * @param float $tolerance
     * @param float $threshold
     * @throws \InvalidArgumentException
     */
    public function __construct(float $tolerance = 3.0, float $threshold = 3.5)
    {
        if ($tolerance < 0.) {
            throw new InvalidArgumentException('Tolerance must be 0 or'
                . " greater, $tolerance given.");
        }

        if ($threshold < 0.) {
            throw new InvalidArgumentException('Threshold must be 0 or'
                . " greater, $threshold given.");
        }

        $this->tolerance = $tolerance;
        $this->threshold = $threshold;
    }

    /**
     * Return the integer encoded estimator type.
     *
     * @return int
     */
    public function type() : int
    {
        return self::DETECTOR;
    }

    /**
     * Return the data types that this estimator is compatible with.
     *
     * @return int[]
     */
    public function compatibility() : array
    {
        return [
            DataType::CONTINUOUS,
        ];
    }

    /**
     * Has the learner been trained?
     *
     * @return bool
     */
    public function trained() : bool
    {
        return $this->medians and $this->mads;
    }

    /**
     * Return the array of computed feature column medians.
     *
     * @return array|null
     */
    public function medians() : ?array
    {
        return $this->medians;
    }

    /**
     * Return the array of computed feature column median absolute deviations.
     *
     * @return array|null
     */
    public function mads() : ?array
    {
        return $this->mads;
    }

    /**
     * Compute the median and median absolute deviations of each feature in
     * the training set.
     *
     * @param \Rubix\ML\Datasets\Dataset $dataset
     * @throws \InvalidArgumentException
     */
    public function train(Dataset $dataset) : void
    {
        DatasetIsCompatibleWithEstimator::check($dataset, $this);

        $this->medians = $this->mads = [];

        foreach ($dataset->columns() as $column => $values) {
            [$median, $mad] = Stats::medMad($values);

            $this->medians[$column] = $median;
            $this->mads[$column] = $mad;
        }
    }

    /**
     * Compute the per feature z score and compare the average and max values
     * to a tolerance and threshold respectively.
     *
     * @param \Rubix\ML\Datasets\Dataset $dataset
     * @throws \InvalidArgumentException
     * @throws \RuntimeException
     * @return array
     */
    public function predict(Dataset $dataset) : array
    {
        if (is_null($this->medians) or is_null($this->mads)) {
            throw new RuntimeException('The learner has not been trained.');
        }

        DatasetIsCompatibleWithEstimator::check($dataset, $this);

        $p = $dataset->numColumns();

        $predictions = [];

        foreach ($dataset as $sample) {
            $score = 0.;

            foreach ($sample as $column => $feature) {
                $median = $this->medians[$column];
                $mad = $this->mads[$column];

                $z = abs((self::LAMBDA * ($feature - $median)) / $mad);

                if ($z > $this->threshold) {
                    $predictions[] = 1;

                    continue 2;
                }

                $score += $z;
            }

            $score /= $p;

            $predictions[] = $score > $this->tolerance ? 1 : 0;
        }

        return $predictions;
    }
}
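For readers not fluent in PHP, the modified z score that predict() computes (median/MAD scaled by the 0.6745 consistency constant) can be sketched in a few lines of Python. This is a simplification for one feature column, not a port of the class:

```python
import statistics

LAMBDA = 0.6745  # same consistency constant as RobustZScore::LAMBDA

def robust_z(column, x):
    """Modified z score of value x against one feature column."""
    median = statistics.median(column)
    mad = statistics.median(abs(v - median) for v in column)
    return abs(LAMBDA * (x - median) / mad)
```

With a column like [1, 2, 3, 4, 5, 100], the value 100 scores far above the default 3.5 hard threshold, while inliers stay well below it.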
Do you thrive in an agile environment where check-ins reach customers in a matter of hours and all failures are investigated as live site issues? The Bing UX Platform team has been passionate about developer and feature agility for over 5 years. Our platform now supports over 600 engineers and features, and performs validated deployments to data centers around the world, multiple times per day! We are now looking at microservices to turn the crank on agility again!

We're building a next-generation, fully integrated, one-click-onboarding microservice platform to run Bing's most important and performance critical workloads at scale. We are marrying the best Microsoft technology with the best from open-source to provide a platform that will handle the heavy lifting associated with operationalization of services - build, validation, deployment, capacity, monitoring and logging, and instrumentation. If you want to experience fulfilling challenges with start-up urgency at Microsoft, this is the team for you. State-of-the-art technologies. A dynamic, agility-focused working environment. No bureaucracy; only code- and data-based decision making.

We're looking for senior-level engineers with expertise in multiple aspects of computer science and engineering:
High-reliability platform development and architecture
Service containerization and process isolation
Highly agile development and deployment processes
Familiarity with open-source tools like Docker, Mesos, and

If you are serious about working across teams not just to use the latest technologies but to help define them, delighting developers through unlocking the full potential of agile (development, validation, deployment, and monitoring), and pleasing millions of customers, this is the most exciting place you can be right now.
5+ years relevant platform/framework design, testing, development, service deployment, and maintenance experience
5+ years of solid development experience in an OOP language (C/C++/C#/Java) is required
Minimum BS in CS or equivalent is required
Development experience using web client technologies (JS/TypeScript, CSS, HTML) on large-scale applications
Experience with open-source tooling is a plus

Microsoft is an equal opportunity employer. You will receive consideration for employment without regard to race, color, gender, sexual orientation, gender identity or expression, religion, national origin, marital status, age, disability, veteran status, genetic information, or any other protected characteristic.

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request to email@example.com.
Overview of load balancing

The farm-cluster-node concept allows you to configure and manage how the data is analyzed. By creating individual clusters you can, for example, define unique data sources to monitor specific software services, configure separate client aggregation schemes, or have data analyzed and reported on a per-application basis.

Report server load balancing provides greater scalability, usability and data integrity:
- Application users are automatically load balanced across all nodes in the cluster.
- As demand increases, more nodes can be added to seamlessly scale the solution.
- The primary node provides a single entry point to view all data.
- Avoid duplicating traffic across report servers in large scale deployments.
- Configuration is shared to all nodes in the cluster, reducing configuration overhead.

In situations where any of the individual clusters is under-performing, you can add an extra node, or nodes, to the cluster in order to alleviate the workload for that cluster. Once the new node is added to the cluster, the primary node of the cluster will automatically balance the number of observed unique users evenly between all the nodes located within the cluster. The unique users moved to the new load balancing node will retain their historical data on their original report server node. With time, the moved users' data will come to be located on the new node in its entirety. The amount of time depends on your data storage settings.

If you wish to manually distribute load across report servers, this can be achieved with filtering configured through the report server advanced properties editor. Once an existing node is removed from a cluster, the load is not redistributed until the next execution of the sampling task, and all historical data processed by that node will remain with that node.
The software service packs do not need to be applied in any specific order within the load balancing cluster. As a general rule, the software service packs and patches should first be applied to the primary node and then to all other nodes. It is recommended to keep the same software version on all nodes within the cluster. If all nodes share the same database server, it is recommended to deploy the service packs successively so as not to overload the database server.

Limitations of load balancing

A load balanced cluster provides great scalability, but there are limitations that should be observed when following this style of deployment. Every data source located within a cluster will report its monitoring data to the primary node of the cluster. Next, that primary node will distribute the monitored data among all nodes of the same type within the cluster using the unique-users load balancing method. Even though all data sources transmit data only to the primary node, all data sources must be connected to every node in the cluster to ensure execution of any administrative and diagnostic tasks. If combining existing deployments, or operating in large environments, this can add extra overhead to network management.

Uneven load balancing

While the load balancing occurs on a unique user level, some unique users may have different amounts of data associated with them, for example, the number of operations. In such an instance, while the number of users is balanced evenly between all nodes, some nodes will have a larger processing load.
This, combined with possible differences in system hardware, may cause some of the nodes to perform differently.

URL aging is performed per report server and not per cluster. This means that an operation could be aged out on some report servers and not on others. This could lead to different numbers being reported for operations when viewing them historically from when they were first recorded. In a standalone report server deployment, a report server will determine whether an operation is eligible to be aged out or not. If it is aged out, the data will be collected in the All other operations metric. If you examine this operation for yesterday, and it has been aged out, it will be missing. In a clustered environment, an operation could be aged out on some of the report servers but not all. This will cause the statistics to differ from what was reported earlier.
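The unique-user balancing described above can be approximated by stable hashing. This is only a toy sketch under assumed names (the product's actual distribution algorithm is not documented here); its point is that each user maps deterministically to one node, so the per-user split stays roughly even:

```python
import hashlib

def assign_node(user_id, nodes):
    """Deterministically map a unique user to one node in the cluster."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

nodes = ["node-a", "node-b", "node-c"]  # hypothetical cluster
```

Note that even with a perfectly even user split, per-node processing load can still differ, exactly as the "Uneven load balancing" caveat explains, because some users carry far more operations than others.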
sudo apt-get update && sudo apt-get install git cmake make gcc g++ clang libmysqlclient-dev libssl1.0-dev libbz2-dev libreadline-dev libncurses-dev mysql-server libace-6.* libace-dev

sudo apt-get update && sudo apt-get install git cmake make gcc g++ clang default-libmysqlclient-dev libssl1.0-dev libbz2-dev libreadline-dev libncurses-dev mysql-server libace-6.* libace-dev

sudo apt-get update && sudo apt-get install git cmake make gcc g++ clang libmysqlclient-dev libssl-dev libbz2-dev libreadline-dev libncurses-dev mysql-server libace-6.* libace-dev

To configure MySQL in Ubuntu 18.04 and similar (set root password and other settings), read this guide. Note: on the latest versions of Ubuntu the default MySQL version is 5.7. If you're using this version, read this.

Install Xcode using the App Store, then open the terminal and type:

For those who don't have Homebrew installed, you can easily install it by typing:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Then use it to install the required packages:

brew install openssl readline cmake ace coreutils bash bash-completion md5sha1sum mysql56
brew link mysql56 --force

Install Visual Studio >= 15 (2017 Desktop Community) from Microsoft downloads.
Install CMake version >= 2.8.
Install the latest version of Git Extensions.
Install MySQL Server Community Edition (5.6 or higher).

These files are shipped with MySQL Server, but to make it easier we packed the libs and include files for both 32 bits and 64 bits. Extract the files to a known location, e.g. C:\MySQL; the directory structure must be as follows: C:\MySQL\include and C:\MySQL\lib\debug (move libmysql.dll and libmysql.lib there).

Install OpenSSL version 1.0.x (do not install the Light version). Download the 64bit version, or you can get both if you plan to compile both 32 and 64bit; they can coexist side by side.

AzerothCore does not officially support MySQL version >= 5.7, but there is a way to get it up and running.
You have to remove the NO_ZERO_IN_DATE and NO_ZERO_DATE flags from MySQL's sql_mode variable in the MySQL config file so that all queries, updates, and core statements can be applied correctly. You will find some useful information on StackOverflow about how to use AzerothCore with MySQL 5.7.
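For reference, the change described above means editing sql_mode in the MySQL config file (my.cnf). The list below is MySQL 5.7's default sql_mode with NO_ZERO_IN_DATE and NO_ZERO_DATE removed; check it against your server's current value (SELECT @@GLOBAL.sql_mode;) before applying, since your installation may differ:

```ini
[mysqld]
sql_mode = "ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
```

Restart the MySQL server after editing the file so the new mode takes effect.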
One XSS tool that can help you identify and mitigate XSS vulnerabilities is XSStrike. XSS, or Cross-Site Scripting, is a common web application vulnerability that allows an attacker to inject malicious code into a website, potentially stealing sensitive information from unsuspecting users. XSS attacks can be devastating, and protecting your website against them is essential to ensure the security of your users' data. This XSS tool is designed to automate the process of detecting and exploiting XSS vulnerabilities in web applications.

XSStrike is a Cross-Site Scripting detection suite equipped with four hand-written parsers, an intelligent payload generator, a powerful fuzzing engine and an incredibly fast crawler. It is an automated and advanced XSS tool.

How it Works?

XSStrike works by analyzing a web application for potential XSS vulnerabilities. It does this by sending various payloads to different parts of the application, such as input fields, URLs, and headers, to see if it can trigger an XSS attack. The tool then reports any vulnerabilities it finds, allowing the user to take action to fix them. Instead of injecting payloads and checking that they work, like all the other tools do, this tool analyses the response with multiple parsers and then crafts payloads that are guaranteed to work by context analysis integrated with a fuzzing engine. Here are some examples of the XSS payloads generated by XSStrike:

Apart from that, XSStrike has crawling, fuzzing, parameter discovery, and WAF detection capabilities as well. It also scans for DOM XSS vulnerabilities. Use Arjun to discover hidden parameters on a website.
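As a toy illustration of what "context analysis" means (this is not XSStrike's actual parser, just a simplified sketch), the snippet below classifies where a probe string is reflected in an HTML response; the context then dictates which payload shape could possibly execute:

```python
def reflection_context(body, probe):
    """Crudely classify the HTML context in which `probe` is reflected."""
    i = body.find(probe)
    if i == -1:
        return None  # not reflected at all
    before = body[:i]
    # inside an open <script> block? a payload must be valid JavaScript
    if before.rfind("<script") > before.rfind("</script>"):
        return "script"
    # inside an unclosed tag, i.e. an attribute value? the payload must
    # first break out of the quoted attribute
    if before.rfind("<") > before.rfind(">"):
        return "attribute"
    # otherwise plain text between tags: a raw <script>/<img> tag can work
    return "html"
```

A real suite layers proper parsers, filter/WAF fingerprinting and fuzzing on top of this idea, but the principle is the same: pick the payload from the reflection context instead of firing a fixed payload list.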
- Reflected and DOM XSS scanning
- Multi-threaded crawling
- Context analysis
- Configurable core
- WAF detection & evasion
- Outdated JS lib scanning
- Intelligent payload generator
- Powerful fuzzing engine
- Blind XSS support
- Highly researched work-flow
- Complete HTTP support
- Bruteforce payloads from a file
- Powered by Photon, Zetanize and Arjun
- Payload encoding

One of the standout features of XSStrike is its ability to detect blind XSS vulnerabilities. These are vulnerabilities that don't produce any visible effects when a payload is injected, making them harder to detect. XSStrike can detect these types of vulnerabilities by analyzing the network traffic generated by the vulnerable application. Another useful feature of XSStrike is its ability to bypass various XSS filters. Many web applications use filters to prevent XSS attacks, but these filters can be circumvented using various techniques. XSStrike has a built-in bypass engine that tries various techniques to bypass these filters, allowing it to detect vulnerabilities that other tools might miss.

Clone the repository from GitHub:

After successfully cloning the repository, go to the folder and install all the requirements:

Now that XSStrike is installed on your machine, run the tool to check that it is installed correctly.

How to Use XSStrike?

Scan a single URL: test a single webpage which uses the GET method.

Supplying POST data.

Testing URL path components: if you want to inject payloads in the URL path like http://example.com/search/<payload>, you can do that with

Using this option while crawling will make XSStrike inject the blind XSS payload defined in core/config.py into every parameter of every HTML form.

To read all usage options, please read here. Using XSStrike is relatively straightforward. The user provides the tool with the URL of the target web application and any additional parameters required for the scan. XSStrike then begins analyzing the application for vulnerabilities.
Once the scan is complete, XSStrike generates a report that lists any vulnerabilities it found. The report includes information about the vulnerability, such as the affected parameter and the payload that triggered the vulnerability. The user can use this information to fix the vulnerability before it can be exploited by an attacker.
I guess it's binary because as far as SAFE is concerned (which is the purpose of this forum) it has no mechanism to help out. And this very problem also exists on the current web to almost the same degree. And due to the "streisand effect" it could even be worse on the web, where juicy stuff is duplicated far and wide to counter any attempts at censorship. I cannot see how the network being anonymous can help at all or can be changed to help. It'll happen anyhow.

Now if you want to discuss applications and SAFE sites and how they can deal with the situation, then that is far from binary and a huge discussion in itself.

Yes, this could be quite interesting. For anonymous storage it would be interesting whether SAFE's censorship resistance makes this particular problem worse or not.

The private stuff needs only be uploaded once and people will learn that very quickly, so there isn't the spreading of the private material to many places on SAFE. So in general I think it would be buried/lost very soon after uploading. The "streisand effect" has played a big part in spreading private material far and wide whenever someone tries to remove it. How far it is spread depends on the noteworthiness of the person or material. There are already web sites (backed up to the internet archive site) that dedicate themselves to the posting of this private and embarrassing material now. And whenever anyone tries to get the material removed, the "streisand effect" proves itself alive and well.

It's pretty much binary for this particular thing, and possibly SAFE will be better since the "streisand effect" doesn't get a chance to rear its "ugly" head (very often). But then again it's not so much binary as "what's different - not much".

Ah, this one would raise many topics, from what's worse to what's better. And some pretty binary and some with a lot in between.

I don't see it that way.
We need to concern ourselves with what can be changed or helped and what is out of the realm of software. So the posting of voyeur/private material is pretty much outside SAFE's ability to affect and not the purpose of SAFE. But to free humans from the bonds of insecurity and surveillance, that is SAFE's purpose in being written. To solve world hunger, I'd say that is outside of SAFE. To solve people losing their digital identities because of web hosts being hacked, that too is an outcome of SAFE's purpose.

Really you have to go back to WHY SAFE was developed and what benefits to mankind that will bring. Outside of that we can only discuss whether SAFE will make issue xyz better or worse, and those are worthy discussions. And in the case of your "privacy/voyeurism" issue, I say SAFE is essentially no different to what we have today (in the balance of things), and since it's not a reason for SAFE being created, it's not a SAFE "problem". SAFE essentially does not make it worse or better.
Switch live satellite from 5 to 15 mins

Also we are checking if there are problems when the Sat consumer switches from 5 mins to 15 mins. We want to switch to 15 mins data for a few hours, check nothing bad happens, no spikes in the forecast, the forecast continues to be made, then switch back.

2023-07-11 10:57, changed Satellite consumer to get 15 mins data
11:00, start sat consumer, data from 10 to 11
11:16 Satellite consumer finishes
11:17 PVnet forecast started, no logs, so not sure what has been loaded
11:23 PVnet finished and had no errors, and no spike in forecasts
11:45 PVnet did not run, as there were zeros in the satellite data
12:15 PVnet did not run, as there were zeros in the satellite data
12:43 the satellite data goes from 2023-07-11 09:15 to 2023-07-11 11:00
12:45 PVnet did not run, as there were zeros in the satellite data
13:15 PVnet did not run, as there were zeros in the satellite data
13:45 PVnet did not run, as there were zeros in the satellite data
14:15 PVnet did not run, as there were zeros in the satellite data
14:30 Deployed version 2.1.12
14:45 PVnet did not run, as there were zeros in the satellite data, in index 0-10. The satellite file datestamps are '2023-07-11T11:00:00.000000000', '2023-07-11T11:15:00.000000000', '2023-07-11T11:30:00.000000000', '2023-07-11T11:45:00.000000000', '2023-07-11T12:00:00.000000000', '2023-07-11T12:15:00.000000000', '2023-07-11T12:30:00.000000000', '2023-07-11T12:45:00.000000000', '2023-07-11T13:00:00.000000000'
Later, 15:00, turn 5 mins back on

import xarray as xr
d = xr.open_dataset('zip::s3://nowcasting-sat-development/data/latest/latest_15.zarr.zip', engine='zarr')

Unfortunately it looks like the 5 minute satellite data wasn't turned off.
This caused it to just load the 5 minute data.

Switch satellite to 15 minute data; we leave it overnight to run and see if it runs ok.

Managed to run PVnet locally with 15 minute satellite data with the fix above.

Version 2.12.13 now runs on 15 minute satellite data.

Turned satellite backup off at 2023-07-13 09:15.

@peterdudfield could this be closed now?
/usr/bin/pine (with /usr/bin/pinegpg-install) improperly requires a non-pass-phrased GPG private key

This might also be considered a GPG issue, but it seems to be part of pine -- unlike PGP, the GPG signing routine will not prompt, as shipped, for a pass-phrase to unlock the GPG private key for signing (and encrypting). When going to use gpg-sign, we get:

[compose and selected to send with gpg-sign]
gpg: Warning: using insecure memory!
gpg: no default secret key: secret key not available
gpg: [stdin]: clearsign failed: secret key not available
Hit return to continue.

That is -- it is expecting a cleartext GPG private key. [The nomenclature is also inconsistent with the more customary 'private key' term.] I do not have an immediate solution ... A wrapper shell script might be invoked when STARTING pine to query for the passphrase and save it to a pipe, which the gpg-sign could then query; or an 'ssh-agent' type helper (GPG has one marked experimental) could be invoked.

The corresponding dialog with a PGP private key under a passphrase looks like:

No configuration file found.
Pretty Good Privacy(tm) 2.6.3i - Public-key encryption for the masses.
(c) 1990-96 Philip Zimmermann, Phil's Pretty Good Software. 1996-01-18
International version - not for use in the USA. Does not use RSAREF.
Current time: 2001/12/05 07:21 GMT

You specified no user ID to select your secret key, so the default user ID and key will be the most recently added key on your secret keyring.

You need a pass phrase to unlock your RSA secret key.
Key for user ID: email@example.com
1024-bit key, key ID 7BFB98B9, created 1998/11/25

Enter pass phrase:

... and given a good pass-phrase, it signs and/or encrypts the data, and off it goes ...
There have been so many security-ish style bug reports against pgp4pine in the last year, none of which seem to have an easy fast solution, that I am seriously considering removing pgp4pine support from

Do you know of any other pine plugin for pgp/gpg support that may be a

firstname.lastname@example.org needs to be involved here. It is a structural problem with the way GnuPG handles pass phrases in text mode. There is not a good wrapper solving this and the 'insecure memory' issue; it probably needs to be addressed there, and then have a proper wrapper written. The up2date folks will be concerned as well, in that they do considerable package verification. It may be that Jeff Johnson and Beecrypt, or the OpenSSL crypto facilities, need a well-formed wrapper to solve this one. Beecrypt needs to supplant OpenSSL (license issue) anyway.

Shall we take this discussion off Bugzilla or continue it here? I won't debate that your ideas are not good, as they sound like a possible solution to the problems at hand. I've Cc'd Nalin at your request in case he would like to also comment.

PINE does not support crypto directly of course, and they state in their documentation that they do not plan on ever supporting crypto directly. This means that either a send filter must be used similar to what we're using right now, or someone needs to whip up proper crypto support for PINE and patch it in. Due to the horrible license of PINE, the latter is not realistic, even though IMHO it is the better solution.

There are numerous pgp4pine bugs open in Bugzilla currently, and also IIRC some I have deferred. Since there are so many bugs reported with this incredibly horrible hack, that leaves us with a few solutions:

1) Fix it - which would require a significant amount of engineering resources allocated to it
2) Rewrite a "proper" replacement for it which also solves all of the various problems people have reported. This also requires a serious resource investment.
3) Investigate other pre-existing solutions to drop into pine
4) Drop pgp support from PINE

While #1 or #2 would likely be best for PINE users, the resources invested to accomplish this IMHO do not have a big payoff for the distribution as a whole, considering the finite number of hours available. I consider #3 and #4 to be much more viable. While PINE is considered an important package, the drop in PGP support is not. If you or someone else are willing to write up support for #1 or #2, then it stands a chance of happening. In lieu of that, I probably will not touch these problems, and instead opt for #3 or #4, since I believe my available man hours are better spent more effectively on other areas of the distribution.

Realistically, we won't be making our own fork of PINE, or working on the pgp4pine code. We rely on upstream maintainers of both packages to implement this stuff in a supportable and secure manner. If they can't or won't do it, then PGP support should be dropped entirely from our PINE packages. Having PGP support that is not really secure is not really a secure thing to do. Being unable to allocate the resources to adequately and securely fix the problem makes it something that should be dropped. I'm considering doing this for future releases.

Closing bug WONTFIX.
The development of a Fish tournament system rests on two assumptions. First, our company will run the server software on its own isolated machines (these could be rented or owned). Second, the participants (our customers) will run their own "AI" players, which run on the participants' remote computers. Our company will also inject "house players" to even the odds. Additionally, our company will add visualization components so that it can broadcast the tournaments and earn advertisement dollars.

The software needs three principal components: the game logic, the communication layer, and (when it reaches a certain size) a database. This course ignores the database component to reduce the prerequisites and make it a one-semester project.

The description of Fish suggests the following software components:

- A player-referee interface (protocol) to which the creators of external players program. The player interface must spell out all phases of Fish: how to place their avatars on the initial board; how to take turns; and how/whether to receive information about the end of a game and tournament. Given the goal, this must be formulated in both logical and communication terms.

- A player implementation to validate the interface.

- A referee, which supervises an individual game after being handed a number of players. The referee sets up a board (remember from Fish that some other component may be in charge of specifying the dimensions) and interacts with the players according to the interface protocol. It removes a player that fails or cheats.

- A tournament manager that runs rounds of games, for dealing with an entire tournament of games. The tournament manager signs up players for tournaments, allocates players to games, creates referees to run games, and collects tournament statistics. It also informs a tournament observer of on-going actions.
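The player-referee interface described above can be sketched as an abstract class. This is a minimal illustration under assumed names (`place_avatar`, `take_turn`, `inform_game_end`); the actual protocol would have to spell out the data formats for states and moves.

```python
from abc import ABC, abstractmethod

class Player(ABC):
    """Logical player interface. Method names and signatures here are
    illustrative assumptions, not the official protocol."""

    @abstractmethod
    def place_avatar(self, state):
        """Return a position for one avatar during the placement phase."""

    @abstractmethod
    def take_turn(self, state):
        """Return a move for the current game state."""

    def inform_game_end(self, won):
        """Optional notification that the game has ended."""

class FirstChoicePlayer(Player):
    """A trivial 'house player' that always picks the first legal option."""

    def place_avatar(self, state):
        return state["free_spots"][0]

    def take_turn(self, state):
        return state["legal_moves"][0]

p = FirstChoicePlayer()
print(p.place_avatar({"free_spots": [(0, 0), (0, 2)]}))  # (0, 0)
```

Such a trivial implementation doubles as the "player implementation to validate the interface" mentioned above.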
Finally, the game logic calls for data representations of tiles, avatars, and boards, plus a component that can check the rules on behalf of both the referee (does the player's action satisfy the rules?) and the players (which actions is the player component legally allowed to perform?). The player interface and these game pieces make up the common ontology that players and referees use to communicate. The communication between a tournament manager and players rests on a simple exchange of texts.

Beyond the game logic, the system will also need components for dealing with remote-player communications. These components will be based on the logical interfaces. The server component will perform the communication sign-up for remote players but will leave the logical sign-up to the tournament manager. You may wish to think of the communication sign-up as a "registration" step, which may result in actually being signed up or in ending up on a wait list. The client component will perform the communication sign-up for players on remote machines, connecting them to a remote server.

Our build plan consists of three phases, each ending in a product that illustrates the basics of our eventual product. The first two phases concern the game logic, the last one the communication layer.

The goal of the first phase is to build a complete game implementation, including game observers and possibly GUI-based players. This phase will thus implement the core of the system, including "house players":

- the basic game pieces: tiles, avatars, and the board;
- a rule checker, needed by both players and referees; and
- the player interface and basic implementations.

Once we have a player interface, we could ask some early adopters to write an implementation in our language.

The goal of the second phase is to construct the tournament management system, still in our chosen language. At this point, the company can demo the entire product on a single computer (or several, using remote-windowing systems).
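The game-piece layer and the dual role of the rule checker can be sketched as follows. This is a toy sketch: the real Fish board is hexagonal with holes and tile fish counts, all of which are flattened into a plain position-to-fish mapping here.

```python
class Board:
    """Toy board: positions map to fish counts. The real Fish board is
    hexagonal and supports avatar movement, which this sketch ignores."""

    def __init__(self, fish):
        self.fish = dict(fish)      # position -> number of fish on the tile
        self.occupied = set()       # positions currently holding an avatar

    def legal_placements(self):
        # rule check on behalf of a player: which placements are allowed?
        return sorted(set(self.fish) - self.occupied)

    def place(self, pos):
        # rule check on behalf of the referee: is this placement legal?
        if pos not in self.fish or pos in self.occupied:
            raise ValueError("illegal placement")
        self.occupied.add(pos)

board = Board({(0, 0): 2, (0, 1): 3, (1, 0): 1})
board.place((0, 0))
print(board.legal_placements())  # [(0, 1), (1, 0)]
```

Note how the same rule knowledge serves both sides: `legal_placements` answers the player's question, and `place` enforces the referee's.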
Constructing this complete system in one language should allow us to debug the logic layer in a systematic manner, without interference from bugs in the communication layer.

The goal of the third and final phase is to break up this monolithic prototype so that we can connect the manager to "house players" as well as remote players (constructed in any language). For this step, we will use the remote proxy pattern to "splice" in communication components and separate players from managerial software. We will then be able to build a sign-up server that accepts remote-player connections, collects them for a certain period, and then hands this collection of players to the tournament manager.

At this point, we can demonstrate this system to our investors as the alpha release of our product.
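The remote proxy idea mentioned above can be sketched as a class that presents the logical player interface to the referee while forwarding each call over the wire. Everything here is an assumption for illustration: the method name, the JSON message shape, and the transport (a plain callable standing in for a TCP connection).

```python
import json

class ProxyPlayer:
    """Remote-proxy sketch: looks like a logical player to the referee,
    but forwards each call as a JSON message over some transport."""

    def __init__(self, send):
        self.send = send            # send: request str -> reply str

    def _call(self, method, argument):
        reply = self.send(json.dumps([method, argument]))
        return json.loads(reply)

    def take_turn(self, state):
        return self._call("take-turn", state)

def fake_remote_client(message):
    # stands in for the participant's AI on the other end of the wire;
    # it simply echoes back the first legal move
    method, state = json.loads(message)
    return json.loads(json.dumps(state["legal_moves"][0]))

proxy = ProxyPlayer(lambda msg: json.dumps(fake_remote_client(msg)))
print(proxy.take_turn({"legal_moves": [[1, 2], [3, 4]]}))  # [1, 2]
```

Because the referee only sees the player interface, the same referee code drives "house players" and remote players alike, which is exactly what the splice is for.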