There is a strong need for high-accuracy and efficient modeling of extreme-mass-ratio inspiral (EMRI) binary black hole systems, because these are strong sources of gravitational waves that would be detected by future observatories. In this article, we present sample results from our Teukolsky EMRI code: a time-domain Teukolsky equation solver (a linear, hyperbolic partial differential equation solver using finite differencing) that takes advantage of several mathematical and computational enhancements to efficiently generate long-duration, high-accuracy EMRI waveforms. We emphasize here the computational advances made in the context of this code. Currently there is considerable interest in making use of many-core processor architectures, such as Nvidia and AMD graphics processing units (GPUs), for scientific computing. Our code uses the Open Computing Language (OpenCL) to take advantage of the massive parallelism offered by modern GPU architectures. We present the performance of our Teukolsky EMRI code on multiple modern processor architectures and demonstrate the high level of accuracy and performance it is able to achieve. We also present the code's scaling performance on a large supercomputer, NSF's XSEDE resource Keeneland (a 201-TeraFLOP/s, 120-node HP SL390 system with 240 Intel Xeon 5660 CPUs and 360 NVIDIA Fermi M2070 graphics processors, with the nodes connected by an InfiniBand QDR network). In this article we demonstrate that recent mathematical and computational advances made to our time-domain Teukolsky EMRI code have enabled it to achieve a high level of accuracy and efficiency. We emphasize the computational advancements, which make use of the OpenCL framework to take advantage of the massive parallelism offered by modern many-core GPU architectures. The order-of-magnitude gain in computational performance we obtain in this manner plays a critical role in our code achieving the desired level of accuracy and efficiency.
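The Teukolsky solver itself is not shown here; as a rough illustration of the time-domain finite-differencing approach the abstract describes for a linear hyperbolic PDE, here is a toy 1D wave-equation stepper in Python. The grid size, CFL factor, and Gaussian initial data are arbitrary choices for the sketch, not values from the EMRI code.

```python
import numpy as np

# Toy model: explicit leapfrog finite differencing for u_tt = u_xx,
# the same time-stepping pattern used for linear hyperbolic PDEs
# such as the Teukolsky equation (illustrative parameters only).

def step_wave(u_prev, u_curr, c2):
    """One leapfrog step with fixed (Dirichlet) boundaries; c2 = (c*dt/dx)^2."""
    u_next = np.zeros_like(u_curr)
    u_next[1:-1] = (2.0 * u_curr[1:-1] - u_prev[1:-1]
                    + c2 * (u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]))
    return u_next

n = 201
x = np.linspace(0.0, 1.0, n)
c2 = 0.25                               # (c*dt/dx)^2 <= 1: CFL stability condition
u0 = np.exp(-200.0 * (x - 0.5) ** 2)    # Gaussian pulse, zero initial velocity
u1 = u0.copy()

for _ in range(200):
    u0, u1 = u1, step_wave(u0, u1, c2)

print(abs(u1).max())   # pulse stays bounded: the explicit scheme is stable
```

On a GPU, each interior grid point in `step_wave` can be updated independently, which is exactly the data parallelism an OpenCL kernel exploits.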
The ability to perform high-accuracy and long-duration EMRI computations has enabled various interesting advances in gravitational physics. Using data generated by this code we have been able to make significant contributions to the development of effective-one-body models and gravitational waveform generation, which will ultimately positively impact the data analysis of current and future detectors (such as NSF's LIGO and future space-borne missions). In addition, results from our code have brought forth a better understanding of the "anti-kick", an intriguing aspect of the phenomenon of gravitational recoil in decaying binary systems due to gravitational wave emission. Finally, our code has also helped test Cosmic Censorship in the context of the capture of a small test particle by a near-extremal Kerr black hole. Justin McKennon, Gary Forrester and Gaurav Khanna. High Accuracy Gravitational Waveforms from Black Hole Binary Inspirals Using OpenCL. Proceedings of the 1st Conference of the Extreme Science and Engineering Discovery Environment: Bridging from the eXtreme to the campus and beyond. Article No. 14. 2012. arXiv:1206.0270v1 [gr-qc] [doi: 10.1145/2335755.2335808]
OPCFW_CODE
Dr. Michael Nelson on "Web Archives at the Nexus of Good Fakes and Flawed Originals": "You're in a desert walking along in the sand when all of the sudden you look down, and you see a tortoise..." The authenticity, integrity, and provenance of resources we encounter on the web are increasingly in question. While many people are inured to the possibility of still images being altered, the democratization of software to alter and synthesize audio and video will unleash a torrent of convincing "deepfakes" into our social discourse. The historical record will no longer be monopolized by institutions such as governments and journalism, but will become a competitive space filled with social engineers, propagandists, conspiracy theorists, and aspiring Hollywood directors. While the historical record has never been singular or unmalleable, it has never seen this scale of would-be editors, nor editors of such skill. Web archives have a role to play in verifying the integrity and priority of resources. Unfortunately, web archives have a 1990s, ad-hoc approach to trust, interoperability, and audit. We implicitly trust the Internet Archive in the same way we used to trust email, Google, Apple, and Facebook. That we do not currently associate web archives with surveillance, spam, and subterfuge does not mean they are somehow impermeable in a way the other tools and services are not; it only means that the theatre of conflict has yet to encompass web archives. As the political, cultural, and economic stakes of disinformation rise, we can expect two primary changes. First, existing, trusted web archives will be attacked. Obvious vectors will be the machines and facilities themselves, but more subtle attacks will be pages designed to be archived, which then masquerade as different pages, obfuscating the provenance of otherwise untrustworthy sources. The second approach is the result of the lowered cost, in terms of hardware and tools, of establishing a web archive.
When web archives were expensive, there were a limited number of known entities capable of running them. We now have a dynamic marketplace of web archives, many of which are short-lived, and most of which are owned or operated by those with several degrees of separation in our friend-of-a-friend network. In summary: is that really an archived tweet from 2016 showing your favorite politician in an unflattering situation? Or is it a deepfake, injected into a trusted archive, and then replicated across several less well-known archives, all of which are secretly operated by the same entity? About the Speaker: Michael Nelson, PhD, is a professor of computer science at Old Dominion University. He previously worked at NASA Langley Research Center from 1991 to 2002. Through a NASA fellowship, he spent the 2000-2001 academic year at the School of Information and Library Science, University of North Carolina at Chapel Hill. He is active in the Open Archives community and is an editor of the OAI-PMH, OAI-ORE, Memento, and ResourceSync specifications. He has developed many digital libraries, including the NASA Technical Report Server. In 2007, he received an NSF CAREER award. His research interests include web science, repository-object interaction, and digital preservation. Further information can be found at: https://www.cs.odu.edu/~mln/
OPCFW_CODE
SQL query to find available rooms from date x to date y

In the following relational schema, how can I derive the available "Basic" types of rooms which are not booked within a date range? This is my shot. I am working with Oracle in SQL*Plus:

SELECT *
FROM ROOM r, BOOKING b
WHERE NOT EXISTS
  (SELECT *
   FROM BOOKINGROOM br
   WHERE br.ROOMNO = r.ROOMNO
     AND br.BOOKINGID = b.BOOKINGID
     AND ARRIVEDATE < '01-FEB-2013'
     AND DEPARTDATE > '23-FEB-2013');

I also want the query to be a 'canned query', so that I supply the start-range and end-range dates manually. A subquery answer would be preferred. Sample insert:

INSERT INTO BOOKING VALUES (2314, 1001, TO_DATE('10-MAR-2013', 'DD-MON-YYYY'), TO_DATE('15-MAR-2013', 'DD-MON-YYYY'), 1225.00);

(Date comparison has perhaps been an issue in the answers below.)

The simplest approach would be:

SELECT DISTINCT r.*
FROM Room r
LEFT JOIN BookingRoom br
  ON r.FloorNo = br.FloorNo AND r.RoomNo = br.RoomNo
LEFT JOIN Booking b
  ON br.BookingId = b.BookingId
  AND (b.ArriveDate <= &end_range AND b.DepartDate >= &start_range)
WHERE b.BookingId IS NULL
  AND r.Type = 'BASIC';

If you absolutely must use a subquery, try this:

SELECT DISTINCT r.*
FROM Room r
LEFT JOIN BookingRoom br
  ON r.FloorNo = br.FloorNo AND r.RoomNo = br.RoomNo
LEFT JOIN Booking b
  ON br.BookingId = b.BookingId
WHERE (b.BookingId IS NULL
  OR b.BookingId NOT IN
     (SELECT BookingId
      FROM Booking
      WHERE ArriveDate <= &end_range AND DepartDate >= &start_range))
  AND r.Type = 'BASIC';

"Is it possible with a subquery?" Sure, but a subquery will likely be less performant. Any reason you prefer a subquery? You can add the ArriveDate and DepartDate filters as a 'canned' query without using a subquery. "I find subqueries difficult to understand, so I really want to use this example to understand them. LEFT JOIN, I understand, is simple." You mean, you enjoy working harder for its own reward? ;) The LEFT JOIN is the simplest way to find items that don't have matches in the range requested. "The above is also not a canned query." A canned query will have something like ArriveDate = &arrivedate, which prompts for a value, as opposed to fixing one. So substitute '01-FEB-2013' and '24-FEB-2013' with parameter names - it doesn't affect the logic of the query. "OK thanks, I have been working on LEFT JOIN with other examples, so a subquery is what I had in mind. I tried your example and get an error (missing expression) at the line with IS NULL OR NOT IN." @MooHa: my bad, fixed now. "Gave it a real hard shot, but your condition is incorrect, as rooms show as available despite being booked. :( It lists all rooms of type BASIC; it is not eliminating the booked room. ROOM 01 (BASIC type) is booked from 10-MAR-2013 to 15-MAR-2013, and the range given is 14-MAR-2013 to 20-MAR-2013, yet it still lists room 01." Try this query, which will detect conflicting bookings whether they are enclosing, enclosed by, or partly overlapping with the desired date range:

SELECT DISTINCT r.*
FROM Room r
LEFT JOIN BookingRoom br
  ON r.FloorNo = br.FloorNo AND r.RoomNo = br.RoomNo
LEFT JOIN Booking b
  ON br.BookingId = b.BookingId
  AND b.ArriveDate < &end_range
  AND b.DepartDate > &start_range
WHERE b.BookingId IS NULL
  AND r.Type = 'BASIC';

"Nope :( the booked room still displays; see my way of inserting data into BOOKING." @MooHa: For what dates (arrival, departure, and range)? "It lists all rooms of type BASIC; it is not eliminating the booked room. ROOM 01 (BASIC type) is booked from 10-MAR-2013 to 15-MAR-2013, and the range given is 14-MAR-2013 to 20-MAR-2013, yet it still lists room 01." @MooHa: Then I think you have a problem with your field types/date comparisons. Check this SQL Fiddle: http://www.sqlfiddle.com/#!4/57a5b/1/0 "Very odd, but the issue was with how I was inserting dates: it was with the TO_DATE('03-JUL-1980', 'DD-MON-YYYY') structure."
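For anyone puzzled by why the final query's date condition works, here is a small Python sketch of the same interval-overlap test; the dates mirror the example in the discussion (room booked 10-15 Mar, range requested 14-20 Mar), and the function name is just for illustration.

```python
from datetime import date

# Two date ranges overlap exactly when each starts before the other ends:
# a booking conflicts with the requested range iff
#   ArriveDate < end_range AND DepartDate > start_range
# (the condition used in the final SQL query above).

def conflicts(arrive, depart, start, end):
    """True if the booking [arrive, depart] overlaps the range [start, end]."""
    return arrive < end and depart > start

booked = (date(2013, 3, 10), date(2013, 3, 15))   # room 01's booking
wanted = (date(2013, 3, 14), date(2013, 3, 20))   # requested range

print(conflicts(*booked, *wanted))                             # True: exclude room 01
print(conflicts(*booked, date(2013, 3, 16), date(2013, 3, 20)))  # False: room is free
```

This single condition covers enclosing, enclosed, and partially overlapping bookings, which is why no extra cases are needed in the SQL.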
STACK_EXCHANGE
In this document we will cover some quick tips for troubleshooting wireless authentication with ACS. Go to ACS > Monitoring and Reports, and click Launch Monitoring & Report Viewer. A separate ACS window will open. Click Dashboard. In the My Favorite Reports section, click Authentications – RADIUS – Today. A log will show all RADIUS authentications as either Pass or Fail. Within a logged entry, click the magnifying glass icon in the Details column. The RADIUS Authentication Detail will provide much information about the logged attempts. The ACS Service Hit Count can provide an overview of attempts matching the rule(s) created in ACS. Go to ACS > Access Policies > Access Services, and click Service Selection Rules. Quick tips for troubleshooting PEAP authentication failures with an ACS server: When your client fails PEAP authentication with an ACS server, check whether you find the "NAS duplicated authentication attempt" error message in the Failed Attempts option under the Report and Activity menu of the ACS. You might receive this error message when Microsoft Windows XP SP2 is installed on the client machine and Windows XP SP2 authenticates against a third-party server other than a Microsoft IAS server. In particular, the Cisco RADIUS server (ACS) uses a different method to calculate the Extensible Authentication Protocol Type:Length:Value format (EAP-TLV) ID than the method Windows XP uses. Microsoft has identified this as a defect in the XP SP2 supplicant. For a hotfix, contact Microsoft and refer to the article "PEAP authentication is not successful when you connect to a third-party RADIUS server". The underlying issue is that on the client side, with the Windows utility, the Fast Reconnect option is disabled for PEAP by default. However, this option is enabled by default on the server side (ACS). In order to resolve this issue, uncheck the Fast Reconnect option on the ACS server (under Global System Options).
Alternatively, you can enable the Fast Reconnect option on the client side to resolve the issue. Perform these steps in order to enable Fast Reconnect on a client that runs Windows XP, using the Windows utility:
1. Go to Start > Settings > Control Panel.
2. Double-click the Network Connections icon.
3. Right-click the Wireless Network Connection icon, and then click Properties.
4. Click the Wireless Networks tab.
5. Choose the Use Windows to configure my wireless network settings option in order to enable Windows to configure the client adapter.
6. If you have already configured an SSID, choose the SSID and click Properties. If not, click New in order to add a new WLAN.
7. Enter the SSID under the Association tab. Make sure that Network Authentication is Open and Data Encryption is set to WEP.
8. Choose the Enable IEEE 802.1x authentication for this network option.
9. Choose PEAP as the EAP Type, and click Properties.
10. Choose the Enable Fast Reconnect option at the bottom of the page.
OPCFW_CODE
switching between newcommand and default

I've defined the following macro:

% makes tables more readable
\renewcommand{\arraystretch}{1.1}
\setlength{\tabcolsep}{5pt}

which adds extra spacing in tables. However, in one place in my report I need to use the default, as the extra spacing messes up the structure of the page (I have a side-by-side table and figures). I wonder if I could, for some table environments, reset the spacing to the default?

You can create a command to store the old length so you can go back to it later. For lengths, you could do it like this:

\newlength{\myparindent}
\setlength{\myparindent}{\parindent} % store the current value
\setlength{\parindent}{2em}          % modify it
\setlength{\parindent}{\myparindent} % restore original

Store the modifying macros in a macro that you can call where you need it:

\documentclass{article}

% Store modifiable content
\newlength{\storetabcolsep}
\AtBeginDocument{%
  \let\storearraystretch\arraystretch% \arraystretch
  \setlength{\storetabcolsep}{\tabcolsep}% \tabcolsep
}

% Makes tables more readable
\newcommand{\increasetablespace}{%
  \renewcommand{\arraystretch}{1.5}%
  \setlength{\tabcolsep}{12pt}}

% Restore stretching defaults
\newcommand{\restoretablespace}{%
  \let\arraystretch\storearraystretch% \arraystretch
  \setlength{\tabcolsep}{\storetabcolsep}}% \tabcolsep

\begin{document}

\increasetablespace
\begin{tabular}{ccc}
  \hline A & B & C \\
  \hline 1 & 2 & 3 \\
  \hline
\end{tabular}

\bigskip

\restoretablespace
\begin{tabular}{ccc}
  \hline A & B & C \\
  \hline 1 & 2 & 3 \\
  \hline
\end{tabular}

\end{document}

The idea is to store, \AtBeginDocument, any content that you potentially modify within the document. Then, via the macro call \increasetablespace, the settings are updated to suit your stretched-out needs, and restored using \restoretablespace.

I'd be inclined to call it \defaulttablespace (even if it's hard to spell correctly) instead of \restoretablespace, since the latter might imply I'm restoring it to some alternate that I've defined that isn't the default.
STACK_EXCHANGE
Dollar store DIY decor is a great way to decorate your home for fall. If you like this video, I have a full fall playlist full of projects and inspiration for you here: http://bit.ly/WWFallVids Dollar Tree Pumpkin Video I Mentioned: https://youtu.be/J_CE_byGq98 Christmas in July Video featuring a Christmas faux metal bucket: https://youtu.be/xyarMsbHoKs Supplies + Links: Miter box & Saw (Under $20 on Amazon!) – https://amzn.to/315SxGz Minwax Dark Walnut Stain: https://bit.ly/2z353uF Brown Paint – Nutmeg Brown: https://bit.ly/328gOw6 Orange Paint – Pumpkin Orange: https://bit.ly/32bersn White Waverly Chalk Paint: https://bit.ly/3lMFdyX Buffalo Check Pitcher: https://bit.ly/3ij0UVj Painters Tape I recommend: https://bit.ly/3ioLoax #DollarTreeDIY #FallDIY #BuffaloCheck #Farmhousedecor ★ CONNECT WITH ME ON OTHER PLATFORMS & SHARE YOUR CREATIONS! ★ Tag #WhiskeyandWhit @Whiskeyandwhit Business Inquiries: email@example.com ★ AMAZON FAVORITES* ★ A lot of the products I get questions on are linked in my Amazon storefront for easy shopping. Shop my Amazon store: https://amzn.to/2UQwTjC My Camera: https://amzn.to/2Nzdqom ★ PRODUCTS I LOVE & COUPON CODES* ★ Shop my favorites through Like to Know It here: https://www.liketoknow.it/whiskeyandwhit CRICUT – I have an Explore Air 2, here’s the latest I’d recommend for a beginner: http://shrsl.com/2ao8h Free Mrs. Meyers 5-Piece Set: Use this link and you’ll get a ton of free goodies with your first order from Grove Collaborative: http://bit.ly/2zG4wwM Pregnancy & Postpartum FAVES: Get 20% off your first Kindred Bravely purchase when you order through my link: kindredbravely.com/whiskeyandwhit (discount applied at checkout, some exclusions apply)! Goli Gummies – all the benefits of Apple Cider Vinegar minus the icky taste. These things are delicious! Learn more and purchase here: https://go.goli.com/whiskeyandwhit and use code “WhiskeyandWhit” for a discount! Fetch Rewards (easy peasy rewards from grocery shopping!) 
– https://bit.ly/2Ay6caM Enter E2YAY at the sign-up screen and you’ll get 2,000 Fetch Points ($2.00 in points!) when you complete one receipt. *Indicates a section contains affiliate links. ★ COMMENT & DISCLOSURE POLICY ★ As a believer in positivity and making my corner of the Internet a fun and safe place to be, rude, offensive, mean, malicious, or hurtful comments will be removed. There are plenty of creators on this platform, so if you aren’t a fan of my channel, no worries. I’m sure you can find someone that you are a fan of. 🙂 I will also do my best to respond to questions and comments to continue to connect with you. Music licensed via Epidemic Sound *Indicates a section contains affiliate links. As an Amazon Associate, I earn from qualifying purchases. Meaning, at no cost to you, I may earn a commission from links I share. It helps support my channel by sharing things I would normally share anyway. Thanks for supporting Whiskey & Whit!
OPCFW_CODE
The best side of blockchain: You might think that the lack of central control could mean chaos, but that's not true at all. That's because blockchain, the technology behind Bitcoin, is one of the most accurate and secure systems ever created. If you've always wanted to own some cryptocurrency, a new app may be a great way to get your hands on some. If you have specialized computer hardware, you can actually use your processing power to help process Bitcoin transactions. This is referred to as "mining", and users are rewarded in Bitcoins for their processing power. We'll cover Bitcoin mining thoroughly in another article.

An unbiased view of blockchain: Blocks hold batches of valid transactions that are hashed and encoded into a Merkle tree.[1] Each block contains the cryptographic hash of the prior block in the blockchain, linking the two.

What is Bitcoin: Bitcoins are entirely digital coins designed to be 'self-contained' in their value, with no need for banks to move and store the money. By design, a blockchain is resistant to modification of its data. It is "an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way".[7] To be used as a distributed ledger, a blockchain is typically managed by a peer-to-peer network collectively adhering to a protocol for inter-node communication and validating new blocks.

Everything about Bitcoin mining: The idea of having a currency to use in the digital world was the prize for many cyberpunks. It would take another seven years before another breakthrough would occur. In 1997, 'HashCash' was invented by Adam Back. Mr. Back's proof-of-work system is the foundation for Bitcoin.

The first blockchain was conceptualized by an individual (or group of individuals) known as Satoshi Nakamoto in 2008. Nakamoto improved the design in an important way, using a Hashcash-like method to add blocks to the chain without requiring them to be signed by a trusted party.

The basic principles of what is Bitcoin: But how do miners make a profit? The more computing power they manage to accumulate, the more chances they have of solving the cryptographic puzzles. Once a miner manages to solve the puzzle, they receive a reward as well as a transaction fee. Blockchain technology can be integrated into many areas. The primary use of blockchains today is as a distributed ledger for cryptocurrencies, most notably Bitcoin. There are a few operational products maturing from proof of concept by late 2016. Bitcoin's success has spawned several competing cryptocurrencies, known as "altcoins", such as Litecoin, Namecoin and Peercoin, as well as Ethereum, EOS, and Cardano. Today, there are literally thousands of cryptocurrencies in existence, with an aggregate market value of over $200 billion (Bitcoin currently represents over 50% of the total value). Liteshack allows visitors to view the network hash rate of a variety of coins across six different hashing algorithms. They even provide a graph of the network's hash rate so you can detect trends or signs that the public is either gaining or losing interest in a particular coin.

How blockchain can save you time, stress, and money: Cryptocurrencies hold the promise of making it easier to transfer funds directly between two parties in a transaction, without the need for a trusted third party such as a bank or credit card company; these transfers are facilitated through the use of public keys and private keys for security. Because of this, in many scenarios Bitcoin is cheaper to use than standard wire transfers or money orders. Also, unlike fiat currencies, Bitcoin was designed to be digital by nature, meaning you can add more layers of programming on top of it and turn it into "smart money", but more on that in later videos.
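As a minimal sketch of the hash-linking described above (each block stores the cryptographic hash of the previous block, so altering any earlier block invalidates everything after it), here is an illustrative Python example. The field names and JSON-based hashing are arbitrary choices for the sketch, not any real cryptocurrency's format.

```python
import hashlib
import json

# Each block records the hash of its predecessor; verifying the chain means
# recomputing every hash and checking the links still match.

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash, transactions):
    return {"prev_hash": prev_hash, "transactions": transactions}

genesis = make_block("0" * 64, ["coinbase"])
block1 = make_block(block_hash(genesis), ["alice->bob: 5"])
block2 = make_block(block_hash(block1), ["bob->carol: 2"])
chain = [genesis, block1, block2]

def chain_is_valid(chain):
    return all(chain[i + 1]["prev_hash"] == block_hash(chain[i])
               for i in range(len(chain) - 1))

valid = chain_is_valid(chain)
print(valid)           # True: every link matches

# Tamper with an early block: its hash changes, so the stored link breaks.
genesis["transactions"] = ["coinbase-forged"]
valid_after = chain_is_valid(chain)
print(valid_after)     # False: the chain no longer verifies
```

This is the property that makes a blockchain "resistant to modification of its data": a forger would have to recompute every subsequent block's hash, which proof-of-work makes prohibitively expensive.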
OPCFW_CODE
Is it necessary to use HTTPS for server-side requests? I'm allowing users to log in to my site using third-party credentials (let's call it SocialFoo). Users submit their username and password for SocialFoo via a POST over HTTPS. After the POST is made, on the server side, I validate the credentials via an API call to SocialFoo. Is there any security benefit to making the API call to SocialFoo via HTTPS rather than HTTP? Since this request is theoretically not exposed to the user, is there a security risk beyond packet sniffers at the data center? Usually, at least while your session is active, some authentication token is transmitted to the server with every request. To prevent this token from being captured and tampered with while crossing the network, ensure that you use SSL with all pages that require authenticated access. This doesn't really address my question. I'm asking about a secondary request done server side - it is inaccessible to the user as far as I understand. As I mentioned in the question, the first request is over HTTPS, so your comment does not address my question. During the first request an authentication ticket was sent to the client. For the following requests, the client sends this ticket back to the server to authenticate each particular request. So these requests definitely must be secured by HTTPS too. Why does this not address your question? YES. There is always a risk of having information stolen if it is not transmitted securely. As a general rule of thumb, any and all sensitive data going in or going out should be encrypted. It is not only a "security benefit" to use HTTPS; it should be a requirement. Can you please address the specific risk? I understand your point but am looking for details. Someone can be packet sniffing your server, the API service's servers, or any network traffic in between. On a shared host, a hack can be used to scan others' logs to steal information. On a VPS system, recording network traffic can get the information.
There are dozens of ways to hack, sniff, stalk, watch, steal, and record information. Take your pick. To say "please address the specific risk" is to ask a question about which volumes of books have been written. There isn't just ONE risk; there are dozens. If you don't understand that, try researching cybersecurity in general... I would say that the answer to your specific question is no. SSL encryption is used to protect network traffic while it is transmitted on the network. If both servers are on the same local network and that network is located in a single data center, then the only reason for using SSL is to protect against threats on that network in that data center. Another question is whether it is good security practice not to protect against threats on the local network. There's not really any risk aside from anyone sniffing the traffic between you and the remote host. SSL protects you from just that when used properly, though. Encryption prevents your data from being understood when merely observed, and authentication lets you know it most likely has not been tampered with. Your specific question about sniffers at data centers... If both hosts are located in a data center under the same switch, you're probably fine. If they're not, then you need to ask about VLAN bridging between your two switches, assuming you're renting both switches for both hosts. If your data center won't let you just up and bridge the two, then you will want to use SSL or another well-used scheme. If you want to see how easy it is to look at traffic under your switch (or even further, sometimes), just run tcpdump (Linux) or Wireshark (Windows). Just be safe, use SSL. There's no real problem with performance these days either, unless you're using 16384-bit keys, running your own CA with real-time CRL checking, and need over 1,000 connections/second. Or you're using old hardware... in which case, at least use 2048-bit keys.
Also depending on your business or the content of your transmissions, it may be required as a regulation in your country to secure the data or at least make a reasonable attempt at maintaining confidentiality or authentication of some kind. An added note: You mentioned that since the request isn't exposed to the user, you might not need SSL. SSL protects the user's data. It doesn't do anything to prevent a user from reverse engineering your protocol, tampering with data before it's transmitted, or anything like that. I may be misunderstanding your point, but I figured I would throw this out there.
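To make the advice concrete, here is a hedged Python sketch of the server-side call using only the standard library. "SocialFoo" and its endpoint URL are the hypothetical names from the question, not a real API; the point is simply that the server-to-server request goes over HTTPS with certificate verification left on.

```python
import ssl
import urllib.parse
import urllib.request

# ssl.create_default_context() enables certificate verification and hostname
# checking by default, which is what protects the credentials in transit,
# even "inside" a data center.
context = ssl.create_default_context()

def validate_credentials(username, password):
    """Server-side credential check against the hypothetical SocialFoo API."""
    data = urllib.parse.urlencode({"user": username, "pass": password}).encode()
    req = urllib.request.Request("https://api.socialfoo.example/validate", data=data)
    # Passing the default context keeps verification enabled; never disable
    # it (e.g. via an unverified context) for calls that carry credentials.
    with urllib.request.urlopen(req, context=context, timeout=10) as resp:
        return resp.status == 200

# The default context enforces verification:
print(context.verify_mode == ssl.CERT_REQUIRED)   # True
print(context.check_hostname)                     # True
```

Disabling verification would still encrypt the traffic but would let an on-path attacker present their own certificate, which defeats the purpose of using HTTPS for this call.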
STACK_EXCHANGE
M: Ask HN: Need Questions to Ask the Copy Hacker on TechZing - jayro We've scheduled an interview with Joanna Wiebe of Copy Hackers for this coming Tuesday and we'd like to get a dozen or so really good questions for her to answer on the air. If you're a startup and you'd like help with any of the following issues, then please email your question to .<p><pre><code> - Communicating your value propositions - Describing your features - Communicating reasons to buy - Messaging "what we do" - Improving first impressions - Working on tone and feel - Communicating value - Cutting technical writing without losing info - Using testimonials effectively and writing FAQs </code></pre> Please include your name, the URL of your website, what your specific question is and as much information as you think she'll need to answer your specific question. R: rhizome Who? R: tamersalama If you're asking about Joanna Wiebe: <http://news.ycombinator.com/user?id=bloggergirl> <http://www.copyhackers.com/> Edit: And [http://www.copyhackers.com/2011/10/18/how-1-hn-post-compelle...](http://www.copyhackers.com/2011/10/18/how-1-hn-post-compelled-me-to-leave-intuit-create-new-startup-for-startups/)
HACKER_NEWS
The rise of machine learning, bolstered by heterogeneous computing, as an important tool in science seems to gather more evidence daily. Recently a group of scientists from Fermilab, MIT, CERN, the University of Washington and elsewhere demonstrated a new ML technique for accelerating identification of high-energy particle signatures using Microsoft's Project Brainwave platform, which deploys FPGAs. The researchers were able to demonstrate a 30x to 175x reduction in the time required compared to existing methods, in work they emphasize is a proof-of-concept effort. The researchers have written a paper posted on arXiv (FPGA-accelerated machine learning inference as a service for particle physics computing) and there's a brief account of their work posted on the MIT website. In brief, the team was able to train their new system to identify images of top quarks, the most massive type of elementary particle, some 180 times heavier than a proton. Repeating a refrain well known to computer scientists, the physics researchers note in their paper the importance of the emergence of heterogeneous computer architectures (CPUs plus accelerators) to speed a wide range of computations, including AI methods. Finding code optimized for these new systems, however, can be problematic. They write, "To capitalize on this new wave of heterogeneous computing and specialized hardware, particle physicists have two primary options:
- Adapt domain-specific algorithms to run on specialized accelerator hardware. This option takes advantage of specific human expert knowledge, but can be challenging to implement on new and potentially changing hardware platforms with different computing paradigms (such as CUDA or Verilog).
- Design ML algorithms to replace domain-specific algorithms. This option has the advantage of running natively on specialized hardware, but it can be a challenge to map specific physics problems onto ML solutions."
In this instance, the researchers chose the second option, in which a known ML algorithm is adapted to solve the physics problem: "We focus on the acceleration of the ResNet-50 convolutional neural network model and adapt it to physics applications. As an example, we interpret jets, collimated sprays of particles produced in LHC collisions, as 2D images that are classified by ResNet-50. We keep the same architecture but train new weights to distinguish top quark jets from light quark and gluon jets. Using a publicly available dataset, we compare our model against other state-of-the-art models in the literature and find similarly excellent performance. We also discuss the potential for Brainwave to be used in other particle physics applications. For example, neutrino event reconstruction deploys large convolution neural networks in their experiments and large network inferences are a bottleneck in their current computing workflow. Coprocessor-accelerated machine learning inference could be deployed for such neutrino experiments today." They also commented on the Microsoft Brainwave platform: "…FPGAs as a computing solution offers a combination of low power usage, parallelization, and programmable hardware. Another important aspect of FPGA inference for the particle physics community, compared to GPU acceleration, is that batching is not required for high performance; FPGA performance is not diminished for serial processing. The Brainwave system, in particular, has demonstrated the use of FPGAs in a cloud system to accelerate ML inference at large scale." It's an interesting paper and one of what seems like a number of emerging efforts by domain scientists to tackle the use of AI on heterogeneous architectures and capture their lessons for others to take advantage of.
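As a rough illustration of the "jets as 2D images" idea quoted above (not the researchers' actual preprocessing), here is a small numpy sketch that bins toy particle data into a pT-weighted eta-phi image of the kind a CNN such as ResNet-50 would classify. The constituent values, bin counts, and ranges are arbitrary choices for the sketch.

```python
import numpy as np

# Toy jet image: bin a jet's constituents in (eta, phi), weighting each
# pixel by transverse momentum. Real analyses build such images from LHC
# data; here the constituents are random stand-ins.

rng = np.random.default_rng(0)
n_particles = 50
eta = rng.normal(0.0, 0.4, n_particles)   # pseudorapidity of constituents
phi = rng.normal(0.0, 0.4, n_particles)   # azimuthal angle
pt = rng.exponential(5.0, n_particles)    # transverse momentum (GeV)

image, _, _ = np.histogram2d(
    eta, phi,
    bins=32,
    range=[[-1.0, 1.0], [-1.0, 1.0]],
    weights=pt,                            # pixel intensity = summed pT
)
image /= image.sum()                       # normalize before feeding a CNN

print(image.shape)   # (32, 32): a single-channel "jet image"
```

A classifier then treats each such image like any other picture; the paper's contribution is serving that inference from cloud FPGAs rather than CPUs or GPUs.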
Link to paper: https://arxiv.org/pdf/1904.08986.pdf

Artificial intelligence interfaced with the Large Hadron Collider can lead to higher precision in data analysis, which can improve measurements of fundamental physics properties and potentially lead to new discoveries. Image credit: Fermilab
In CSS, is ".class1.class2" legal and common usage? I am quite used to seeing div.class1 and #someId.class1, but what about .class1.class2? And I think it is identical to .class2.class1? There was an element with id someId, but now we have two elements of this type showing on the page, so I want to add a class and use the class instead of the id, hence .class1.class2 instead of #someId.class1.

I didn't know you could do this, very helpful.

It will select items with both classes, so not items with either one: <span class="class1 class2"></span>

I'd have thought if it did this it would be wrong. Surely .class1.class2 means children of class1 that are class2? So for example, This is styled.. applying it as shown in your example above makes no sense to me; create a new style with a single name if you only want it to apply to elements with 2 classes... otherwise your CSS gets very confusing.

If there's a space between them (e.g. .class1 .class2) that would be true. And you could use it if you style both separately, but the CSS for one screws up the CSS for the other, so on elements that have both you need to override that.

Yes, it is both legal and common.
In the element, you would have something like this:

<div class="class1 class2">Hello</div>

The browser will interpret this rule if you give your element both classes:

.class1.class2 {width:500px; height:300px;}
<div class="class1 class2">&nbsp;</div>

If you do it like this, the rule will not match, resulting in a div with no styles:

.class1.class2 {width:500px; height:300px;}
<div class="class2">&nbsp;</div>

This will be interpreted (resulting in an element with dimensions of 500px x 300px):

.class1 {width:500px;}
.class2 {height:300px;}
<div class="class1 class2">&nbsp;</div>

The common use of CSS is to tell the browser that a certain element with an ID or CLASS of a certain name will get a set of styles, or that a certain ID or CLASS will get a set of styles, like so:

Ex 1: .class1 {width:500px;} -> elements with this class will get 500px of width.
Ex 2: div.class1 {width:500px;} -> only a div element with this class will get 500px of width.
Ex 3: div.class1, h1.class1 {width:500px;} -> only div and h1 elements with this class will get 500px of width.

You can read valid information about CSS at: W3C CSS SYNTAX PAGE

It's nice for syntactic styling. To give you an example, let's say you have the following HTML:

<div class="box"> </div>
<div class="box"> </div>
<div class="box"> </div>
<div class="box"> </div>

You can add a second (and third, fourth, etc.) class that modifies "box". For example:

<div class="first odd box"> </div>
<div class="second even box"> </div>
<div class="third odd box"> </div>
<div class="fourth even box"> </div>

Then, in styling, to style different box groups, you can do the following:

.odd.box { }
.first.box, .fourth.box { }
.first.box, .even.box { }

Just wanted to confirm the answer given by Jouke van der Maas, which is the right answer.
I would like to quote the following excerpt from the CSS 2.1 specification:

5.2 Selector syntax

A simple selector is either a type selector or universal selector followed immediately by zero or more attribute selectors, ID selectors, or pseudo-classes, in any order. The simple selector matches if all of its components match.

[snip]

Since the .classname selector is equivalent to the [class~="classname"] selector, it is an attribute selector. Note the "in any order" bit. Hence the selector .class1.class2 is identical to the selector .class2.class1, and matches both elements like

<span class="class1 class2">Hello World</span>

as well as

<span class="class2 class1">Hello World</span>

which is the same thing, as well as

<span class="class1 class2 class3">Hello World</span>

etc... You can also get even more fancy.
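The "in any order" rule amounts to simple set logic: a chained class selector matches an element exactly when the element's class attribute contains every class the selector names, regardless of order. A minimal illustrative sketch of that matching logic (not a real CSS engine):

```python
def matches(selector: str, class_attr: str) -> bool:
    """Return True if a chained class selector like '.class1.class2'
    matches an element whose class attribute is class_attr.

    Every '.name' in the selector must appear among the element's
    space-separated class tokens; order is irrelevant.
    """
    wanted = set(selector.strip().split(".")[1:])  # {'class1', 'class2'}
    have = set(class_attr.split())
    return wanted <= have                           # subset test

print(matches(".class1.class2", "class1 class2"))         # True
print(matches(".class2.class1", "class1 class2 class3"))  # True
print(matches(".class1.class2", "class2"))                # False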
...satisfied do we have a consumer equilibrium corresponding to constrained utility maximization.

Let's do an example. Suppose you have a square-root utility function and the following data on prices and income:

U(X,Y) = X^0.5 Y^0.5
PX = 1
PY = 1
M = 10

We calculate the MRS as:

MRS = MUX/MUY = (0.5 X^-0.5 Y^0.5) / (0.5 X^0.5 Y^-0.5) = Y/X

Now we can say MRS = PX/PY, so:

Y/X = 1/1, or X = Y

and, from the budget constraint:

X + Y = 10
thus X + X = 10
2X = 10
X = 5, Y = 5

Thus, the optimal solution for this problem is to choose X* = Y* = 5 for a maximum utility of X^0.5 Y^0.5 = 5^0.5 * 5^0.5 = 5^1 = 5. We also have an exactly satisfied budget constraint of:

X + Y = 10
5 + 5 = 10

Consumer Demand Functions

Let's take this a step further and generate a consumer's demand function. To do this we need to calculate the consumer equilibrium for every value of prices PX and PY. This can be done by solving for the quantities of goods demanded in terms of the prices alone. So now we change our example above as follows:

U(X,Y) = X^0.5 Y^0.5
PX = ?
PY = ?
M = 10

We calculate the MRS as before:

MRS = Y/X

Now we can say MRS = PX/PY, so:

Y/X = PX/PY, or PY*Y = PX*X

and the budget constraint is:

PX*X + PY*Y = 10

Now, given the two relationships above, we can solve for the optimal quantities of X and Y as before (this time, however, this will yield a demand function, not just a single consumer equilibrium):

Y = (PX/PY)*X
so PY*Y = PX*X
PX*X + PX*X = 10
2*PX*X = 10
X = 5/PX

This gives us the demand for good X as a function of the price PX. Similarly, we can get the demand function for good Y as a function of the price PY: Y = 5/PY. Using these functions, and varying the prices for each, one can trace out the demand curve for each good.

Rational Behaviour: Producer Equilibrium

Since the theory of producer equilibrium does not differ significantly from the theory of consumer equilibrium, we will examine producer theory in the context of a comparison with consumer theory.
CONSUMER EQUILIBRIUM
- goods: X, Y
- utility: U = U(X,Y)
- given prices: PX, PY
- given income: M
- indifference curve: MRS = -slope
- budget constraint: PX*X + PY*Y = M

PRODUCER EQUILIBRIUM
- inputs: K, L
- output: Q = f(K,L)
- given prices: r...

This note was uploaded on 05/25/2010 for the course ECON 301 taught by Professor Sning during the Spring '10 term at University of Warsaw.
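The demand functions derived in the notes (X = 5/PX with M = 10, i.e. X* = M/(2PX) in general) can be checked numerically. The sketch below uses a function name of my choosing and the Cobb-Douglas result that each good receives half of income:

```python
def cobb_douglas_demand(px: float, py: float, m: float):
    """Demand for X and Y when U(X, Y) = X^0.5 Y^0.5.

    From MRS = Y/X = px/py and the budget px*X + py*Y = m,
    each good gets half of income: X* = m/(2*px), Y* = m/(2*py).
    """
    return m / (2 * px), m / (2 * py)

x_star, y_star = cobb_douglas_demand(px=1, py=1, m=10)
utility = x_star ** 0.5 * y_star ** 0.5

print(x_star, y_star)           # 5.0 5.0 -- matches the notes' equilibrium
print(x_star * 1 + y_star * 1)  # 10.0 -- budget exactly exhausted
print(utility)                  # approximately 5, the maximum utility
```

Varying `px` while holding `m` fixed traces out the demand curve for X, exactly as the notes describe.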
What flash trigger will work to fire a monolight, which will in turn trigger others?

I have 3 Bowens monolights, model 400D. I now have a Fuji XPro1 digital camera. What I would like to do is fire one of the monolights via a transmitter on the hot shoe of the camera, with a receiver plugged into one of the Bowens lights, which has a 6.3mm socket. The other two should fire off automatically, as they have a Monocell plugged into them. That's if I can get the one light to fire. What flash trigger would work with this setup?

Any basic radio trigger will work for this setup. The only issue may be the sync voltage of your monolight; most radio triggers will not work correctly if the voltage is too high. Another option is to have all your monolights run as optical slaves and trigger them all by firing a speedlight on your camera at the ceiling (or towards one of the flash heads if you are outside).

Generally speaking, any flash radio trigger will work that functions as a transmitter on the XPro1 hotshoe and has a receiver you can plug into the 6.3mm socket on the monolight. The main thing to look for with manual radio triggers on mirrorless cameras is whether you can get the on-camera unit to act as a transmitter. The majority of the manual radio triggers out there are designed to work with Canon or Nikon dSLRs, and some of the transceiver units were designed to auto-switch into transmitter mode by sensing a signal on something other than the sync pin. Which pin and the type of signal is system-specific, so the Canon units work properly on Canon and the Nikon units work properly on Nikon. But a Nikon unit won't work properly on Fuji X because the pins/contacts are placed differently and no contact is actually made. And a Canon unit won't work properly on Fuji X, even though the pins/contacts line up, because the signal is on the wrong pin. Upshot: with Fuji X, the on-camera unit never goes into transmitter mode.
An example of a trigger like this to avoid would be the Yongnuo RF-603 (mark I). So whatever trigger you're interested in, it's probably good to find a Fuji X message board and ask around, or search to see if the one you like is compatible with the XPro1 first. Or, just make sure either that the transceiver unit has a way to explicitly set it into transmitter mode, or that there are separate dedicated transmitter and receiver units so there's no autoswitching you have to override. Also be aware that Canon/Nikon units may have an odder fit and more difficulty seating properly on the Fuji X hotshoe; sometimes using a Nikon unit will work better than trying to use a Canon one.

With the receiver units, the only thing you need to discover is whether a receiver unit in the triggering system has a sync output port, what type of port it is (PC, 2.5mm, and 3.5mm being the most common), and then get the correct cable to hook that up to your monolight (e.g., a 3.5mm -> 6.3mm cable). For a list of some popular manual-only radio triggers, see: http://flashhavoc.com/flash-trigger-guide-manual/

If a flash trigger is a standard single-pin connection then it should work with any camera that has a hot shoe. Are you talking about ETTL- and iTTL-compatible triggers?

@HarryJamesSanderson but some manual-only triggers are not single-pin. The Yongnuo RF-602/603/605 have a full complement of pins and a subset of contacts and use them for non-sync functions (wake-up, auto Rx/Tx switching). So that's why there can be unexpected consequences on cameras with different TTL communication protocols but identical pin/contact placement.
Instructions provided describe how to rotate symbols in a Schematic diagram based on feature attributes. Symbols in a Schematic diagram can be rotated based on the underlying feature attributes by configuring a Direct Property. Depending on the underlying data, it may be necessary to first convert from a Geographic value to Arithmetic. This conversion is done with a Script Attribute if using ArcGIS 9.2 Service Pack 5 or higher. First, figure out whether the underlying rotation value is Geographic or Arithmetic; Schematics only understands Arithmetic. This article assumes that a conversion is needed. If the data does not need to be converted, skip steps 6 through 9 below. - Edit the Schematic Dataset using ArcCatalog. Right-click the Schematic Dataset and select ‘Edit Project...’. - In the Schematic Designer application, expand the Element Types tree view node. Follow the next steps for each Element Type that requires rotation. - Right-click the Element Type that needs symbols rotated and select ‘Create Attribute...’ to open the Create Element Type Attribute dialog box. - Type a name for the attribute in the Name text box. Pick Static Attribute from the Type drop-down control. Click OK on this dialog box, which opens the Attribute Editor dialog box. - The Attribute Editor dialog box displays a list of all the fields in the underlying feature class. Double-click the field that contains the rotation data and click OK on the dialog box. The value in that field is now kept in memory when a diagram is opened or created. - If the value needs to be converted, use a Script Attribute. Right-click the Element Type and select ‘Create Attribute...’ to open the Create Element Type Attribute dialog box. - Type a name for the attribute in the Name text box. Select Script Attribute from the Type drop-down control. Click OK on this dialog box. - On the right side of the Schematic Designer form, click the Add Parameter button. This adds a new Parameter Name1 line.
Use this drop-down control to select the name of the attribute that was created in step 4. - Click in the Script field and type in the formula: 'name of attribute from step 4' * -1. So if the attribute created in step 4 was named 'RotValue', the script would end up looking like: RotValue * -1. - Now there is an attribute with the data that can be used for the rotation. To finish, create a property that uses the attribute. Right-click the Element Type and select ‘Create Property...’ to open the Create Property dialog box. - Type a name for this property in the Property Name text box. Select the 'Direct' option instead of the default, which is 'Textual'. Use the Direct Effect drop-down control to select 'Rotation' and click OK on the dialog box. - On the right side of the Schematic Designer screen, use the Attribute Name drop-down control to pick the attribute that this Direct Property will use. If the underlying data did not need to be converted, select the attribute defined in step 4 above. If it did need to be converted, select the attribute defined in step 7 above. - Save the work and exit the Schematic Designer. Now test the work in ArcMap by either creating a new diagram or opening an existing diagram and updating it. Existing diagrams must be updated for the new functionality to work, because the definition has been modified since those diagrams were created.
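The Script Attribute's conversion is just a sign flip. Sketched outside ArcGIS as a hypothetical helper (the function name is mine; whether your data needs plain negation or a different geographic-to-arithmetic formula depends on the dataset):

```python
def to_arithmetic(rot_value: float) -> float:
    """Mirror the Script Attribute formula 'RotValue * -1'.

    The article's conversion simply negates the stored rotation value
    so that Schematics receives an Arithmetic angle.
    """
    return rot_value * -1

print(to_arithmetic(30.0))   # -30.0
print(to_arithmetic(-45.0))  # 45.0
```
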
Disk Utility has not found ANY permissions to repair for many months now, through multiple updates. Mac OS X Lion 10.7.4 (11E53). I tried running permissions repair through Terminal and it ran just fine and found nothing to repair. By the way, I have no RAID drives... just 4 internal and 2 external. I have sent a bug report also. My machine has started crashing on start up or starting up slowly... which may or may not be related. Just adding my 2 cents.

I know this is old, Michael, and who knows if you are still around, but your post gave me a good laugh. I'm not making fun or anything, but the way it's written makes it sound like you couldn't repair permissions on your wives. I absolutely know how that feels!!! Anyway, again, not making fun, just thanking the list for a hearty chuckle.

If the Disk Utility GUI does not repair permissions, how can the command line do any better? diskutil uses the same repair packages. So with 10.6.8 and a RAID volume it does not seem possible to repair disk permissions. I have tried the GUI, the command line and Cocktail - they all of course do the same thing, which does not work. What does work is if you go in and change permissions manually with chmod.

I have a 1TB external hard drive, and I am not a professional. I had the same problem, exactly like yours. I tried to repair my HD but couldn't. Finally I found Miro was using my hard drive. I quit Miro and tried again, and that solved my problem. The error message was something like "live volume", or I don't remember.

l0l0's post is helping me do my repairs on an external SoftRAID tower via Terminal. "Repair Disk" didn't work in Disk Utility but I'm able to run things in Terminal. I used this step: then I did: sudo diskutil repairDisk diskx (mine was at disk8), then I did: sudo diskutil repairVolume diskx (mine was at disk8). These worked for me. I tried Disk Utility again hoping I had jumpstarted it but it still doesn't work.
I wonder if Disk Utility is somehow not running with sufficient permissions... (the sudo makes you the boss in Terminal). I'm going to explore some more.

For anyone trying to repair permissions on an external or internal drive that doesn't have a bootable OS on it: the option will be greyed out. Repairing permissions is only for boot drives.

I kinda have the same problem. Disk Utility will do the Verify Disk Permissions and Repair Disk Permissions, but the Repair Disk button is not active. Upon disk verification it says I have to repair it, but it won't work. I tried restarting like it says but it still won't work.
A Complete Guide To Organizing Your Workflow With Agile Epics

If your project management teams aren’t meeting deadlines, or if they feel overwhelmed, there might be a workflow problem. Adding epics in agile development can help your teams prioritize tasks and focus on what matters.

What is an agile epic?

In literature, you might think of an epic as a larger-than-life story, one that spans years or even generations. An agile epic is similar in that it is a large block of work that can easily be divided into smaller pieces called user stories (think chapters in a book), which in turn are organized into sprints, which are to be completed within a specific timeframe. Going back to the book analogy, a sprint is like planning to read two chapters before falling asleep.

Rather than a task, think of an epic as a high-level goal or requirement. An epic might not have all the details a team needs; those are for user stories. In addition, epics aren’t written in stone, as they can change as the project dictates. Let’s say, for example, that your New Year’s resolution is to get in shape:

- Theme – Get in shape
- Epic – Run a marathon by the end of the year
- Story – Join a gym
- Story – Buy running shoes
- Sprint – Complete by the following Monday

Perhaps you are ready to run a marathon by the middle of the year, or maybe you injure yourself and have to get in shape by swimming or riding a bicycle. Obviously, that’s a simplified example of an epic. DevOps epics would have far more user stories and sprints and a lot more variables.

. . .

What are the benefits of epics?

Smaller pieces are easier to manage

As most who’ve accomplished complex goals will tell you, tackling a project in small pieces is much easier than tackling it in large pieces. Beyond that, it is much easier to reverse engineer a small chunk of the project than to potentially start from scratch when a problem occurs.
Keeps the team’s eye on the goals

Imagine that your job is writing a tiny piece of code every day without knowing the ultimate goal. Odds are you’d feel frustrated with your job, and you probably wouldn’t do it as well. An epic serves as a constant reminder of the endgame. The goal is far more likely to be accomplished in the time you need it done, and with the quality you need, if everyone knows the endgame.

Epics help plan sprints and the project as a whole

As the product owner divides up the stories, they assign story points, which are estimates of how much time and effort is needed to complete the sprint backlog (to-do list). Product owners adjust story points as they learn more about the project and what the team is capable of.

. . .

How to create an epic

The process for creating an epic varies from organization to organization. In small to medium-sized organizations, the product owner might be an executive, such as the CIO or CTO, and user stories are assigned to teams. In larger organizations, epics might originate from a vice president or even a director.

1) Name the epic

The epic’s name should clearly and concisely convey the goal. It should unambiguously communicate the goal, but without a roadmap for accomplishing it. For example, an epic might be titled “Add instant messaging to the website.”

2) Write the epic’s narrative

An epic’s narrative digs a little further into the epic. The narrative should include the owner’s name, the objective, and the reason for the epic. For example, the epic narrative might read, “As the [executive or product owner], I want to add instant messaging to the website so customers can directly communicate with us without calling or emailing.” You might also include a short data-driven explanation of the reason, such as “A [percentage] increase in customer service response time leads to a [percentage] decrease in customer retention.”

3) Note the epic’s scenario

Describe how you envision people will use the new features.
4) Determine the epic’s scope

Before handing it off to a team, you should establish the scope or boundaries. For example, this might be where you denote the fields you want in the messaging app, along with where you want the data to go.

5) Describe what completing the epic looks like

Create a high-level list of the epic’s acceptance criteria. For example, in the messaging scenario, sales, marketing, or whichever department ultimately communicates with customers in the app will have to approve the messaging app, and development will have to show that the app is working.

6) Divide the epic into user stories

Once you’ve finished creating the epic, it’s time to break it down into user stories.

. . .

How to break an epic into user stories

Breaking down an epic into user stories varies from project to project and company to company. While on one level it makes sense to cut an epic up into equal-sized pieces with consecutive sprints, in most cases that’s inefficient and might not even work, and it doesn’t allow for adjusting sprints as necessary.

One sprint at a time

You don’t have to split the entire epic up all at once. In fact, that’s not recommended, as too many variables may come up. Instead, split off enough for your first sprint, then your second sprint, and so on. That gives you time to adjust as issues come up.

Breakdown by workflow

When you break an epic into user stories using a workflow breakdown, you break it up in the order of the workflow. For example, the messaging app could include: analyze needs > write software > manage data collection > test.

Breakdown by role

Breaking down by role is as it sounds: each person on the project is given their part of the project.

Breakdown by sprint

When you break an epic down by sprint, you’re promising to deliver small, completed pieces of the project rather than the finished version.
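The theme > epic > story hierarchy described above can be modeled as plain data. The sketch below uses the get-in-shape example from earlier; the story-point values are invented for illustration:

```python
# Illustrative only: the theme/epic/story hierarchy from the
# get-in-shape example, modeled as plain data.
resolution = {
    "theme": "Get in shape",
    "epics": [
        {
            "name": "Run a marathon by the end of the year",
            "stories": [
                {"title": "Join a gym", "points": 2},
                {"title": "Buy running shoes", "points": 1},
            ],
        }
    ],
}

def backlog_points(plan: dict) -> int:
    """Sum story points across every epic: a rough size of the backlog."""
    return sum(
        story["points"]
        for epic in plan["epics"]
        for story in epic["stories"]
    )

print(backlog_points(resolution))  # 3
```

Because epics aren't written in stone, representing them as data like this makes it cheap to re-split stories or re-estimate points as the project changes.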
FMDB blocks my UI

Using FMDB in iOS, I have a task that converts 100 JSON strings from the network into my model and then presents them in UITableViewCells. Because I have 3 buttons that change the data in the UITableView, I use FMDB to cache the data locally: when I change the data it first reads the local data, then requests the newest data, reloads the UITableView, and saves to the database. All 3 changes have the same logic. Because there are 100 records to store, I chose to dispatch the for loop asynchronously, but the problem is that when the models are inserted into the table it stalls my UI a little bit. My table has 10+ columns; is it possible that too many columns make the insert take too much time?

"Jason"? Did you only hear the requirements and not see them written down?

@Droppy json data, sorry to type it wrong. The insert procedure succeeds; it just blocks my UI!

"Asynchronous dispatch" to which queue?

@Justin Then you probably are not dispatching the procedure asynchronously.

@Droppy the global queue, with (0, 0) by default!

@simpleBob I use the dispatch_async method, so I'm sure it is!

Well, that sounds OK to me. We need code to continue the investigation if you want to avoid guesswork.

@Justin, but you don't do dispatch_async(dispatch_get_main_queue(), ...), do you?

@Droppy OK, I will post it tomorrow morning; I am on the way home, just off work! Thanks, I'll notify you tomorrow!

@simpleBob not on the main thread; I replaced the inner queue with the global queue!

@Justin make sure you don't set the priority to DISPATCH_QUEUE_PRIORITY_HIGH

@simpleBob I used my own dispatch_queue_t and then the problem no longer exists, and DISPATCH_QUEUE_PRIORITY_HIGH did not work for me. Now I want to know: since the sqlite operations already have lock && unlock in the FMDB source code, why do we need a queue to solve this multi-threading problem?

According to the FMDB documentation, it's better to use FMDatabaseQueue for your case.
To perform queries and updates on multiple threads, you’ll want to use FMDatabaseQueue. Using a single instance of FMDatabase from multiple threads at once is a bad idea. It has always been OK to make an FMDatabase object per thread. Just don’t share a single instance across threads, and definitely not across multiple threads at the same time. Instead, use FMDatabaseQueue. Here's the reference.

I have a question: when I use my own dispatch_queue_t the problem no longer exists. Maybe FMDatabaseQueue handles the multi-thread issue the same way. Now I want to know: the sqlite operations already have lock && unlock in the FMDB source code, so why should we need a queue to solve this multi-thread problem?

@SevenJustin The database has lock && unlock, not the elements. If you do some update operations and, at the same time, some delete operations, errors may occur if you don't put your operations into a queue.

So I got your meaning; then I will revise my insert and delete methods to wrap the [db open] ... insert or update ... [db close] block with a lock. Is that right? I use the table as a singleton, and the insert, update and delete methods all wrap their operations in [db open] and [db close].

@SevenJustin [db open] -> delete/update/add -> [db close], and if you do high-frequency operations, use db transactions and FMDatabaseQueue.

If I don't want to use FMDatabaseQueue and use a lock instead, where should I make the change?

@SevenJustin You should use the transactions feature to handle failed operations.

Can you give me a simple example?
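FMDatabaseQueue works by funnelling every database operation through a single serial queue, so statements from different threads never interleave on one connection. As an analogy (not FMDB code), the same pattern can be sketched in Python with sqlite3 and one worker thread; all names here are my own:

```python
import queue
import sqlite3
import threading

class SerialDB:
    """Run every database job on one worker thread, FMDatabaseQueue-style."""

    def __init__(self, path=":memory:"):
        self._jobs = queue.Queue()
        self._worker = threading.Thread(target=self._run, args=(path,), daemon=True)
        self._worker.start()

    def _run(self, path):
        # The connection is created and used on this thread only.
        conn = sqlite3.connect(path)
        while True:
            job, done = self._jobs.get()
            if job is None:          # shutdown sentinel
                conn.close()
                done.set()
                return
            job(conn)
            conn.commit()
            done.set()

    def in_database(self, job):
        """Queue `job(conn)` and block until the worker has run it."""
        done = threading.Event()
        self._jobs.put((job, done))
        done.wait()

    def close(self):
        self.in_database(None)

# Usage: inserts and reads from any thread are serialized automatically.
db = SerialDB()
db.in_database(lambda c: c.execute("CREATE TABLE model (x INTEGER)"))
db.in_database(lambda c: c.executemany(
    "INSERT INTO model VALUES (?)", [(i,) for i in range(100)]))

count = []
db.in_database(lambda c: count.append(
    c.execute("SELECT COUNT(*) FROM model").fetchone()[0]))
db.close()
print(count[0])  # 100
```

In the question's terms: wrapping [db open] ... insert ... [db close] in a lock gives mutual exclusion, but a dedicated serial queue additionally keeps every statement on one connection and one thread, which is the stronger guarantee FMDatabaseQueue provides.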
How can I mount a composite rail kit on a ramp shallower than the brackets accommodate?

I recently purchased stair railing kits to install on a handicap ramp at my mother-in-law's home. The ramp angle is approx. 11 degrees. The install instructions for the kit indicate they are usable for angles of 26-36 degrees. What options are there for 'correcting' the 26-36 degree brackets for use at 11 degrees? Or is there a better option not involving bracket modification, or not using brackets at all? I thought perhaps of trying to cut 11-degree channels for the rails in the posts, but cannot find a convenient/easy way to do so. Another idea I had would be to sand the mounting-plate side of the bracket to reduce the post-to-bracket attachment angle from 90 to 75 degrees. I'm just fishing for ideas/suggestions. This cannot be an unusual issue for users of these kits.

What are you mounting them to?

For that shallow a slope I'd be inclined to look at level railing kits and their associated hardware instead. That would be a much closer starting point. I'd then mill some 11° wedges from white vinyl to place behind the brackets, tilting them into position. If that's not a good solution, create some 20° wedges (31° - 11°). Pre-drill them so that longer mounting screws pass through smoothly at the bracket mounting hole locations. Solid vinyl board stock is readily available at home improvement stores. Try to find pieces that allow you to leave the factory-milled finish on exposed faces. Cut faces will show a rough, porous surface. Ideally the wedges will be roughly 1/2" wider and taller than the brackets (for a 1/4" reveal all around), and smaller than the posts (to avoid flush joints, which are tricky and can be unsightly).

The first suggestion won't work because the spindles would not be vertical. The other suggestions I like, but I don't think there is sufficient room for a 1/4" reveal all the way around. I will see if I can find the blocks you suggest. The mounting is to 4x4 PT posts. Thanks.
Maybe, but there may be enough play in the spindle cavities for a little tilt. Those assemblies aren't solid. And if there isn't room for 1/4", go with 1/8". I seriously doubt you wouldn't have at least that much space. I looked further into your suggestion. Unfortunately, the vinyl blocks are ridiculously expensive--$19 each. So, I will instead use PT 2x4s to make the wedges. On the plus side, the reveal will not be an issue. I don't think we're talking about the same thing. I'd have expected that you'd buy a single board for roughly that price and cut all your wedges from it. The only vinyl material available near me is 1.5x0.75x8ft for $8. That is cheaper than the block 12.5x8 3/8 x 1 7/8 ($18), but I don't want to piece several together to make one block. I wish I could find what you describe.
Restore old Belkin UPS?

I have an old Belkin F6C650-USB-MAC UPS which I used without problem to support some Macs and PCs for a number of years (probably bought circa 2002). I then moved to a house where it wouldn't work because the 3-pin sockets weren't grounded (ouch!), so it was out of use for about 18 months. When I tried to use it in this house, it would not respond at all. Is there anything sensible to do other than recycle the old one and buy a replacement? (It's a 120V, 12A unit.)

I'll try to keep my soapboxing to a minimum, but I have had universally bad experiences with everything Belkin. While it's not surprising at all that a UPS not used for 18 months would have dead batteries, and I agree with the recommendation that you are probably better off just replacing the device, I'd go with an APC.

Sounds like the battery is dead. I do not have a Belkin, but I have changed batteries on a couple of old APCs without a problem.

Replacing the battery - and finding the right battery to replace it - is probably harder work than it is worth. There are no signs of life whatsoever, even over a period of a day or so.

All the small UPSs (3 different brands) I have opened have used the same type of battery, and all but one were very easy to replace the battery on. The cost of the batteries was, IIRC, less than 25% of a new UPS. I did it for my personal use when my time didn't cost anything. But for work I usually buy a new UPS because I know it will work, and I also get some warranty.

Googling with the search term 'Belkin F6C650-USB-MAC UPS batteries' shows battery prices from about $18 to about $70. Looking at the Belkin site, it is moderately clear that they do not do much in the way of UPSs any more - they have just a couple of current products. The APC web site can be a bit confusing; the BE750LM appears to be available only through distributors (but it is a Latin America model); go for the BE750G and it is $99.
So, a new unit might be around 3 times the cost of a replacement battery ($33). If you're comfortable with taking out the battery, look into buying a small desulphator. I've read of people restoring "dead" batteries with these. It would be a decent investment, too, since you can use it to extend the life of any lead-acid battery you own (car, boat, motorcycle, etc.). These aren't snake oil, either; they're widely known in solar and other home-power circles. Of course, the UPS could be damaged, as could the battery itself, which the desulphator wouldn't help. You say the UPS doesn't respond at all; every UPS I've had at least functioned as a decent power filter even with a dead battery. Maybe yours is toast?

Some of the UPSs I have encountered didn't show any life when the batteries were completely dead, but worked when I replaced the batteries.

This one shows no signs of life at all, even after being connected for a day or so... it will probably be simpler just to recycle and replace.

@Jonathan: I have had the same problem. For me it worked when I replaced the batteries, but that was with a UPS from APC. For me it was much cheaper to replace the batteries than to buy a new one. It is probably easier to buy a new one, which you know will work, but it will cost you 4-8 times more.
People who reprimand new members who do not use the search function fail to realise that there are members who love to spoon-feed lazy babies. Newbies Corner is a good place to...

1) Answer the newbie's question
2) Ask the newbie to search (supplying keywords if necessary)
3) Ignore the newbie's question and wait for someone else to answer
4) Ask the newbie to search and be rude/sarcastic

I think the problem (and possibly what TS is referring to) is the 4th one, which I think is happening quite frequently. It has come to a point where it seems that they put down the newbies just to make themselves sound like an "old bird". There is no need to spoon-feed if one doesn't like it, but there is no reason for one to be rude/sarcastic.

Last edited by ahbian; 5th May 2010 at 10:23 PM.

It's just too frequent every now and then. This is something worth thinking about: make the forum a nicer place to surf, instead of newbies getting shot every time they ask something. What will they think of this forum? If you want them to search, fine, but what's with the rude/sarcastic remarks? Make this world a better place by starting with ourselves.

Once, the people in the Mac User Group Singapore shouted 'go google or make a search', so I quit and switched to discussions.apple.com; good move! Those guys there are knowledgeable, efficient, friendly and, lastly, won't shout at you to 'go google or make a search'. Right, I think there are still friendlier people out there in this forum. Hope to see more of this...

http://discussions.apple.com/thread....ageID=10664756 - This is how I set mine up at no cost other than some time - if you do the same, Google is your best friend for the next couple of hours - there is loads of help out there to do this, and help through this thread will be way off-topic.

http://discussions.apple.com/thread....ageID=11427316 - Google is your best friend :-)

http://discussions.apple.com/thread....ageID=11230826 - See this, Google is your friend.

Google is your friend.
Type in the subject, then "apple support" in the Google search field. Example: time machine apple support. Apple has a search on the Apple Support page too. Go to the support page; on the left under "browse support" select the product. After the product page loads, use the search box in the upper right menu to enter the subject and search. Google is your best friend, never forget that. I searched eSTATA 130804 and found that it appears to be a LaCie eSATA PCI Express card. You should be able to take it from there. My own point of view... *Feel like helping - answer the question even if it's been asked a million times... *Fed up with answering the same question again & again - don't even read the post. Scuba & Father... For Life. I think everywhere is the same lah... just take it with a pinch of salt. I normally will skip... and I only crack jokes in kopitiam. Last edited by allenleonhart; 5th May 2010 at 11:17 PM. The "jolly well ignore your question and let you rot" attitude is perhaps not one I agree with, because the simplest riposte to that would be "why don't you just do that?", and then there is nothing to say to that. The road to hell is paved with good intentions; I think people here can be too sensitive and sometimes think that someone is mocking them when they are not. So let's all have some form of open mind and try not to be too catty when people overreact to a simple joke... I do admit I enjoy a good laugh when someone asks if L lenses will have problems being mounted on a 400D. Of course, I'm only human. It doesn't take a senior member or someone so-called experienced in photography to see the humour and nonsensical nature of that, right?
Missing cmdlet "ConvertFrom-Yaml" Summary of the new feature / enhancement: PowerShell is painfully missing an option to convert YAML to a PowerShell object, like a ConvertFrom-Yaml cmdlet. Right now, I must use a workaround: https://www.powershellgallery.com/packages?q=powershell-yaml Almost 100 million downloads should speak for itself. Thanks for taking this into consideration. Proposed technical implementation details (optional): https://www.powershellgallery.com/packages?q=powershell-yaml "Almost 100 million downloads should speak for itself." Indeed, it shows that PSGallery is an excellent place to host cmdlets to extend PowerShell. This has been asked for previously in #3607, #16785, #16819, and possibly more, and has ultimately been declined each time because we have a number of packages on the Gallery that provide this functionality already, while .NET has no native YAML functionality. Some additional background conversations in dotnet repos: https://github.com/dotnet/runtime/issues/83313 & https://github.com/dotnet/runtime/issues/109199 We've also had discussions around providing a community-driven "DevOps" meta-module on the gallery that works similarly to the Az/Microsoft.Graph modules for a download-once requirement and would include a module like PowerShell-Yaml along with a number of others. I am tempted to mark this as resolution: answered & duplicate, but will hold off for now to allow for other comments; however, this would be for the cmdlets WG to discuss, so I'm marking it for that WG, and it may be re-discussed in the New Year. I suggest building it in again. This appears to be the original repository for powershell-yaml. As you can see by the number of forks (78) and the outstanding issues, there is no one right solution for how to do YAML with PowerShell: should it be hashtables or PSCustomObject, how should numbers be parsed, etc.
I would far rather those discussions play out in the specific module than have the PowerShell project decide on the one true parser or a myriad of options to cater to everyone's requirements. As an installable module it can already be used with any version of PowerShell later than 3.0. "I am tempted to mark this as resolution: answered & duplicate" You'd think with all the links and references where the community asked for first-party support, this would have enough momentum to become reality 😏 @kilasuit With this reasoning, all cmdlets that contain ConvertFrom/To would have to disappear from PowerShell. I plead for a built-in solution that is tested and secure. In my opinion, ConvertFrom/To cmdlets shouldn't actually exist, knowing that they aren't based on an approved verb. It would have been better if there was a more general cmdlet containing common serialization/deserialization logic along with error messages etc., which should be easier to maintain. The cmdlet would then be called "Convert-Object" with parameter sets containing mandatory parameters such as: -ToJson / -FromJson, -ToYaml / -FromYaml, -ToCsv / -FromCsv, -ToXml, -FromString ... I think it is even not too late for such a direction. A set of ConvertFrom/To proxy cmdlets might then be made available (through e.g. a "kitchen sink" edition) for backwards compatibility. "The cmdlet would then be called "Convert-Object" with parameter sets containing mandatory parameters..." I think that would be the worst of all possible solutions; it means new format types can't be added without a new release of PowerShell. I suggest that you can have a Convert-Object command, but it just has "-To" and "-From" parameters which accept the desired format. Then the command can internally invoke the appropriate ConvertTo-XXXX and ConvertFrom-XXXX without any knowledge of the formats, leaving the system extensible using the existing ConvertTo/ConvertFrom pattern.
e.g. Convert-Object -To CSV or Convert-Object -From JSON, which would delegate respectively to ConvertTo-Csv and ConvertFrom-Json. Both ConvertFrom and ConvertTo are PowerShell approved verbs. Oops, you're right in every respect. I simply forgot about that. I could have split it into two general ConvertTo-Object and ConvertFrom-Object cmdlets, but that would be even worse. I decided to delete the entry.
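The extensible -To/-From delegation pattern proposed in the last comments is language-agnostic; here is a minimal sketch of it in JavaScript, purely for illustration. The name `convertObject` and the converter table stand in for the hypothetical Convert-Object command and the existing ConvertTo-XXXX/ConvertFrom-XXXX cmdlets; none of this is real PowerShell internals.

```javascript
// Each format registers a to/from pair. New formats can be added here
// without touching the dispatcher below -- the extensibility argument
// made in the thread. The CSV converter is a toy single-record version.
const converters = {
  json: {
    to: (obj) => JSON.stringify(obj),
    from: (s) => JSON.parse(s),
  },
  csv: {
    to: (obj) => Object.keys(obj).join(",") + "\n" + Object.values(obj).join(","),
    from: (s) => {
      const [header, row] = s.trim().split("\n");
      const cells = row.split(",");
      return Object.fromEntries(header.split(",").map((k, i) => [k, cells[i]]));
    },
  },
};

// The single entry point: it knows nothing about any format, only how to
// look converters up by name -- the analogue of Convert-Object -To/-From
// delegating to ConvertTo-XXXX / ConvertFrom-XXXX.
function convertObject(value, { to, from } = {}) {
  if (to) return converters[to.toLowerCase()].to(value);
  if (from) return converters[from.toLowerCase()].from(value);
  throw new Error("specify either 'to' or 'from'");
}
```

Because the dispatcher only does a table lookup, shipping a new format is a data change, not a release of the dispatcher, which is the point the commenter was making.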
"I understand that we're smarter than me. That's one reason I like the idea of sharing." --Toba Beta, Master of Stupidity Summer of Love is a sub-program (or a project? Initiative?) to publish our internal systems as open source. Each system takes some work like stripping out stuff specific to just us, documenting a bit, adding license etc. It would be embarrassing to make noise about open source software, had we not published anything much of value. These systems are pretty damn neat, a lot of thought has been put to them. It would feel good to see others make use of them. Well, first we agreed that we can publish the systems (sure!) and ensuring the budget for the summer, circa 10000 euros (no problem). Then we set into finding a suitable person to do the actual work. We struck luck there, a good match was identified immediately and he agreed to take up this gig. Here’s the mail I sent to Futurice people (all of them) on Apr 15 2014. It sums the initiative up and might be interesting for other reasons as well. So the other day we were at Kaunis Kampela discussing open source. I had been reading on Google’s Summer of Code and how we missed this year’s application period… and how that doesn’t really matter, since we haven’t yet open sourced much, as a company, so our application would likely have been rejected. Brooding and shaking of heads ensued. Then Olli Jarva pointed out that we have a number of neat support systems developed by IT that could be published - perhaps we should assign someone to do just that? Perhaps a summer employee? Why yes… yes we should! If only we could find a damn nice sourcerer for the job… [a surprisingly short period of time passes] I’m Ville Tainio and I will be joining Futurice this summer. I’m currently studying Computer Science at Aalto University and mainly focusing on web software. I’m also currently the Master of Corporate Relationships at the Computer Science guild. 
My passion is to create cool things and help other people with technology. I want to be really proud of my work and I believe that our summer project is something that makes not just me, but everyone really proud of working for Futurice. I’ve been programming since I was 14 years old and I love learning new stuff about it every day. During my free time, besides programming, I play synth/french horn in a band called Blind Architect (https://www.youtube.com/watch?v=o1Gux8GWkHI check it out!). I also enjoy running and an occasional round of disc golf. I can’t wait to get started and to meet all of you! Ville will join us in June. He will provide more information about the systems and the progress of the work. We are talking about systems like FUM, IRMA, Password Safe etc… The work will include getting to know the systems, doing some pre-publishing tidying up, writing relevant documentation and pushing to public Github repositories, managing issues and hopefully even handling some pull requests from other contributors. And a whole lot of communication, at least in the shape of blog posts. So uhh, why? Because it’s the right thing to do. Potential short-term corporate-y gain is probably some good press, which might help out our recruitment. Possibly even some customer brand recognition. Yes, that’s pretty vague, but it’s enticingly warm and fluffy. Not entirely unlike fresh cotton candy. We could also put some effort in getting some of these systems adopted by our customers (or potential customers), since they are pretty damn sweet, and we’re in a good position to sell consultancy on the side. For our OSS program (see www.spiceprogram.org for a sneak peek) this is a good opportunity to show that we can walk the walk. PS. Kaunis Kampela is a local pub. The internal reception was delightfully positive. I received about a dozen mail replies (the introduction mail had ~200 recipients) that all applauded the initiative. More people gave thumbs up on Flowdock or live. 
Warm and fluffy indeed! So far there’s been some interest towards the released systems and solutions, but nothing that exciting yet. That’s okay and precisely what we expected. We plan to look into how to promote the systems (and the initiative) next. That’s a company advantage; an actual marketing organization and a lot of contacts. Ville continues the work and he will help with other Spice Program activities as well. We will talk internally to find out if our people would be interested in promoting some of the systems to our customers. Things of value have already been achieved, so it is all pretty laid back now.
Why do companies have a fiscal year different from the calendar year? There are many companies whose year end occurs on dates other than December 31. What are the advantages of this? I know some companies or entities have large incomes or expenses at certain times of the year, and like to close their books after these large events. For example, where I work, the primary seasonal income comes after summer, so our fiscal year ends in the last days of October. This gives the accountants enough time to collect all the funds, reconcile whatever they have to, pay off whatever they have to, and get working on a budget for the next year sooner than a calendar year would allow. There also might be tax reasons: getting all of your income at the beginning of your fiscal year, even if that is in the middle of the calendar year, would allow a company to plan large deductible investments with more certainty. I am not too sure of the tax reasons. I can think of a few good reasons: A company, especially a public one, usually wants its fourth-quarter earnings to be the strongest of the year. That ends each fiscal year on a high note for the company and its investors, which helps public sentiment and boosts stock prices. So travel agencies and airlines usually like ending their year in October or March, in the lull between the summer and winter travel seasons, with a large amount of that revenue falling within the company's fiscal Q4. Oil companies sometimes do the same because fuel prices are seasonal for much the same reasons. The downside of the above approach is that you make or break your entire year on your last quarter, which can cause problems for companies with scheduled dividends. Usually that's not a huge concern, as dividends are based on profits, so if there aren't any profits there aren't any dividends. However, in such cases a bad Q4 which also sends the entire company's year into the red is a double whammy for stock prices.
If this is a concern, a schedule with strong Q1 and Q3 earnings, with decent Q4 (second or third-best season) and a "dump" in Q2 (weakest season) is typically the best overall bet. December is a really bad month to try to close out an entire year's accounting books. Accountants and execs are on vacation for large parts of the month, most retail stores are flooded with revenue (and then contra-revenue as items are returned) that takes time to account at the store level and then filter up to the corporate office, etc etc. It also doesn't tell the whole story for most retail outfits; December sales are usually inflated by purchases that are then returned in January after all the hullaballoo. As a result, a fiscal year end in January or even February keeps the entire season's revenues and expenses in one fiscal year. The downside of the above is that a February fiscal year-end normalizes your December earnings. Investors expect bad Q1 reports for most retail; consumers are looking at credit card statements and tightening belts. Conversely, investors expect huge calendar Q4 numbers; to lump some of the bad news of returns and lower revenues in with your best sales quarter may cause you to continually fail to meet speculator's expectations which will cause your stock to be devalued relative to competitors. My grandfather owned a small business, and I asked him that very question. His answer was that year-end closeout is very time-consuming, both before and after EOY (end of year), and that they didn't want to do all that around Christmas and New Year. Every day is the end of a year The Earth is at the same point in its orbit from where it was a year ago every single instant of every single day. Choosing which one to observe as “the end of the year” (calendar or fiscal) is completely arbitrary. 
31 December as the end of the year was adopted by Julius Caesar when he introduced the Julian calendar; however, most provinces continued to observe the old New Year in March, and places outside the Empire naturally ignored it. Britain and its colonies continued to observe New Year in spring, on 24 March, until they adopted the Gregorian calendar in 1752. However, 5 April remains the end of the financial year (because 24 March 1752 Julian was 5 April 1752 Gregorian) for private businesses and individuals, while government accounting years end on 31 March. Businesses adopt the financial year that suits their operations. Many countries do not end the financial year on December 31: Canada is 31 March, Australia is 30 June, the United States is 30 September, and the United Kingdom has two, as discussed above. Businesses with cyclical revenues usually end at the end of one of those cycles. For example, most retail outlets have strong weekly cycles, so many end their financial year at the end of a week. This means that some years have 52 weeks and some have 53, but that still gives more accurate accounts than ending one year on a Wednesday and the next on a Friday. Seasonal businesses end at the end of the season. Many sporting clubs and businesses have a winter season (e.g. most football codes) or a summer season (e.g. cricket). It makes no sense to end the financial year in the middle of the season. Even year-round sports (e.g. tennis) have seasons and end their fiscal year at the end of one of those. Similarly, academic institutions end their financial year to coincide with the academic year. Foreign subsidiaries usually adopt their parent's fiscal year, because their accounts have to feed up into the parent's accounts. Maybe it's just because of the foundation date. If I start a company on August 1st, I would like its FY to start on that date too, in order to track my first whole year. It would be quite useless to close my year in December, after just five months.
I want to have data of my first year after a twelve months activity. In addition to the company-specific annual business cycle reasons and company-specific historical reasons mentioned in the other answers, there is another reason. Accounting firms tend to be very busy during January (and February and March) when most companies are closing and auditing their calendar-year books. If a company chooses its fiscal year to end at a different time of year, the accounting firms are more available, and the auditing costs might be lower.
Databases are used in all business applications and financial transactions; however, they are not just used for business applications. Grocery stores, banks, video rental stores and your favorite clothing stores all use databases to keep records of their customer, inventory, employee and accounting information. Databases allow us to store data quickly and to manipulate that data easily in many aspects of daily life. The same goes for this article: it was stored in a database, and its contents were retrieved and displayed in your browser. A database is a collection of information that is organized so that it can easily be accessed, managed, and updated. We often rely on database management systems (DBMS) inside software like HRMS, accounting and inventory tools, which are primarily used for storing, modifying, extracting, and searching the information held in a database. Several types of database have been around since the early 1960s; however, the most commonly used type was not created until the early 1970s. Relational databases are the most commonly used type of database. Created by E.F. Codd, relational databases have provided the digital organizational tool used by countless companies and individuals. Computer systems replaced older forms of paper communication and paper file storage. Computer databases are used to store and manage large amounts of information digitally and quickly. Companies began to use databases for inventory tracking, customer management, and accounting. The transfer of information from paper to computer databases was a big achievement in information management and storage. Databases are more efficient than paper storage in that they take up less space, can be accessed by multiple users at once, and can be transferred over long distances with no delay. The use of databases allowed for the rise of corporate infrastructure, credit card processing, email, and the Internet. Databases allow data to be shared across the world instead of being housed in one location on a physical piece of paper. There are different types of databases that can be used in real-world operations. Flat file databases are generally plain text files used by local applications to store data. Flat file database systems are not as popular as relational databases. A relational database contains related tables of information; each table has a number of columns (attributes) and a set of records (rows). Relational database systems are popular because of their scalability, performance, and ease of use. A flat file database is flat in the sense that each line can hold only one record. Flat files are commonly plain text; their very simple format makes them easy to access when required, which makes them useful for simple tasks. A hierarchical database represents a tree-type structure, much like Windows File Explorer. To explain the hierarchical database model, we can use a parent-and-child structure: a parent can have as many children as it wants, but each child has only one parent. The most famous hierarchical database is IMS (Information Management System), created by IBM. The most widely used database type is the relational database, often used behind websites. In a relational database system the data is stored in the form of tables, and more information can easily be added without reorganizing the tables. A relational database can contain an unlimited number of tables, each containing different, but related, information; a single database can have several tables to keep different sets of information. These tables need not share the same structure, as in a hierarchical database, since they are all equally important. Digital data in this form is much better than file-type and paper records, which were very difficult to keep. Multiple users in different locations can view the data at the same time. Because banks store their customer information and balances in a database, you can use any branch for deposits and withdrawals. Database management allows more options because the data is in a digital format. Companies use databases for inventory management and keeping records of item pricing.
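To make the "related tables" idea concrete, here is a minimal sketch in JavaScript of two tables linked by a shared key and joined into a report, the way a relational database links a customers table to an orders table. All table names, column names, and data here are invented for illustration.

```javascript
// Two "tables" modeled as arrays of records (rows), related by customerId.
const customers = [
  { id: 1, name: "Alice" },
  { id: 2, name: "Bob" },
];
const orders = [
  { orderId: 10, customerId: 1, total: 25 },
  { orderId: 11, customerId: 1, total: 40 },
  { orderId: 12, customerId: 2, total: 15 },
];

// A join: each order row is matched to its customer row via the shared key,
// producing combined records without duplicating customer data in orders.
const report = orders.map((o) => ({
  order: o.orderId,
  customer: customers.find((c) => c.id === o.customerId).name,
  total: o.total,
}));
```

In a real relational database the same relationship would be expressed with a foreign key and a SQL JOIN, and new orders could be added without reorganizing the customers table, which is exactly the flexibility described above.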
How to make a button's onclick function in one webpage execute a function in another webpage without PHP. In my main HTML page, I have the following code: <p class="text" id="time">00:00</p> In my other HTML page, I have the following code: <button onclick="resettime()">RESET TIME</button> They both link to the same JavaScript file, which has the code:

function timeIncrement() {
    document.getElementById('time').innerHTML = mins + ":" + secs;
    if (secs == 60) {
        mins += 1;
        secs = 0;
    }
}
var timecheck = setInterval(timeIncrement, 1000);
function resettime() {
    clearInterval(timecheck);
    mins = 0;
    secs = 0;
    timecheck = setInterval(timeIncrement, 1000);
}

Clicking the button doesn't reset the time to 0:0, so it doesn't work, and I was wondering if there's any way I can do this without going into PHP. I'm new to HTML/JS, so sorry if this is a repeat; I couldn't find a similar question. Are you asking how to make resettime() work, or how to call it from a second page? @Cruiser How to make it work, as the problem is with the onclick being on a button in a different page, right? Unless there's more code you haven't shown, timeIncrement() has no knowledge of the mins and secs variables. To call resettime() from another page you'd need to send a GET request to the second page and parse the URL, then call resettime() based on the values passed. Why would you want the reset button to be on a different page than the actual timer? Or am I just being stupid? No, I do want it exactly like that. I'm making this for a weird situation where the webpages are used offline and I need to use one webpage to alter content on the timer webpage. Two HTML files mean two windows. If you want to execute a script from one window in another window, you need a reference to the other window's object, and both windows should share the same document.domain.
To get a reference to the other window, add the following in the main HTML:

<script>window.open("other.html"); // other.html will open automatically</script>

In the other HTML, change the code to the following:

<button onclick="resettimeInMainHtml()">RESET TIME</button>
<script>
function resettimeInMainHtml() {
    window.opener.resettime(); // will call resettime() in the main HTML
}
</script>

I tried this, it didn't work, sorry :( I think this has to do with the second document not having access to the vars mins and secs, maybe? Declare mins and secs globally and try; it should work :O Can you tell me what error you got in the console?
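To see why the window.opener approach can work, here is the same call chain simulated in plain JavaScript without a browser. The objects `mainWindow` and `childWindow` are stand-ins for the two real window objects, so this is only a sketch of the relationship, not runnable browser code.

```javascript
// In a real browser, mainWindow would be the page that called window.open()
// and childWindow would be other.html, which reaches back via window.opener.
let mins = 5;
let secs = 30;
function resettime() {
    mins = 0;
    secs = 0;
}
const mainWindow = { resettime };           // stand-in for the opener page
const childWindow = { opener: mainWindow }; // stand-in for the opened page

// The button handler in other.html effectively runs this line:
childWindow.opener.resettime();
```

The key point is that both pages load the same script file, but each window gets its own copy of the variables; the child must call the function through the opener's window object so the reset happens against the main page's mins and secs.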
Terminology and Components (VSK Echo Show) Most parts of the installation, build, and deployment processes of the reference video skill (Alexa Video Multimodal Reference Software) are automated. You don't need to worry about setting up every resource and service listed below. For your convenience, however, here is an outline of all the reference video skill components to help you get started. At a high level, it is important to know which services and tools you use. A Command Line Interface (CLI) unifies all the services you need to set up in order to create, build, and deploy your video skill targeting Echo Show devices. It is highly recommended to use this tool for video skill development on Echo Show devices. AWS CloudFormation is at the core of the reference video skill, so that you manage related AWS resources as a single unit, called a stack. AWS CloudFormation handles the creation, updating, and deletion of AWS resources by creating, updating, and deleting stacks. The CLI tool creates these resources for you. AWS Lambda is a computation service that lets you run code without provisioning or managing servers. Your lambda function runs only when requested by an Echo Show device running your video skill, and by the web player that streams video content. This is one of the most important components of video skill development on Echo Show devices. The reference project creates a lambda function for you. AWS Cognito is a user identity and data synchronization service that helps you securely manage and synchronize app data for your users across their devices. The Amazon Cognito user pool is a user directory for adding sign-up and sign-in features to your video skill. Apps within a user pool have permission to call unauthenticated APIs (APIs that do not have an authenticated user), such as APIs to register, sign in, and handle forgotten passwords. To call these APIs, you need a client ID and an optional client secret.
You need these services to connect your skill to Echo Show devices and enable user authentication. The CLI tool helps you create these resources. If you have your own OAuth provider, you can go to the Cognito user pool in AWS and configure the deployed skill as part of the account linking configuration. Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources. An IAM user is an entity that you create in AWS to represent the person or application that interacts with AWS; a user consists of a name and credentials. To get started, you must manually create an IAM user. DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. The reference video skill uses a DynamoDB table to support pagination of video content. The CLI tool creates this database for you. Amazon Simple Storage Service (Amazon S3) provides uniquely named storage for all of the video content files of your project, also known as objects, offering scalability, data availability, security, and performance. The CLI tool creates an S3 bucket to store the video content and the built web player (as a website). Login with Amazon allows users to log in to registered third-party websites or apps ('clients') using their Amazon username and password. You must manually create a security profile and enable Login with Amazon. Video skill directives are sent from Alexa to your lambda function. Directives are JSON messages that contain instructions about performing a specific action, like getting metadata for a video. There are various directives with different names and payloads. Your lambda function must handle the incoming directives and return a response that conforms to the expected JSON.
Overall, the basic model is a request (in JSON) sent from Alexa (the request is called a directive) and a response (also in JSON) sent by your lambda. The reference lambda handles this entire workflow for you. Now that you're more familiar with the various components and services used in the reference skill implementation, continue on to Step 1: Configure Developer Account Settings. Last updated: Oct 29, 2020
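The directive-in, response-out model described above can be sketched as a tiny handler function. The directive name "GetPlayableItems" and the payload fields below are simplified placeholders based on the pattern described, not the exact VSK JSON schema; consult the reference lambda for the real message shapes.

```javascript
// Minimal sketch of a lambda-style directive handler: a JSON directive
// comes in from Alexa, a JSON response goes back out.
function handleDirective(directive) {
  const name = directive.header && directive.header.name;
  switch (name) {
    case "GetPlayableItems":
      // A real lambda would query the video catalog (e.g. the DynamoDB
      // table mentioned above); here we return a stub payload.
      return {
        header: { name: "GetPlayableItemsResponse" },
        payload: { playableItems: [{ id: "video-1" }] },
      };
    default:
      // Unknown directives get an error response in the same envelope.
      return {
        header: { name: "ErrorResponse" },
        payload: { message: "Unhandled directive: " + name },
      };
  }
}
```

The switch-on-directive-name shape is the essential idea: each directive name maps to one handler branch, and every branch returns a response object that conforms to the JSON contract Alexa expects.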
Regression in 1.1.32 - unable to build on mac m1 Starting with 1.1.32 I am unable to build ring on my machine. Downgrading to 1.1.31 fixes the issue. Rust version both 1.82 stable and nightlies reproduce the error for me. Also other crates fail to build for me, eg blake3 fails with missing assert.h System: mac m1 xcode 16.1 macos 15.1 Some more error details can be found in https://github.com/briansmith/ring/issues/1942 Error output: --- stdout cargo:rerun-if-env-changed=RING_PREGENERATE_ASM cargo:rustc-env=RING_CORE_PREFIX=ring_core_0_17_8_ OPT_LEVEL = Some(0) OUT_DIR = Some(/Users/dignifiedquire/rust_target/debug/build/ring-2479b8f424b81183/out) TARGET = Some(aarch64-apple-darwin) HOST = Some(aarch64-apple-darwin) cargo:rerun-if-env-changed=CC_aarch64-apple-darwin CC_aarch64-apple-darwin = None cargo:rerun-if-env-changed=CC_aarch64_apple_darwin CC_aarch64_apple_darwin = None cargo:rerun-if-env-changed=HOST_CC HOST_CC = None cargo:rerun-if-env-changed=CC CC = Some(clang) cargo:rerun-if-env-changed=CC_KNOWN_WRAPPER_CUSTOM CC_KNOWN_WRAPPER_CUSTOM = None RUSTC_WRAPPER = None cargo:rerun-if-env-changed=CC_ENABLE_DEBUG_OUTPUT cargo:rerun-if-env-changed=CRATE_CC_NO_DEFAULTS CRATE_CC_NO_DEFAULTS = None DEBUG = Some(true) cargo:rerun-if-env-changed=MACOSX_DEPLOYMENT_TARGET MACOSX_DEPLOYMENT_TARGET = None cargo:rerun-if-env-changed=CFLAGS_aarch64-apple-darwin CFLAGS_aarch64-apple-darwin = None cargo:rerun-if-env-changed=CFLAGS_aarch64_apple_darwin CFLAGS_aarch64_apple_darwin = None cargo:rerun-if-env-changed=HOST_CFLAGS HOST_CFLAGS = None cargo:rerun-if-env-changed=CFLAGS CFLAGS = None cargo:warning=In file included from crypto/curve25519/curve25519.c:22: cargo:warning=In file included from include/ring-core/mem.h:60: cargo:warning=include/ring-core/base.h:71:10: fatal error: 'TargetConditionals.h' file not found cargo:warning= 71 | #include <TargetConditionals.h> cargo:warning= | ^~~~~~~~~~~~~~~~~~~~~~ cargo:warning=1 error generated. 
--- stderr error occurred: Command env -u IPHONEOS_DEPLOYMENT_TARGET "clang" "-O0" "-ffunction-sections" "-fdata-sections" "-fPIC" "-gdwarf-2" "-fno-omit-frame-pointer" "--target=arm64-apple-macosx15.1" "-mmacosx-version-min=15.1" "-I" "include" "-I" "/Users/dignifiedquire/rust_target/debug/build/ring-2479b8f424b81183/out" "-Wall" "-Wextra" "-fvisibility=hidden" "-std=c1x" "-Wall" "-Wbad-function-cast" "-Wcast-align" "-Wcast-qual" "-Wconversion" "-Wmissing-field-initializers" "-Wmissing-include-dirs" "-Wnested-externs" "-Wredundant-decls" "-Wshadow" "-Wsign-compare" "-Wsign-conversion" "-Wstrict-prototypes" "-Wundef" "-Wuninitialized" "-gfull" "-DNDEBUG" "-o" "/Users/dignifiedquire/rust_target/debug/build/ring-2479b8f424b81183/out/fad98b632b8ce3cc-curve25519.o" "-c" "crypto/curve25519/curve25519.c" with args clang did not execute successfully (status code exit status: 1). cc @madsmtm I guess this is probably due to passing apple minimum os in target? I guess this is probably due to passing apple minimum os in target? I can't see how that would make TargetConditionals.h non-existent? Rather, I think the issue is that the clang binaries that people are using is not the trampoline in /usr/bin/clang (see https://github.com/rust-lang/rust/pull/131477 for some details on that), which means that SDKROOT isn't set automatically? I.e. I think that the workaround done below is wrong, and we should just always pass SDKROOT: https://github.com/rust-lang/cc-rs/blob/2050013e69bc07e275a65583dd26e8905dfdff73/src/lib.rs#L2544 That said, still don't understand why people's builds weren't broken before this? Will try to take a closer look soon. 
If it helps on my machine I have clang: /opt/homebrew/opt/llvm/bin/clang export CPATH="/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include,/Library/Developer/CommandLineTools/usr/include/c++/v1,$CPATH" to make things work So, the change to the commandline options that we pass between 1.1.31 and 1.1.32 is that instead of passing --target=arm64-apple-darwin, we now pass --target=arm64-apple-macosx15.0. I wonder if homebrew's Clang has special handling for the -darwin target? Ah, indeed it does! Homebrew installs a config file for *-apple-darwin, but not for *-apple-macosx: https://github.com/Homebrew/homebrew-core/blob/f8b6e58cbe7b244511167ecc13f8c2992d8b50a7/Formula/l/llvm.rb#L499-L513 Hmm, though that's only since https://github.com/Homebrew/homebrew-core/pull/196094, which was merged a week ago. Which version of Homebrew do you have? I've tried to fix this in https://github.com/madsmtm/homebrew-core/tree/clang-macosx, but am unsure at this point that that's the correct way forwards. In any case, I feel like this is really a Homebrew issue (and the method that they're using feels brittle for many reasons). Opened https://github.com/Homebrew/homebrew-core/issues/197278 I tried to skim #1252, but I couldn't find the answer. Is there any reason why you don't want to use *-apple-darwin in the target triple? It's what Apple Clang uses: ❯ xcrun clang --print-target-triple arm64-apple-darwin24.1.0 ❯ xcrun -sdk macosx15.1 clang --print-target-triple arm64-apple-darwin24.1.0 I tried to skim #1252, but I couldn't find the answer. Is there any reason why you don't want to use *-apple-darwin in the target triple? Because semantically it makes it very clear to Clang that the target is macOS, and not one of the other Darwin platforms. If you use echo > foo.c && clang -v foo.c, you'll see that the actual triple that Clang resolved to passing to the compiler backend was arm64-apple-macosx14.0.0 (or similar). 
The `*-apple-darwin` triple seems mostly like a backwards-compatibility hack to support overriding the triple by providing a different SDK root (the triple changes to `arm64-apple-ios18.0.0` if you run `xcrun -sdk iphoneos clang -v foo.c`). And practically, it's easier to exactly specify the macOS deployment target that way (otherwise we need to maintain a mapping from macOS to Darwin versions). (Though I guess we could set `MACOSX_DEPLOYMENT_TARGET` / `-mmacosx-version-min=`, and let Clang sort it out.)

> otherwise we need to maintain a mapping from macOS to Darwin versions

Having such a mapping isn't that bad: https://github.com/Homebrew/brew/blob/40f4ab25468bce1640b5ec7ff34fed271cad6e4f/Library/Homebrew/macos_version.rb#L34-L43

Any updates on this?

I think @madsmtm is working with upstream (homebrew/llvm) to fix this.
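For recent releases, the mapping mentioned above is just a fixed offset: since macOS 11, the Darwin major version is the macOS major version plus 9 (macOS 15 is Darwin 24). A minimal sketch, assuming macOS 11 or later (this is a hypothetical helper, not cc-rs code):

```shell
#!/bin/sh
# Hypothetical helper (not cc-rs code): map a macOS major version to the
# corresponding Darwin major version. Valid for macOS 11+ only, where the
# Darwin major is the macOS major plus 9.
macos_to_darwin() {
    macos_major="${1%%.*}"       # "15.1" -> "15"
    echo $(( macos_major + 9 ))  # 15 -> 24 (macOS 15 == Darwin 24)
}

macos_to_darwin 15.1   # prints 24
```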
GITHUB_ARCHIVE
Why does CoffeeScript require parentheses in the following cases? I'm a CoffeeScript beginner. This is output from http://js2coffee.org/:

.js:

    var prevPost = Posts.findOne({position: this.position - 1});

.coffee:

    prevPost = Posts.findOne(position: @position - 1)

.js:

    Posts = new Meteor.Collection('posts');

.coffee:

    @Posts = new Meteor.Collection("posts")

And why no parentheses here?

.js:

    Posts.update(nextPost._id, {$set: {position: nextPost.position - 1}});

.coffee:

    Posts.update nextPost._id, $set: position: nextPost.position - 1

My wild guess (also a newbie to CoffeeScript) is that it has to do with the number and/or type of arguments. Notice how in your first two cases you only have one argument, and in the last you have two, one of which is an object? What you are doing with the return value of the method seems to matter too: `var foo = foo.bar(...` vs `foo.bar(...`.

Is your question about js2coffee or about CoffeeScript? — @Michael_Scharf

About CoffeeScript.

Language

The CoffeeScript documentation gives good insight into this issue:

> You don't need to use parentheses to invoke a function if you're passing arguments. The implicit call wraps forward to the end of the line or block expression.

    console.log sys.inspect object   →   console.log(sys.inspect(object));

Style

From polarmobile/coffeescript-style-guide you can find more guidance about why and when to use parentheses:

> When calling functions, choose to omit or include parentheses in such a way that optimizes for readability. Keeping in mind that "readability" can be subjective, the following examples demonstrate cases where parentheses have been omitted or included in a manner that the community deems to be optimal:

    baz 12
    brush.ellipse x: 10, y: 20 # Braces can also be omitted or included for readability
    foo(4).bar(8)
    obj.value(10, 20) / obj.value(20, 10)
    print inspect value
    new Tag(new Value(a, b), new Arg(c))

Questions

Q: Why does CoffeeScript require parentheses in the following cases?
A: It doesn't.

Q: And why no parentheses here?

A: Like parentheses, line breaks are another way to improve readability. In the example you provided, since there is some complexity in the arguments, js2coffee is wise enough to suggest line breaks instead. You can test this and see that the output will be the same.

In CoffeeScript you can omit the parentheses in all cases:

    prevPost = Posts.findOne position: @position - 1
    @Posts = new Meteor.Collection "posts"

I think it is a matter of style whether you drop the parentheses. If you are in the "mood" of omitting parentheses, be aware that you cannot omit them when there is no argument.

This will assign `foo` to `bar`:

    bar = foo

This will assign the returned value of the function `foo` to `bar`:

    bar = foo()
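The same distinction is visible in the compiled JavaScript. This is a hand-written equivalent for illustration, not actual js2coffee output:

```javascript
// Hand-written JavaScript equivalent (illustration, not compiler output) of
// the CoffeeScript `bar = foo` vs `bar = foo()` distinction above.
function foo() { return 42; }

const bar1 = foo;    // CoffeeScript `bar = foo`   -> the function itself
const bar2 = foo();  // CoffeeScript `bar = foo()` -> the return value

console.log(typeof bar1); // "function"
console.log(bar2);        // 42
```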
STACK_EXCHANGE
import {Document} from 'idai-components-2';
import {getAtIndex} from 'tsfun';


/**
 * @author Thomas Kleinke
 */
export class ModelUtil {

    public static getDocumentLabel(document: Document): string {

        return (document.resource.shortDescription)
            ? document.resource.shortDescription + ' (' + document.resource.identifier + ')'
            : document.resource.identifier;
    }


    public static getRelationTargetId(document: Document,
                                      relationName: string,
                                      index: number): string|undefined {

        const targetIds: string[]|undefined = document.resource.relations[relationName];
        if (!targetIds) return undefined;

        return getAtIndex(targetIds)(index);
    }
}


export const hasEqualId = (l: Document|undefined) => (r: Document): boolean =>
    (l != undefined && l.resource.id === r.resource.id);

export const hasId = (doc: Document) => doc.resource.id != undefined;
STACK_EDU
Hi,

On Sunday, February 05, 2012 22:29:18 Heiko Schulz wrote:
> Because no one, not Tim Moore, not me, not anyone, did say this!
> The wiki article is btw by Tim Moore.
>
> But they said that the bottleneck of GPUs is not the vertex count, but
> other things. That never meant that you should use as many vertices as you
> want.
>
> "... As many vertices as needed - not more or less!"

ACK. Currently, OpenGL-wise we are basically geometry setup bound - at least for the models. This really means that vertices are not an issue. It also means that setting up one draw with 3 triangles is about as heavy as setting up, say, 500 triangles, but the conclusion of this is *not* that you should schedule as many triangles as possible. The right conclusion is to collapse as many triangles as *sensible* for culling into a single draw.

A simple help is to switch on the onscreen stats in the viewer or FlightGear and see if the yellow bar is longer than the orange one. A yellow bar longer than the orange one means that the CPU is not able to saturate the GPU. Again, beware: this does in no way mean that you should write complicated shaders to saturate the GPU! It rather means that the geometry setup/state change time - the yellow one - must decrease! How to achieve that is a very long chain of right decisions, starting with the modeler:

Avoid leaf geometry with very small parts. Collapse as many triangles as sensible for culling into the same leaf geometry. Sure, if geometry needs a different texture or anything else that requires a different OpenGL state, you cannot collapse it into the same leaf. Maybe it then makes sense to collapse the textures into a common one; then you might be able to collapse more geometry.

Avoid animations that are not really required. If you need a transform node in between some geometry, the geometry below the transform needs to be drawn separately, which takes time on the CPU.
Everything that requires geometry to be drawn separately should be avoided, up to the level where it stops making sense because of culling.

Maybe introduce some levels of detail. Don't just drop the whole model at some distance, but also provide a static model without animations and with simple geometry for the mid range. Maybe provide something more culling-friendly and detailed, with the animations you need, for the short range. Keep in mind that culling a model you are close to should make you split the model into more parts that could possibly be culled away. But for culling a far-away model, it is probably very sufficient to cull it either as a whole or not at all.

Avoid state changes. Use as much common state as possible. OSG does a good job of sorting together draws that use the same state, but if the overall scene contains plenty of different state, OSG cannot change that. A new texture is a new state, for example. A new shader is a new state. ...

Once your orange bar is longer than the yellow one, you can start to care about the shaders and their execution. When thinking about shaders, keep in mind that different GPUs perform very differently on the same shader.

Apart from OpenGL, we spend a lot of time in scenegraph traversal. This is mostly due to plenty of structural and often needless nodes in the scenegraph. The reason for this is that historically the XML files did some implicit grouping for *every* animation in the XML file. To make that work reliably, I had to introduce a huge number of group nodes in the XML file loader. These really hurt today, as they introduce a whole lot of group nodes with just a single child, which need to be traversed for update and for cull. Profiling shows that Group::traverse is the most used function in FlightGear. The lowest-hanging fruit could be to optimize away the redundant nodes from the top-level model that gets loaded by a database pager request.
We cannot do that recursively once a model part is loaded, since the mentioned grouping nodes might be referenced at any level in the model hierarchy above the currently loaded model. So only the top-level model could do this without breaking tons of models.

And regarding all that: even if your box is already GPU bound, this does not mean that other driver/hardware combinations are GPU bound too. Always: as much as needed and as few as possible.

OK, there are plenty of other aspects of performance tuning a scene graph, but these are the ones I think are most important for FlightGear as of today.

Mathias

_______________________________________________
Flightgear-devel mailing list
firstname.lastname@example.org
https://lists.sourceforge.net/lists/listinfo/flightgear-devel
OPCFW_CODE
# frozen_string_literal: true

# Released under the MIT License.
# Copyright, 2017-2022, by Samuel Williams.
# Copyright, 2017, by Kent Gruber.

module Async
	# Represents an asynchronous IO within a reactor.
	# @deprecated With no replacement. Prefer native interfaces.
	class Wrapper
		# An exception that occurs when the asynchronous operation was cancelled.
		class Cancelled < StandardError
		end
		
		# @parameter io the native object to wrap.
		# @parameter reactor [Reactor] the reactor that is managing this wrapper; if not specified, it's looked up by way of {Task.current}.
		def initialize(io, reactor = nil)
			@io = io
			@reactor = reactor
			@timeout = nil
		end
		
		attr_accessor :reactor
		
		def dup
			self.class.new(@io.dup)
		end
		
		# The underlying native `io`.
		attr :io
		
		# Wait for the io to become readable.
		def wait_readable(timeout = @timeout)
			@io.to_io.wait_readable(timeout) or raise TimeoutError
		end
		
		# Wait for the io to receive priority data.
		def wait_priority(timeout = @timeout)
			@io.to_io.wait_priority(timeout) or raise TimeoutError
		end
		
		# Wait for the io to become writable.
		def wait_writable(timeout = @timeout)
			@io.to_io.wait_writable(timeout) or raise TimeoutError
		end
		
		# Wait for the io to become either readable or writable.
		# @parameter timeout [Float] timeout after the given duration if not `nil`.
		def wait_any(timeout = @timeout)
			@io.to_io.wait(::IO::READABLE|::IO::WRITABLE|::IO::PRIORITY, timeout) or raise TimeoutError
		end
		
		# Close the io and monitor.
		def close
			@io.close
		end
		
		def closed?
			@io.closed?
		end
	end
end
STACK_EDU
PSYB01H3 Study Guide - Final Guide: Partial Correlation, Null Character, Social Class

Dec. 6th, 2010 Lecture

Scales of measurement: A review

1. Nominal – no numerical, quantitative properties; levels represent different categories or groups
2. Ordinal – minimal quantitative distinctions; order the levels from lowest to highest
3. Interval – quantitative properties; intervals between levels are equal in size; can be summarized using means; no absolute zero
4. Ratio – detailed quantitative properties; can be summarized using means; has an absolute zero

Analyzing the Results of Research Investigations — What to do with data?

Three basic ways of describing the results:

1. Comparing Group Percentages
- Used for nominal scale variables
- Example: Ask boys and girls whether they like school. Like/dislike is nominal (a categorical variable). Ask 100 boys and 100 girls; find that 60 boys and 75 girls like school. What would you report?
- Perform a statistical analysis to determine whether the difference between groups is significant.
*Here, we are thinking about ways to become familiar with data, and to describe it. Comparing the number of people in each nominal group (in the form of a percent score) is one way of presenting the information, and of relating our findings to our research question.

2. Correlating Individual Scores
- Obtain pairs of observations from each subject (each individual has two scores, one from each of the variables)
- Ask whether the variables go together in a systematic fashion by calculating the Pearson r correlation (determines the strength and direction of the relationship)
Another way of describing data is to state how the scores are related to each other (i.e. whether the variables, as operationalized by the scores obtained, vary systematically together).
***What types of data may be used? Only ratio and interval data may be used.
Notice the variability in the data. The linear relationships in these examples are quite realistic.
Individuals vary in their responses, yet we can still see a pattern of how the variables are related to each other.

3. Comparing Group Means (for experimental designs)
- Compare the mean (average) response of the experimental group with the mean response of the control group
- In experiments, remember that we are sampling from a population and then randomly assigning to our control and experimental groups (2 groups in the simplest design; there may be more). When we collect data (in the form of scores) within each of the groups, the data is pooled within each group and then compared across groups to see if there are differences in the overall scores. This is done by calculating the mean score within each group.
*Think about how variability within each group might affect these pooled scores.

♦ Want to study the effect that stroking a dog has on resting heart rate
♦ How might we design this experiment?
♦ 20 subjects; we have 1 very cuddly dog named Kaija. Assign subjects to an experimental and a control group. Have them do everything the same, except patting the dog for 20 minutes in the experimental group. (Your independent variable is the contact condition: no contact or dog patting.)

Sample Data (Resting Heart Rate): Dog Patting Group vs. No-Dog Group

Here are our individual scores. Notice how we calculate the average/mean score for each group (we will look at the formula for calculating the mean score later in the lecture). Basically, we add the scores together (within each condition or group) and then divide by the number of participants within that group.

Graphing frequency distributions

1. Pie Chart: useful for nominal data
2. Bar Graph: might be visually helpful for nominal data as well
3. Frequency Polygons: used for interval/ratio scales. Each point on the graph represents the number of participants who ended up with a particular score on a measure. This example shows the number of people who scored each score, as per the group they belonged to.
Visually, we get a sense of how the groups, and the individual scores within each group, differed from each other.

4. Histogram: depicts scores for comparison
5. Stem and leaf plots: useful because they give us a sense of the shape of our data, as well as the actual scores. Plots are constructed based on the range of scores seen. E.g. Range = 48–95: a stem of 40 with leaves for the different scores in the 40s.

It's always good to graph individual scores for each group to get a visual sense of what the data looks like.

1. Mean: found by adding all the scores and dividing by the number of scores (X̄ = ΣX/N) (where ΣX = sum of the values in a set of scores; N = number of scores)
*X̄ (x-bar) stands for the mean. Indicates central tendency with interval or ratio scales.
**Graphing our means in comparison can be useful, but we have to note the scale used on the y-axis, because it may be deceiving to view.
***Outliers have been tossed out. Therefore mostly used for data analysis.

2. Median (Mdn)
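The group-mean calculation described above (X̄ = ΣX/N within each condition) can be checked with a few lines of Python. The heart-rate numbers here are made up for illustration, since the original sample table is not reproduced in the notes:

```python
# Hypothetical resting heart-rate scores (illustrative only; the lecture's
# actual sample table is not reproduced here).
dog_patting = [62, 58, 65, 60, 55, 63, 59, 61, 57, 60]
no_dog      = [72, 70, 75, 68, 74, 71, 69, 73, 70, 68]

def mean(scores):
    # X-bar = (sum of X) / N
    return sum(scores) / len(scores)

print(mean(dog_patting))  # 60.0
print(mean(no_dog))       # 71.0
```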
OPCFW_CODE
New format for caching latent and Text Encoder outputs

## Overview

- Policy: file management per image, supporting multiple architectures and data types
- Format: the .safetensors format

## Latent Cache

### Filename Format

`{original_image_base_name}_{width}x{height}_{architecture}.safetensors`

Example: `image1_1024x0768_sd.safetensors`

### Internal File Structure

#### Tensor Data

The following tensors are stored as needed:

- `latents_{latents_height}x{latents_width}_{data_type}`
- `latents_flipped_{latents_height}x{latents_width}_{data_type}`
- `alpha_mask_{latents_height}x{latents_width}`

The data type can be one of the following: fp32, bf16, fp16, float8_e4m3fn, float8_e5m2. The priority of types is fp32 > bf16 = fp16 > float8_e4m3fn = float8_e5m2; if a higher-priority type has already been stored, a new one will not be stored. fp16 and bf16, and multiple fp8 types, may coexist. Also, when saving a higher-priority type, the lower-priority type is deleted.

`{latents_height}x{latents_width}` is used for multi-resolution training.

#### Metadata (JSON format)

```json
{
  "architecture": "sd", // or "sdxl" etc.
  "width": "1024",
  "height": "768",
  "crop_ltrb_{latents_height}x{latents_width}": "0,10,0,10", // comma separated values
  "format_version": "1.0.0"
}
```

### Key Features

- File management
  - Individual file management per image
  - Multiple data types consolidated into a single file
  - Quick image size retrieval through filename parsing (the `{width}x{height}` part) without opening the file
- Type support
  - Multiple data types (fp32, fp16, bf16, fp8)
  - Multiple fp8 implementations (E4M3, E5M2)
- Functionality
  - Includes flipped data for flip augmentation
  - Alpha mask retention

## Text Encoder Cache

### Filename Format

`{original_image_base_name}_{architecture}_{max_token_length}_te.safetensors`

Examples: `image1_sd3_512_te.safetensors`, `image2_sdxl_77_te.safetensors`

### Internal File Structure

#### Tensor Data

Depending on the architecture, the following is an example:

- `lg_out_1_fp16`
- `t5_out_1_fp16`
- `t5_out_1_masked_fp16`
- `t5_attn_mask_1`
- `lg_out_2_fp16`
- `t5_out_2_fp16`
- ...
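The "read the image size from the filename without opening the file" idea above can be sketched as follows (illustrative only, not sd-scripts code):

```python
import re

# Illustrative parser (not sd-scripts code) for the proposed latent cache
# filename format: {base}_{width}x{height}_{architecture}.safetensors
_NAME_RE = re.compile(
    r"^(?P<base>.+)_(?P<width>\d+)x(?P<height>\d+)_(?P<arch>[^_.]+)\.safetensors$"
)

def parse_latent_cache_name(filename):
    m = _NAME_RE.match(filename)
    if m is None:
        raise ValueError(f"not a latent cache filename: {filename}")
    return m.group("base"), int(m.group("width")), int(m.group("height")), m.group("arch")

print(parse_latent_cache_name("image1_1024x0768_sd.safetensors"))
# ('image1', 1024, 768, 'sd')
```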
The key has a suffix at the end that indicates the data type. The types and their priority are the same as for latents. The attention mask is optional. The attention mask is a long tensor, so it doesn't have a data type suffix. When the attention mask is enabled, the key of the output will have the suffix `_masked` before the data type suffix. There is always a suffix indicating the index of the caption before the `_masked` or data type suffix. The index is 1-based. The tensor data doesn't have a batch dimension.

#### Metadata (JSON format)

```json
{
  "architecture": "sd", // or "sdxl" etc.
  "caption1": "a caption for the image",
  "caption2": "another caption for the image", // "caption2", "caption3" ... optional
  "format_version": "1.0.0"
}
```

### Key Features

- File management
  - Individual file management per image
  - Multiple data types consolidated into a single file
- Type support
  - Multiple data types (fp32, fp16, bf16, fp8)
  - Multiple fp8 implementations (E4M3, E5M2)
- Functionality
  - Includes an optional attention mask
  - Supports multiple captions

.safetensors files may be zipped in the future, in which case the extension will be .zip.

Please feel free to share your thoughts and suggestions on this specification.

EDIT: Fixed the fp8 dtypes to float8_e4m3fn and float8_e5m2, and the captions to a list. Use `x` instead of `_` between the height and the width to match the existing names. Added `crop_ltrb` to the metadata. Fixed the metadata because safetensors can have `Dict[str, str]` only. Changed the format version to major.minor.patch. Added the max token length to the file name for the text encoder cache. The tensors in the text encoder cache don't have a batch dimension, because it would be cumbersome to read the entire tensor for reading one of the captions, or to update part of the tensor.

Recommendations:

- Use ml_dtypes to support different data types.
- (Optional) Use lance as the data container storage. Compared to safetensors, lance has 3 advantages:
  1. automatic version control
  2. randomized loads (which can be used for shuffling)
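The key-naming rules above (1-based caption index, then an optional `_masked`, then the data type) could be sketched as an illustrative helper (not sd-scripts code):

```python
# Illustrative helper (not sd-scripts code): build a Text Encoder cache key
# following the rules above: {name}_{index}[_masked]_{dtype}, index 1-based.
def te_cache_key(name, index, dtype, masked=False):
    assert index >= 1, "caption index is 1-based"
    suffix = "_masked" if masked else ""
    return f"{name}_{index}{suffix}_{dtype}"

print(te_cache_key("t5_out", 1, "fp16", masked=True))  # t5_out_1_masked_fp16
print(te_cache_key("lg_out", 2, "fp16"))               # lg_out_2_fp16
```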
  3. support for HuggingFace datasets and for reading image URIs directly from cloud storage. More advantages on large datasets.

For the text encoder cache (very important with T5XXL!) it is important that it works well with multiple captions (already contained in the proposal) and wildcards. It would also be good if it supported shuffling. The wildcard support is easy when all possible combinations are created in advance and then just treated like multiple captions. But this would break the current behavior (as I understood it from reading the code), as the probabilities change. Currently each multi-caption line is chosen with the same probability, and then within that caption the wildcards are randomly selected:

```
photo of foo
image of {bar|baz}
```

would currently lead to

- 50% photo of foo
- 25% image of bar
- 25% image of baz

and a naive caching instead to

- 33% photo of foo
- 33% image of bar
- 33% image of baz

So if this should be a non-breaking change, you will need to add the probabilities to the file somehow. Or you make it a breaking change and accept the change of the probabilities. (I'm actually fine with both, but it should be documented.)

About how to support shuffling: I have no idea how to cache that, except to also precompute (sample) the shuffles and put them all in the cache. But then the cache might become very big.

Due to the big text encoders (T5XXL) I think caching them is even more important than it was in the past, so that the VRAM can be freed. Multi-captions and wildcards are different things working in the same direction, and I think they support each other very well. So having the combination of both is a great thing to have. I know that currently they are not cached, but it would be really great if in future both could be cached (to save VRAM).

Note: It should be possible to preprocess the caption files to convert wildcards to multi-captions (ignoring the impact on probability). But it would be better if the sd-scripts did it themselves.

Sure.
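The probability shift in that 50/25/25 vs. 33/33/33 example can be checked with a small sketch. This is illustrative only; `expand` is a hypothetical helper, not the sd-scripts implementation:

```python
import re
from fractions import Fraction
from itertools import product

# Illustrative sketch (not sd-scripts code): compare caption probabilities
# when wildcards are expanded per line (current behavior) vs. flattened in
# advance (naive caching).
def expand(line):
    """Expand {a|b|...} wildcards into all concrete captions."""
    parts = re.split(r"\{([^}]*)\}", line)
    # odd indices are wildcard bodies, even indices are literal text
    options = [p.split("|") if i % 2 else [p] for i, p in enumerate(parts)]
    return ["".join(combo) for combo in product(*options)]

def current_probs(lines):
    """Line chosen uniformly first, then its wildcards expanded uniformly."""
    probs = {}
    for line in lines:
        captions = expand(line)
        for caption in captions:
            probs[caption] = probs.get(caption, Fraction(0)) \
                + Fraction(1, len(lines) * len(captions))
    return probs

def naive_cache_probs(lines):
    """All expansions flattened first, then one chosen uniformly."""
    captions = [c for line in lines for c in expand(line)]
    return {c: Fraction(1, len(captions)) for c in captions}

print(current_probs(["photo of foo", "image of {bar|baz}"]))
# {'photo of foo': Fraction(1, 2), 'image of bar': Fraction(1, 4), 'image of baz': Fraction(1, 4)}
```

A probability field in the cache file would let the naive-cache path reproduce `current_probs` exactly.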
My point was that the format might need a field for the probability as well, even if the first implementation doesn't need it (most likely initializing it with a uniform distribution).

Image latent cache: Is there a need to add other variations apart from `latents_flipped_`? Use cases:

- Brightness / color variations (monochrome image, boosted saturation, very dark, very light image)
- Different area crops, possibly rotated, of the same width × height (but this will probably require different captions too and is better handled as separate images?)

`{original_image_base_name}`: Max 250 characters of basename with illegal characters filtered out? Or a known-safe string, e.g. `{sha256(original_image_file)}`? Use cases:

- Windows machines, where there will be problems with filename lengths > 255.
- Windows / Linux dual boot (Windows software for image processing and Linux for training): I once created a filename containing `|` on an NTFS partition mounted on Linux.

The second is optional: a printed warning if a latent file cannot be opened is good enough.

May I ask when this new format and the zip function will be implemented in the training script? I am attempting to finetune SD3.5 with a large dataset, and currently, based on the size of the npz files, the cache will exceed 10 TB. 😂

If we are training with different captions on the same dataset, we should use different folders so we don't overwrite each other's text encoder cache, right? @kohya-ss

> If we are training with different captions on the same dataset, we should use different folders so we don't overwrite each other's text encoder cache, right? @kohya-ss

That's right. Regardless of caching, the only way currently to use different captions for images is to use different folders.
Thanks, this is important.

Hopefully a multi-level caption cache can be implemented; it would also be nice to be able to set the repeats or weights individually. According to the DALL-E 3 training report, a ratio of 5% tags to 95% natural-language descriptions works best. We could set 1:3:6 — tags : short descriptions : long descriptions.

And here I was, happy to have 3-4 captions in my .allcaps files and shuffle_caption = true enabled in my dataset.toml! So now I just learned that I have to duplicate all datasets by the number of caption/tag lines, or:

- stop training
- delete caches
- shuffle .txt file lines: `find ./ -type f -name "*.allcaps" -exec sort -R -o {} {} \;`
- resume training
- regenerate all caches
- hope it works

It is a major problem that captions can't be cached as multiple different strings. Can't we implement a quick fix and just duplicate the cache file? It's `_flux_te.npz` for the text encoder, right? So just having a few more `_flux_te_0X.npz` files would not be worse than duplicating the dataset, right?

All in for a new data format then! I find that enabling multiple captions + multiple crops in a single .safetensors is a great idea, and this should be pushed. I can think of having zoomed crops of hands, feet, faces, clothing, accessories... Mixing cropping sizes would be great, and each crop could have `crop_x_caption1` ...

@kohya-ss it would be so amazing if you could embed some metadata to decide whether to re-cache or not, e.g. when the prompt changed, or to make a newer version. I don't know; right now I am making different folders for every training run. :D

Are we planning to support multi-channel alpha masks? I've been using a customized loss function that needs multiple pieces of information (values) for each pixel. It would be nice if the masks could support more than one value per spatial element.
GITHUB_ARCHIVE
Virtualization can help with this, and here is how to host your whole environment locally on your machine using VirtualBox. In an enterprise you will find Solaris, Windows, OS X, Unix and what not. I personally use a MacBook Pro with VirtualBox, in which I can run as many VMs as I need to recreate the surrounding environment.

Ideally we should have an environment with the following features:

- The guest has internet connectivity, taking advantage of your network configuration in the host
- The guest can communicate with the host even if no internet connection is available
- The guest automatically works regardless of what network you are using: from work, from home, or even on the road with no local network at all

VirtualBox allows all of this if we just add a first network interface as NAT (for internet) and a second as host-only (for connectivity with the box).

We will show this process using a virtualized instance of an Advent Geneva Server (a Solaris box with some proprietary accounting software). It is useful to have this server locally to be able to mess with RSL reports, for example, without affecting other members of the dev or operations teams.

On the host, find the vboxnet0 interface to be sure you use an IP in the same network later on in the guest:

$ ifconfig -a
...
vboxnet0: flags=8843 mtu 1500
	ether 0a:00:27:00:00:00
	inet 192.168.56.1 netmask 0xffffff00 broadcast 192.168.56.255
...

In the Solaris guest, configure the first interface (NAT) to use DHCP:

$ touch /etc/dhcp.e1000g0
$ rm /etc/hostname.e1000g0
$ touch /etc/hostname.e1000g0
$ ifconfig e1000g0 plumb
$ ifconfig e1000g0 up
$ ifconfig e1000g0 dhcp start

And the second interface (host-only) to use a static IP:

$ vi /etc/hosts
...
192.168.56.2 genevadev loghost
...
$ rm /etc/dhcp.e1000g1
$ vi /etc/hostname.e1000g1
192.168.56.2
$ ifconfig e1000g1 plumb
$ ifconfig e1000g1 genevadev up
...
Be sure to define DNS servers reachable from any network so the guest can connect to the internet (only needed in case you do want internet in the guest). If you really need to access local resources, you might want to add local DNS servers as well.

$ vi /etc/resolv.conf
nameserver 18.104.22.168
nameserver 22.214.171.124

Make sure you see both interfaces (the NAT and the host-only one). They should show up as e1000g0 and e1000g1: the first with an internal 10.0.x.x IP and the second with the static IP you picked (for this example, 192.168.56.2):

$ ifconfig -a

Clearly the static IP must be in the same subnetwork (192.168.56.x). You should be able to connect to the internet from the guest and at the same time ssh into it using its 192.168.56.2 IP.

Do not forget to add the newly created server's name resolution on your host:

$ vi /etc/hosts
192.168.56.2 genevadev

The question of sharing this environment versus building it from scratch is an interesting topic. An even more interesting one is left as homework, and it deals with devops: in reality, regardless of the path you follow, you should automate your actions, and that is where things like http://vagrantup.com/ and Puppet/Chef can help you further.
OPCFW_CODE
The Definitive Guide to rightnow technologies Case Solution

I have around 5 years of experience in testing as well as development. From Feb '08 until today I have been working part-time as a consultant, since I got married and relocated to Hyderabad. But now I'm planning to switch and look for a full-time career in testing. I've gone through all the responses above and found that most of them are from people who have done a course somewhere and are now looking for a job or a break into the testing field. It is a little difficult to suggest or comment on anything upfront, since learning testing is one part and practicing it by working on some projects is another, which is more important than the learning.

I'll give a more practical way of doing this: go and bag a position first in any company; after that, within that company, switch to a project that deals with the telecom domain. A very practical solution.

I want to mention that right now I'm working in the networking field. I'm a B.Tech, but I want to move into software testing.

It happened to me when I was looking for another job. For almost 5 months I faced the same problem; I changed my resume format nearly 12 times and finally made it.

If anyone is interested in getting a job in software testing, or wants to learn about projects, software testing, or consultancies recruiting contract staff for MNCs, and resume preparation: every detail within 2 days from a real-time professional from Chennai.

I've learnt software testing at an institute, but I have some earlier experience (about 2 years) in technical support (web hosting). Can I include this in my resume? Will it affect a testing career if I put my resume on a job portal?
Hi, I am Laxman. I'm working as an application test engineer, but I want to know about performance testing. Please tell me about it.

I understand how keen you are on getting into the profession. I will suggest one thing: first build confidence in yourself. For this you should be very clear on the technical topics in testing.

Myself Amit. I have around 2.5 years of testing experience, especially in manual testing, and I also know tools like Rational Robot. But I'm still not getting any calls from companies. Currently I am working at NTPC Hydro Limited, and my job profile here is not purely testing, so I want to leave this job immediately. I want some suggestions from all of you out there. Please help a.s.a.p.

Have you worked on web banking applications? I just want to know how testing is carried out on banking applications.

Hello Rajatha, this is Sreenivas. I heard about that company name in the past. I don't know it completely, even though I tried to find out; some of my friends are working in Bangalore. OK, bye.

I did a DAC (Diploma in Advanced Computing) from CDAC Chandigarh in 2001. After that I tried for a job, but I didn't get any success, and since then I have been working as a graphics designer. I have a 10-year gap. Can I attend interviews as a fresher? If it is possible, please give me the details of the next approach.
OPCFW_CODE
Besides these aesthetic changes, San Andreas Multiplayer offers many changes to the way you play the game. You can read the full procedure below on how to install and easily play this game on your Android device, so you can have a good time with it. In this game you will navigate the missions using the map and complete every mission given by the different bosses. It is an open-world game where you play as an ex-gangbanger. So just download the APK file from here and enjoy the game on any Android device.

Just click on the download link provided below to play this game on your mobile or tablet. The city has been changed, and it includes all the vehicles and characters we used to find there, as well as radio stations, traffic and lots of small details that make this mod excellent. This is the best Rockstar game of the Grand Theft Auto series, but it is quite costly and you would have to buy it to download it from the Google Play Store; here we are providing this game absolutely free, without any human verification or survey.

In the same way, the game offers a lot of novelties at a playable level. In fact, one of the most entertaining things about the game is when rival gangs face off and fill the city with gunshots and explosions. It's amazing how a game that is more than 11 years old can still surprise you.

This GTA San Andreas APK has better graphics than the GTA Vice City APK, so you will enjoy the game more. In the latest version of the San Andreas mobile APK you get good-quality graphics, which makes it better than other games. As you go through the game you will see that new missions unlock when you complete the given mission. In the game, you can buy garages, stores, bars and houses, pick up hookers, and do all the other things that give San Andreas its unique charm. It is developed by Games Mobile Pro for Android platforms with the listed version or higher.
You can also buy guns and ammunition, which will help you complete the missions in which you have to fight your enemies. If you want to add or remove any controller button, you can do it from the controller settings, which is very useful. Once you complete a mission you earn the respect of your boss as well as money. You can use this money to buy real estate like houses, garages, cars and much more. The local crime bosses send you out on missions ranging from carjacking to kidnapping to drug running. San Andreas Multiplayer is an excellent mod for Grand Theft Auto: San Andreas, as it allows you to continue enjoying one of the best games in history. In addition to this basis, we'll have gang fights and races, different weapons to destroy other gangs and many ways to have fun committing crimes out there. Lastly, the official report from VirusTotal gives you the guarantee that the app is 100% safe for this and any of its previous versions. The most remarkable feature, of course, is the possibility to confront other players in the city of San Andreas. You can walk around the city and fight people for money. Now Rockstar has decided to give it away free of charge. The protagonist is named Carl; after the death of his mom he returns home to take vengeance on the people who killed her. By using the on-screen control options, you can control the movements of your character and the camera. If you are looking for it then you are on the right webpage. You can run over your opponents, shoot them, throw them into the air — basically do whatever you want to them in the huge city created by Rockstar. Who on Earth doesn't know Grand Theft Auto? For Android: direct download links!
If you want to enjoy the game, you can take missions, and after completing a mission you will be awarded money. You can run over your enemies, shoot them, blow them into the air, etc. San Andreas Multiplayer is a mod of the Windows version of Grand Theft Auto: San Andreas that lets you enjoy the same great Rockstar game online against your friends and other players from around the world, with up to 500 gamers playing at the same time on one server. Maybe you have already played this game on a computer, but after the success of the GTA Vice City apk, Rockstar introduced the GTA San Andreas apk for Android, letting you experience the same graphics and missions on your android device. The story connects with the earlier Grand Theft Auto, where Carl Johnson returns to his home after his mother has been murdered. You just have to download the apk file and install it on your android device. Download from the link given below. Just download the apk file and read the procedures carefully, which are given below, to install and play the game easily on your mobile or tablet. This game is based on the city of Los Santos, which is an American city. Multi Theft Auto: San Andreas is an excellent mod for Grand Theft Auto: San Andreas that lets you extend the life of the legendary Rockstar North game to infinity. This modification for the original game adds a lot of changes compared to the traditional version. You'll experience all the excitement of robbing cars and motorbikes and completing the missions the crime bosses send you out on, all in a 2D top-down view. The game revolves around different cities, and you can go everywhere in the city with the help of cars, motorcycles, bicycles and other vehicles.
Obj-C: Alternatives to pass by reference. So, I have 2 variables declared in a class, let's call it model.h, and in this class I initialize instances of the classes Car and Road. If I want the Car to know about Road, what options are there, besides passing a reference of Road to Car in Model, that would be good in this case? .h is the file name extension that indicates a header file -- you wouldn't name actual classes Car.h or Road.h. Caleb: It's for clarity - as a class usually has a header and an implementation file. There's really no need -- just use the class name. @TomLilletveit You are actually making this harder to follow, not easier. How is this a question about passing by reference (I don't see any code)? Do you mean "Alternatives for creating a circular reference" via header files? Is this about passing instances or including headers? You seem possibly a little bit unclear on the difference. As I understand it, passing a pointer to a pointer is not a great design pattern to use throughout your app. It's reserved in Objective-C for only a select few operations, namely NSError. See bbum's response in the thread below: Arguments by reference in Objective-C. Judging by its acceptance and its author, I'd say it's pretty good advice, though perhaps not "gospel". You could certainly pass pointers to pointers all you want, but it's probably not the best approach. The alternative is to consider what values you're passing by reference (meaning pointer-to-a-pointer) and then think about the design of the methods/functions so that you might encapsulate those values into a class that is returned from your method/function. I wouldn't consider passing by reference to mean 'pointer-to-a-pointer'; in normal usage it means passing in a pointer. Any Obj-C class is passed by reference, although there are rare cases where you want to modify the reference, like the NSError one you've identified. I think this is what the OP means, but I'm not certain. @sapi, you're right.
The semantics are a little confusing, but I did mean "pointer to a pointer" and not passing by reference. I understand that Obj-C objects are always passed by reference. I've amended my answer. In Objective-C, objects are always passed by reference, but you can mitigate that by carefully designing your classes. For example, NSString and NSNumber are always passed by reference, but you can treat them as if they're passed by value because they're immutable. You can't alter an NSString (only an NSMutableString) or NSNumber, so it doesn't really matter whether you're passing by value or by reference. Similarly, copying objects and using -isEqual: to compare them later can simulate pass-by-value. NSDictionary does this: -setObject:forKey: copies the key and uses only the copy, while -objectForKey: uses -isEqual: and -hash to compare keys. In your case, you might be able to make Road objects immutable (or at least never mutated after they're loaded) if your "map" of roads is fixed and you don't need to link roads back to cars. If you do that, then passing by reference is effectively a helpful optimization, not a bug waiting to happen. There really are no other options. Objects are always accessed via pointers in Objective-C. There's no way to declare a variable that is an object; you can only create variables that refer to objects. I was thinking of alternatives in how I design my classes, like using a Singleton, etc., but a Singleton is not suitable in this case.
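The distinction the thread keeps circling -- an ordinary object pointer versus an NSError-style pointer-to-a-pointer "out parameter" -- can be sketched in plain C (the `Road` struct and function names here are illustrative, not from the thread):

```c
#include <string.h>

/* A stand-in "object". */
typedef struct { char name[32]; } Road;

/* Passing a pointer: the callee can mutate the object the caller sees,
 * but cannot change WHICH object the caller's variable refers to. */
void rename_road(Road *r, const char *name) {
    strncpy(r->name, name, sizeof r->name - 1);
    r->name[sizeof r->name - 1] = '\0';
}

/* Pointer-to-a-pointer (the NSError** pattern): the callee can repoint
 * the caller's variable at a different object entirely. */
void replace_road(Road **slot, Road *replacement) {
    *slot = replacement;
}
```

This is why the thread treats `NSError **` as a special case: ordinary Objective-C message passing only ever gives a method the first kind of access.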
Cultivation Chat Group – Chapter 1322 – Big Shots Stealing the Limelight (p1). The big-eyed planet didn't want to directly kill Song Shuhang. It wanted to obtain the whereabouts of the 'Nine Virtues Phoenix Sword' from him and destroy the 'small world' that carried the aura of the Third Wielder of the Will before killing him. "Kill people with big bellies first." Before the light could even reach her body, it was swallowed by endless darkness. "The size of a planet?" Feng Qiaozi immediately passed this message on to the bow-wielding old man. It was a pity that he couldn't even enjoy it before it was taken from him. Fellow Daoist Tyrannical Song was a truly powerful helper! The Sage's eyeball had been taken away by Senior Skylark… Honestly speaking, the Sage's eye's ability to impregnate whomever it stared at was just a remarkable cheat. I wonder what kind of ability this Skylark's Eye has… After Song Shuhang turned his head… space split apart, and a massive rock the size of a small hill drilled out of the crack. After the rock came out, it turned into a sphere, and an eye opened on the sphere. Song Shuhang honestly replied, "In a dream." Was it possible that one of the three deities with a Divine Kingdom had attacked him? "I am very grateful for this.
Well, Fellow Daoist Feng Qiaozi, farewell. I'll be waiting for the good news of your glory. Anyway, keep in mind 'Senior Skylark', who left just now. She is not herself right now. If she returns, you should be careful and mind your safety. At that time, you should avoid her as far as possible," Song Shuhang warned him. After arriving, Skylark clutched her left eye. "Damn it. This isn't the part I was lacking; this isn't what I wanted." This fellow had actually been summoned here. "You're courting death!" Skylark roared. Her figure flickered, and she appeared near Song Shuhang. The moment Song Shuhang turned his head… space split apart, and a huge rock the size of a small mountain drilled out of the crack. Whom could it be for? "Fellow Daoist Tyrannical Song, I don't believe I'll be able to suppress the consciousness of that ruler of the Netherworld for much longer. I'll see you again if fate permits it." Xuan Nu Sect's Skylark waved at Song Shuhang, and then her figure sped away into the distance. "Where did that attack come from?" Song Shuhang looked up to the spot where the light had come from. Thankfully, he was only a projected body, so even though his right leg was severed, he didn't feel any pain. However, their spatial lock had not been able to stop the big-eyed planet. It was as though the lock they had placed was as weak as a spider web before it. Behind it, the projection of an even bigger figure had emerged, ready to break through space and descend. At the same time, the 'holy light lock' the big-eyed planet had set up was triggered and attacked Skylark.
Skylark ran frenziedly as she shouted to Song Shuhang, "Fellow Daoist Tyrannical Song, you need to leave quickly. When that liquid metal ball regains control, it may head back here to make trouble for you." "The spatial lock was broken. What is the origin of this thing?" On the main battleship of the Thirty-Three Divine Beasts' Sect, the bow-wielding old man frowned. "The size of a planet?" Feng Qiaozi immediately passed this message to the bow-wielding old man. After Song Shuhang sent the transmission, his figure became transparent. Fellow Daoist Tyrannical Song was a truly powerful helper! On the main battleship, the bow-wielding old man stroked his beard, smiling a bit. "Fellow Daoist Tyrannical Song has contributed a lot to the success of this battle and the suppression of the false deities. When this battle is over and the battlefield has been cleaned up, we should give him a good share of the spoils."
So this is a rant about all the trendy new interaction devices. In fact, not so much against the devices themselves, but the way people keep trying to use them to emulate old things. Our knowledge, our science, our tools have been built in an incremental way. Of course, who would invent a spanner when nuts and bolts don't exist? And even though sometimes someone has a great idea that is very far-fetched, it also often meets with doubts and perplexity. This is how we just missed getting computers 100 years early, which might have bumped human technology onto a very different path. And so this is how, after getting computers and monitors, people feeling the urge to interact with those beasts soon came up with the keyboard, and then the mouse. For a very long time keyboard and mouse have reigned supreme over the world of user interfaces. Hypertext Editing System. Original photo by Greg Lloyd. Of course there had been attempts at alternative ways, but these interfaces never took off and were doomed to stay a niche market. Probably a few schoolboys and girls from the 80s in France remember the Thomson light pen? And there have been graphics tablets for decades now, and people using 3D VR stations have those gloves, and videogames have their own things filled with buttons. But we all know the latest trend that seems to be taking off for real: touch screens. This is not the dawn of touch interfaces; they were pioneered long ago by the graphics industry. Despite a few attempts at PDAs and smartphones (P900 anyone?) that were expensive toys, the widespread use of touch devices more likely began with the Nintendo DS and then the iPhone. So what's wrong with touch devices? I would say feedback. When you have used a keyboard for so long, feedback is what lets you know you mistyped even before seeing it. This does not happen with touchscreen emulated keyboards.
Now worse, the software thinks it is smart enough to guess what you intended to type, even in some situations where you did *NOT* make a mistake. This is all because we want to use touchscreens the way we used keyboards. But touchscreens will never be keyboards; they are screens! Kids like to look at pictures and slide them to see the next one. This is intuitive and cool. Probably the kids born around now will never have to use a real keyboard and won't miss it. But for me, just as the mouse supplements the keyboard but can't substitute for it, the same can be said about the touchscreen. How many people use mouse emulation with the numpad? Don't type technical words! This all-touch trend, I call it "fuzzy interface". And by the way, I find it much more straining to type on a touchscreen because you have to check continuously that no error has been made, either by you OR the software. Here is an anecdote to close this case, taken from a Google Summer of Code mailing list for mentors: . Random person says ''company XYZ can help assess candidate students' abilities blablabla...'' . Person in charge from Google answers ''actually I am hot interested in other companies reviewing our students..." ... 8hrs later, same Google person: ''Not, not hot. Touch screen keyboard :-)" Happy touching!
What does unsupervised mean? Unsupervised learning is a branch of machine learning within AI that allows data to be understood without much (or any) prior information. This is the flip side of supervised learning, which relies on labeled data to build the best model. Unsupervised learning is generally useful in situations where the end goal is not too clearly defined, so evaluating unknown patterns or trends could be an advantage. The rest of this blog contains some examples of unsupervised machine learning and how they can be used in real-world applications. Perhaps some of the examples overlap with data that you possess? Clustering is a very common example of unsupervised learning, taking big data – typically numeric data, as pattern-finding can otherwise be difficult and time consuming – and finding commonalities and clusters of interesting sets. This mode of machine learning has some interesting uses. For example, music players and audio streaming platforms like Spotify and Apple Music will use this type of machine learning to cluster music into surefire recommendations. The AI engine will automatically generate a range of features with which to bind the music into clusters, allowing previously-unprocessed music to be instantly classified and packaged for users. Geo-clustering can have many uses, the most obvious being the ability to provide insight into geo-tags, usually in longitude-latitude format. One way to achieve this is by using 'k-means', which is a clustering algorithm. With k-means, a predetermined number of clusters is provided as input and the algorithm generates the clusters within the unlabeled dataset. If used on a large scale, you may have to factor in the curvature of the earth, but more simply it can provide insight and even assist with scheduling a route with many points of location. Natural language processing is an exciting topic within machine learning.
Language can often be the most unstructured and random in form, especially when factoring in multiple languages. This method uses many layers of preprocessing on the text or audio data to make it workable. This can then feed into modelling and pattern mining to identify commonalities in language. At its most direct, it can identify patterns in huge data sets of text transcription or audio of language. At its best, it can attempt to understand language. Classic examples are voice assistants like Siri and Amazon Alexa, but this approach can also help with language-based chatbots and service call automation. Auto-encoders are like the more common convolutional neural network (CNN), except they excel at the things a CNN is not so great at. Just like any neural network, an auto-encoder is trained to produce an output similar to its input (it attempts to copy its input to its output), and since it doesn't need any labels, the training is unsupervised. This can power a huge range of applications, from natural-language understanding to improving image recognition, identifying images and linking them with tags that have not been assessed before by the AI. Possible examples could be a smart image searching tool or a language recognition tool working with unstructured data. Making the most of the data: While the above examples are only a brief summary of what unsupervised machine learning can achieve, the theme is consistent: these AI systems are best applied to random, unstructured, mixed-up data to provide structure and meaning that would surpass manual and even end-user ability when it comes to processing. With more and more companies newly investing in AI, unsupervised learning is set to become a prominent way to unlock the value in the oceans of unstructured data waiting to be processed.
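Since the post leans on k-means as its main clustering example, here is a minimal pure-Python sketch of the algorithm on toy 2-D data (no ML library; the data and all names are illustrative, not from the post):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means on 2-D points: assign each point to its nearest
    centroid, then move each centroid to the mean of its members."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # pick k distinct points as starting centroids
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # nearest centroid by squared Euclidean distance
            i = min(range(k),
                    key=lambda j: (p[0] - centroids[j][0]) ** 2
                                + (p[1] - centroids[j][1]) ** 2)
            clusters[i].append(p)
        for i, c in enumerate(clusters):
            if c:  # keep the old centroid if a cluster emptied out
                centroids[i] = (sum(x for x, _ in c) / len(c),
                                sum(y for _, y in c) / len(c))
    return centroids, clusters

# Two obvious blobs; with k=2 the centroids settle near each blob's mean.
pts = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
       (5.0, 5.1), (5.2, 5.0), (5.1, 5.2)]
centroids, clusters = kmeans(pts, 2)
```

As the post notes, k is an input you must choose up front; the algorithm only decides where the k clusters sit, not how many there should be.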
This is a very rough beginning, one where I'm not even sure if I'm starting in the right place. I have a "smart" AV matrix switch from GoFanco. For anyone wondering, it is a device that takes multiple HDMI video signal inputs and can route them to multiple outputs. You could use it to broadcast one HDMI signal to multiple outputs, switch the signal being broadcast, send it to only two outputs, etc. In short, it's a powerful HDMI switcher. This one is "smart" too, in that it has a cloud portal and can even be (clunkily) connected to an Alexa skill for control. Thankfully, though, it also has local control, through a web interface at a local URL for the device, as seen in the image below: I've looked all over and there isn't any documentation about an API for the device. The most I can find are some hex codes that can be used when controlling the device through a serial cable. If it has a local, web-based UI, surely there's a way I can isolate the commands the web UI kicks off and have Hubitat "spoof" them, right? Can anyone advise on how you would go about finding out what messages the web UI is sending to the device, and then maybe on how to have Hubitat send the same messages? Am I going about this the right way, even? Found what you likely found, and it's promising at least: From the sound of the author's reason for writing this tool, the way the device uses HTTP POST is a mess, so he created this service. It was easy to configure and run, and after setting that up, visiting the local URL of the service and clicking various links does control the device. For example, I can visit loadmap?map=1 and it applies my AV mapping set up in mapping 1. I do wish there was a simple way to go directly to what this service app is doing, but I can't find it. At a simpler level, how would I use Hubitat to trigger the service? For example, I have a Rule Machine rule set up to trigger off a virtual button.
I assume the action needs to do one of these: However, which one would it be, and how should the message/URL be formatted if I wanted Hubitat to simulate my clicking that URL to load mapping 1? Sorry, I know this is a complete beginner type of question, and I'm likely missing some nuances in what I'm asking. I assume the "Send HTTP Post" option, with a properly-formatted request, would be used to cut this middleman service out and accomplish the same thing, but again I am not sure how to read what the service is actually sending to my device.
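To make the "which HTTP action" question concrete: clicking a link like loadmap?map=1 in a browser issues a plain HTTP GET, so Hubitat's HTTP Get action given the same URL should reproduce the click. A small Python sketch of how that URL is assembled (the host, port, and path here are hypothetical placeholders, not taken from the poster's setup -- substitute whatever the helper service's page actually links to):

```python
from urllib.parse import urlencode, urlunsplit
from urllib.request import Request  # urlopen(req) would actually send it

# Hypothetical address of the local helper service.
HOST = "192.168.1.50:8080"

def build_loadmap_request(map_number):
    """Build the GET request a browser sends for 'loadmap?map=N'.
    A Hubitat HTTP Get action would be given the same full URL."""
    query = urlencode({"map": map_number})          # -> "map=1"
    url = urlunsplit(("http", HOST, "/loadmap", query, ""))
    return Request(url, method="GET")

req = build_loadmap_request(1)
```

For sniffing what the service itself sends to the switch, the browser's developer tools (Network tab) on the service's page show the exact method, URL, and body of each request, which can then be replayed the same way.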
F. Vahid, A. Pang, K. Downey, C. Gordon. Entries by Patrick Thornham. C. Gordon, R. Lysecky, F. Vahid. Based on the twelfth edition of the well-known textbook written by authors J. David Irwin and R. Mark Nelms, this zyVersion contains the complete text of the original textbook and adds interactive elements to help keep your students engaged. See the variety of tools that provide an exceptionally hands-on approach to presenting digital design by combining theory and practice, including various web-based simulators, like a circuit simulator, finite-state machine simulator, high-level state machine simulator, datapath simulator, and more, plus numerous tools, like a Boolean algebra tool, a K-map minimizer tool, and more. Please join us for a walkthrough of our Control Systems Engineering zyVersion. This zyVersion is based on the 8th edition of the well-known textbook written by author Norman S. Nise. In this webinar you will learn about our Data Structures Essentials titles along with our newly created labs for Data Structures in C++, Python, and Java. Based on the ninth edition of Fundamentals of Engineering Thermodynamics by Michael J. Moran, Howard N. Shapiro, Daisie D. Boettner, and Margaret B. Bailey, this zyVersion contains the complete text of the original textbook and adds interactive elements to help keep your students engaged. Learn about the only interactive learning solution for Intro/Fundamentals of Data Science. This new zyBook covers data preprocessing, regression techniques, supervised and unsupervised learning algorithms, decision trees, neural networks, ensemble methods, and model evaluation techniques. A Smooth Term & Successful Students: For many computer science and engineering instructors, anxiety and stress are all too familiar feelings.
To support our community of zyBooks instructors, we're exploring the topic of mental health in STEM higher education and raising actionable strategies to help professors overcome burnout. Join us for an exciting discussion with the author to learn more about the latest digital offering of his material! Join us for a live panel discussion with some of your colleagues in the HBCU computer science instructor community. Cay Horstmann Presents: Java Late Objects and Big C++ now with zyBooks! Based on Koretsky's Engineering and Chemical Thermodynamics textbook, this zyVersion includes over 150 animations and 500+ participation questions that help students engage and interact with the material. zyBooks utilizes the Say, Show, Ask approach to learning. With less text and more action, students learn through short readings (say), animations (show), and learning questions with answer-specific feedback (ask). Learn the story behind our acquisition, our plans to convert trusted texts into intuitive and interactive zyVersions, and get a chance to evaluate the new Operating System zyVersion. Based on Goodrich and Tamassia's Algorithm Design and Applications 1/e, this zyVersion contains the complete text of the original plus new interactive animations and learning questions to engage the student. Cay Horstmann discusses teaching CS in a post-covid digital world. We will demonstrate his new zyBooks version of Big Java Late Objects (and a potential preview of upcoming C++). Add autograded Coral activities to your CS0 and CS1 course to offer a simple introduction to programming. This recording shows the latest updates for the Computer Organization and Design zyBook, as well as a peek into what the Labs and Simulator will look like for our upcoming RISC-V and ARM versions of this product. Learn about the MIPS simulator, auto-graded labs, and more. What if you could save an additional 5-10 hours per week, without sacrificing student results in your Computer Science courses?
Let us guess: you’ve never been able to find a textbook that quite fits your course the way you’d like it to. Or your students lack the basic math skills needed to be successful in your course, but you don’t want to assign a second book to focus on math. Let us show you how we can help. In this webinar, we will share the current state of research on mental health in undergraduate engineering students, including ongoing research to understand the impact of COVID-19 on student well-being. F. Vahid, A. Pang, K. Downey, C. Gordon. J. Kelly, J. Bruno, and A. Edgcomb, 2021. C. Gordon, R. Lysecky, and F. Vahid, 2021.
Search the Community: Showing results for tags 'rpi'. I am trying out my new Nano on a Raspberry Pi 3B+ and have issues getting the Nano online. I have run wp6.sh, checked the default gateway and confirmed the ports, but it ain't getting online (bulletins not updating). Any suggestions or ideas on what I should try? Is it a Pi issue? Cheers, Jack. Hi all, I was playing with hoover.pl last night, which works great, apart from when it gets to line 108: (system("$iwconfigPath $interface mode monitor")) && die "Cannot set interface $interface in monitoring mode!\n"; It returns the above die error, with the reason given being: Error for wireless request "Set Mode" (8B06): SET failed on device wlan1; Device or resource busy. I am using an RPi3 with an Alfa Wi-Fi card connected to one USB port. The internal Wi-Fi chip is wlan0 and the external Alfa card is wlan1. I have the internal chip wlan0 connected to my home Wi-Fi, as intended, so that I can SSH to the RPi. However, I believe the issue above stems from the fact that, when I run iwconfig, both wlan0 and wlan1 are showing as connected to my home Wi-Fi. I don't want this; I would like wlan0 to connect to my home Wi-Fi, but wlan1 to stay available for use in monitor mode. I have tried: iwconfig wlan1 down; iwconfig wlan1 mode monitor (and/or airmon-ng start wlan1); iwconfig wlan1 up. No luck; wlan1 still insists on reconnecting to my home Wi-Fi and setting itself back to Managed mode. How can I stop wlan1 (Alfa card) connecting to my home Wi-Fi, but leave wlan0 (RPi3 internal) connected to it? Thank you. Hi all, how would you go about setting up a stand-alone Raspberry Pi, which would: be powered by solar, battery, or any other method (at least a few days of power, if possible); have some sort of internet connection available, so one can SSH/netcat to it (dongle?); be as small and discreet as possible, so it doesn't get stolen.
*edit* Think weather monitoring station, but too far away to connect to the same WiFi network as your home PC, and not in range of any free WiFi hotspots. Hello all, so I got an idea, looked in the forums, didn't really find anything related to this, thought I would share it and ask for help :) Where did this come from?: -shipping cost of a Pineapple to my location is 204 USD -the RPi has better hardware specs (which is good when your programming skills are a bit rough like mine) -u can add a touchscreen to the RPi. Hardware setbacks: the RPi doesn't have pins to set a mode, unlike the Pineapple | possible solution: using the GPIO on the RPi. Current status: -someone shared the Mk IV UI online; within 2 days I was able to get most of the UI working :) -ordered a couple of USB WLAN cards, will test how they work (the RPi doesn't like them all) -did try FruityWifi, didn't really like it :/ Goal (hardware): hardware: RPi 2; OS: Kali (if Kali 2 for Pi gets released by August 7, will continue with that); screens/panels: RGB LCD and 5-inch TFT support; network: 2 wifi cards, 1 Ethernet. Where I need support: the source of the Mk V UI (the github is empty); some coding in the GUI (for the TFT screen); possibly help making the UI run on the RPi. Goal (software): UI: Pineapple :) multiple boot options: -wireless router (tor/vpn/normal) -plug in to see network info (ip/subnet/dns, outgoing ports; needs a screen and an online server) -vpn gateway (plug the cable in, connect from home). Future updates: adding media center functions to disguise the evil inside, so you can plug it into a TV and it will work like a media center but will be a pentest box in the back :) open to any ideas/resources etc
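On the earlier wlan1 question (stopping the Alfa card from rejoining home Wi-Fi so it stays free for monitor mode): the usual culprit is the network manager re-adopting the interface every time it comes up. A sketch, assuming a stock Raspbian-style image where dhcpcd and wpa_supplicant manage interfaces -- not tested on the poster's exact setup:

```
# /etc/dhcpcd.conf -- tell dhcpcd (and the wpa_supplicant it spawns per
# interface) to leave the Alfa card alone; wlan0 keeps home Wi-Fi:
denyinterfaces wlan1

# Then, after a reboot (or restarting dhcpcd), put the card in monitor
# mode with the modern ip/iw equivalents of ifconfig/iwconfig:
#   sudo ip link set wlan1 down
#   sudo iw dev wlan1 set type monitor
#   sudo ip link set wlan1 up
```

On distros using NetworkManager instead, the equivalent move is marking wlan1 unmanaged in its configuration.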
you get a tiger pattern car seat. I put in an envelope with your name on it: I am happy to receive an invitation from your side. I put in some cloth: I take out a needle and turn your cloth into a cover for your favourite possession. I put in a volume control: I click the mute check box, so now the volume control has no real use. I put in some gasoline: I give you a car with your gas and some I put in because you didn't supply enough yourself, lol. I put in a box of firestarters: You get to be Prodigious. I put in a headphone: I give you the other one, welcome to stereo. I put in a retro record player: You get to record alternative rock. I put in a lighthouse: I give you light!!!!!!! I put in a bus: I give you a Mercedes... I put in some coke: I give you a Pepsi. I put in a set of iPod headphones: You get to listen to P.O.D.'s concert. I put in Mother Earth: You get a Father Earth depressed at home in his underwear watching TV because he has lost his wife :( I put in a compass: You get to discover America. I put in coal: you get the miracle of fire. I put in some crisps: You get crabs. I put in a blue whale: You get the river Thames. I put in a comb: You get all the honey you want. I put in some clothes: I give you a wardrobe, a lion and a witch. I suppose you could put your other clothes in the wardrobe. As for the lion and the witch, maybe you could just shove them through the back of the wardrobe. I put in an antelope: You get a cantaloupe... I put in an Edsel: You get a broken Ned Cell. I put in a stylus: You again get a confused ~s.o.s~ who hasn't come across the word stylus. I put in some flour: You get some delicious bread-flavoured astronaut food. I put in a fishing rod: You get to catch a salmon. I put in a firewall: You get a burned wall. I put in AntiVirus: You get lifetime virus immunity. I put in a backpack: You go backpacking through Australia, which is nice. I put in a Skype phone: You get to call me for free. I put in a kangaroo.
You get a wombat at large. I put in an Ibanez Custom: You get a Les Paul Custom instead because they're better. I put in a CRT monitor. I've been here (and there) for quite some time as a guest. Been wondering - seeing DW with (hardly) no new content from people like Dani or other staff - ... Hello everyone, I will take this chance to reintroduce myself to the community. I was a member on the DaniWeb forums back in 2006 and 2007. I can see a ... I just posted something and had to go through the "Click the photos with street signs" test before my post was accepted. I was logged in. I don't remember ever ...
OPCFW_CODE
Upgraded Traefik and now K8s Ingress Controller isn't working I had previously installed Traefik 2 via Helm on my K8s cluster at home. I don't remember the specific version but I know it was some version of Traefik 2. My cluster is actually several Raspberry Pis where I installed Kubernetes with K3s. For many months it had been working great. When I created Services with Ingress objects, Traefik would detect this and act like a load balancer, exposing the service to a specific host name. Today I ran a job that was apparently a little too resource intensive because it evicted the Traefik pods for OOM errors. Even after killing the large job, the Traefik pods didn't come back up on their own, so I uninstalled Traefik (via helm uninstall) and then used Helm to install the latest version (2.9.10). In particular I used the following command to install it (where I substituted <IP_ADDRESS> with my real IP): helm upgrade --install traefik --set dashboard.enabled=true --set rbac.enabled=true --set="service.externalIPs={<IP_ADDRESS>}" --set="additionalArguments={--api=true,--log.level=INFO,--providers.kubernetesingress.ingressclass=traefik-internal,--serversTransport.insecureSkipVerify=true}" traefik/traefik However, although Traefik is now running without any obvious errors, the Ingress Controller functionality doesn't seem to work. When I look at the Traefik dashboard under Services, I see api@internal, dashboard@internal, noop@internal, ping@internal, and <EMAIL_ADDRESS> but nothing else. Previously I saw services listed there on the dashboard corresponding to each Ingress object I had defined. I can post the full YAML of any of my Ingress objects if desired, but suffice it to say that all of them have the kubernetes.io/ingress.class: traefik annotation defined. Has something changed with the most recent version of Traefik? The helm upgrade --install command that I used is identical to the one I used many months ago.
But like I said, this time around the Ingress Controller functionality no longer seems to work. Can you check the logs of the traefik pods and verify there are no permission errors? It "doesn't work" because you provided --providers.kubernetesingress.ingressclass=traefik-internal in your additional args in your helm install command, and the Traefik docs for this argument say that with this arg specified, only Ingress objects with the annotation kubernetes.io/ingress.class: traefik-internal will be processed. However, your Ingress objects' annotation is kubernetes.io/ingress.class: traefik -- that's why it "no longer works". To resolve this, you can run the helm upgrade command you ran with adjusted args: helm upgrade --install traefik --set dashboard.enabled=true --set rbac.enabled=true --set="service.externalIPs={<IP_ADDRESS>}" --set="additionalArguments={--api=true,--log.level=INFO,--serversTransport.insecureSkipVerify=true}" traefik/traefik This will make Traefik process all Ingress objects that lack the annotation, have it with an empty value, or have the value traefik. If you want to be restrictive, then: helm upgrade --install traefik --set dashboard.enabled=true --set rbac.enabled=true --set="service.externalIPs={<IP_ADDRESS>}" --set="additionalArguments={--api=true,--log.level=INFO,--providers.kubernetesingress.ingressclass=traefik,--serversTransport.insecureSkipVerify=true}" traefik/traefik Thank you so much! This fixed things straight away.
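For anyone hitting the same symptom, here is a minimal Ingress sketch (the host and service names are hypothetical, not from the question) whose annotation matches the restrictive variant of the command above:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami                     # hypothetical example service
  annotations:
    # Must match --providers.kubernetesingress.ingressclass=traefik,
    # otherwise Traefik silently ignores this Ingress.
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: whoami.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80
```

The key point is that the annotation value and the ingressclass argument are compared as plain strings, so a mismatch like traefik vs traefik-internal makes the Ingress disappear from the dashboard without any error being logged.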
STACK_EXCHANGE
Android: Opening .hprof file in Eclipse I'm trying to check for memory leaks by using an HPROF file from Eclipse's DDMS view. I tried using MAT to read a .hprof saved to disk but got this error: Error opening heap dump 'com.myapp.myapp.hprof'. Check the error log for further details. Unknown HPROF Version (JAVA PROFILE 1.0.3) (java.io.IOException) So I followed a solution in another post on StackOverflow which told me to change the preference Android > DDMS > HPROF Action: View In Eclipse. But that just displays the file as an unreadable text file. I'm assuming it's supposed to be easier to understand than that, so what am I doing wrong? EDIT I read in other posts about using something called hprof-conv.exe. I tried to open that; it flashed a screen then closed (even when opening as an administrator), so I don't know how to use it. How did you create the HPROF file in the first place? @CommonsWare In Eclipse, Open Perspective > DDMS, in the Devices view I clicked "Dump HPROF File" and then I got the file above automatically opened in Eclipse. Are you trying to use the standalone MAT, or the MAT that is an Eclipse plugin? @CommonsWare I downloaded the standalone from the Eclipse website. If it can work within Eclipse though, I would like to know how to do that. The "Open in Eclipse" option will only work if you are using the MAT Eclipse plugin. The MAT Downloads page shows the "Update Site" link, which you can add to Eclipse via Help > Install New Software > Add. It worked on my computer with the independent MAT; maybe someone at Eclipse upgraded that functionality. I had the same problem as OP and had the independent MAT. I had no idea there was a plugin. The tutorials I tried to follow didn't specify. You can use the HPROF Converter tool provided in the Android SDK.
The hprof-conv tool converts the HPROF file that is generated by the Android SDK tools to a standard format so you can view the file in a profiling tool of your choice. hprof-conv <infile> <outfile> More at HPROF Converter. After converting, the file opens without any issue. You can still use a separate MAT (which I figured out is the right thing to do, because the guy who develops this system does it that way and he seems to know it's a smarter way to separate concerns). You can do that. You have to copy that file to a non-temporary directory (desktop or something like that) and open it like you would a converted file. You save this file and then open it in the Eclipse MAT. It seems like the DDMS "save to disk" saves the regular .hprof file and the other "save to file" saves an encoded ADT version, at least on my machine. For more information on this approach watch this video http://www.youtube.com/watch?v=_CruQY55HOk
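For what it's worth, the reason MAT balks is visible in the very first bytes of the dump: an .hprof file starts with a NUL-terminated format tag, and Android writes "JAVA PROFILE 1.0.3" where hprof-conv rewrites it to a desktop-style version MAT understands. A small sketch to peek at the tag (the helper function is mine, not an SDK tool):

```python
def hprof_version(path):
    """Return the NUL-terminated format tag at the start of an .hprof file."""
    with open(path, "rb") as f:
        header = f.read(32)  # the tag fits comfortably in the first 32 bytes
    return header.split(b"\x00", 1)[0].decode("ascii", errors="replace")

# An Android heap dump reports "JAVA PROFILE 1.0.3"; if that is what you see,
# run it through hprof-conv before opening it in standalone MAT.
```

Checking this tag first saves a round of guessing about whether the dump itself is corrupt or merely in the Android-specific format.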
STACK_EXCHANGE
Can't handle big file Hi, I use ESP32+SIM7100. I can get data for a small payload but it can't handle anything bigger than a few kilobytes; I get disconnected. I tried increasing the RX buffer, but that only doubles what it can handle. For example, it can download up to 2KB with a 1KB RX buffer, and it'll handle 5KB if I increase the RX buffer to 2K. Here's a sample request with 1MB size. http://vsh.pp.ua/TinyGSM/test_1m.bin

```cpp
void simpleGetRequest(){
  if (!ppposClient.connected()) {
    Serial.println("Connecting...");
    ppposClient.connect(WEB_SERVER, 80);
  }
  if (ppposClient.connected()) {
    Serial.println("Connected");
    Serial.println(ppposClient.write(REQUEST, strlen(REQUEST)));
    while (ppposClient.available()) {
      Serial.print((char)ppposClient.read());
    }
  }
}
```

@garudaonekh Try this modification of simpleGetRequest():

```cpp
void simpleGetRequest(){
  uint8_t recv_buf[1024];
  if (!ppposClient.connected()) {
    Serial.println("Connecting...");
    ppposClient.connect(WEB_SERVER, 80);
  }
  if (ppposClient.connected()) {
    Serial.println("Connected");
    Serial.println(ppposClient.write(REQUEST, strlen(REQUEST)));
    int _r;
    unsigned long _t = millis();
    while (millis() <= _t + 9000) {
      while (!ppposClient.available()) {
        if (millis() > _t + 10000) break;
      }
      while (ppposClient.available()) {
        bzero(recv_buf, sizeof(recv_buf));
        _r = ppposClient.read(recv_buf, sizeof(recv_buf) - 1);
        if (_r > 0) {
          Serial.write((const uint8_t*)recv_buf, _r);
        }
        _t = millis();
      }
    }
  }
}
```

Thanks, it works. But the data size received is a bit bigger than the file. The file size is 1,048,576; the number of bytes I receive is 1,049,035. I guess it's some header data. But how do I distinguish it? Here's the way I count data. Is it correct?
```cpp
void simpleGetRequest(){
  uint8_t recv_buf[1024];
  if (!ppposClient.connected()) {
    Serial.println("Connecting...");
    ppposClient.connect(WEB_SERVER, 80);
  }
  if (ppposClient.connected()) {
    Serial.println("Connected");
    Serial.println(ppposClient.write(REQUEST, strlen(REQUEST)));
    int _r;
    unsigned long _t = millis();
    long c = 0;
    while (millis() <= _t + 9000) {
      while (!ppposClient.available()) {
        if (millis() > _t + 10000) break;
      }
      while (ppposClient.available()) {
        bzero(recv_buf, sizeof(recv_buf));
        _r = ppposClient.read(recv_buf, sizeof(recv_buf) - 1);
        if (_r > 0) {
          c += _r;
          Serial.println(c);
          //Serial.write((const uint8_t*)recv_buf, _r);
        }
        _t = millis();
      }
    }
    Serial.print("Number of Total bytes received:");
    Serial.println(c);
  }
}
```

@garudaonekh Check this modification with header extraction and showing percentages of downloading:

```cpp
void simpleGetRequest(){
  uint8_t recv_buf[1024];
  if (!ppposClient.connected()) {
    Serial.println("Connecting...");
    ppposClient.connect(WEB_SERVER, 80);
  }
  if (ppposClient.connected()) {
    Serial.println("Connected");
    Serial.println(ppposClient.write(REQUEST, strlen(REQUEST)));
    int contentLength = 0;
    int currentLength = 0;
    bool headerFound = false;
    int _r;
    int lastPercentReport = -1;
    unsigned long _t = millis();
    while (millis() <= _t + 9000) {
      while (!ppposClient.available()) {
        if (millis() > _t + 10000) break;
      }
      while (ppposClient.available()) {
        bzero(recv_buf, sizeof(recv_buf));
        _r = ppposClient.read(recv_buf, sizeof(recv_buf) - 1);
        if (_r > 0) {
          if (!headerFound) {
            if (contentLength == 0) {
              char* cont_len = strstr((char *)recv_buf, "Content-Length: ");
              if (cont_len) {
                cont_len += 16;
                if (strstr(cont_len, "\r\n")) {
                  char slen[16] = {0};
                  memcpy(slen, cont_len, strstr(cont_len, "\r\n") - cont_len);
                  contentLength = atoi(slen);
                  Serial.printf("Content-Length: %d\n", contentLength);
                }
              }
            }
            char* srtr = strstr((char *)recv_buf, "\r\n\r\n");
            if (srtr != NULL) {
              int header = srtr - (char*)recv_buf;
              // move only the bytes that actually follow the header,
              // so we never read past the end of recv_buf
              memmove(recv_buf, srtr + 4, sizeof(recv_buf) - header - 4);
              Serial.write((const uint8_t*)recv_buf, _r - header - 4);
              currentLength += _r - header - 4;
              headerFound = true;
            }
          } else {
            Serial.write((const uint8_t*)recv_buf, _r);
            currentLength += _r;
            Serial.printf("%.2f percents\n", (float)currentLength / (float)contentLength * 100.0);
          }
        }
        _t = millis();
      }
    }
    Serial.println("Firmware Size: " + String(contentLength) + " Downloaded: " + String(currentLength));
  }
}
```

Thanks, it works. I use SIM7100CE which is 4G CAT3, but the speed is very slow. It took on average 100-120 seconds to download a 1MB file. It took 250 seconds with my SIM800L. The Internet on my phone is 5~8MBps. I tried increasing PPPOS_RXBUFFER_LENGTH to 10KB and BUF_SIZE to 10KB, but it's still the same. Do you have any suggestion? My target is 10~20 seconds for 1MB so that I can transfer images from the ESP32-Cam. @garudaonekh try to increase the uart baudrate between the simcom and the esp32, and also the Serial output. In the example it is only 115200 bits per second, which is nearly 14 kilobytes per second. Thanks once again, I increased it to 921600, and it's now around 30 seconds to download 1 MB. I can't go higher than this; it looks like this is the limit of the ESP32, even though the SIM7100 supports up to 4,000,000. Anyhow, this is enough for my case. I made the wrong assumption that 115200 could handle up to 100KB/s.
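The closing exchange about baud rate is easy to sanity-check: with 8N1 framing each byte on the wire costs roughly 10 bits (start bit, 8 data bits, stop bit), so the UART between the modem and the ESP32 caps throughput well below what the 4G module itself can deliver. A quick back-of-envelope sketch (the function name is mine, not from the thread):

```python
def download_seconds(size_bytes, baud, bits_per_byte=10):
    """Rough lower bound on transfer time over a UART (8N1: ~10 bits/byte)."""
    return size_bytes / (baud / bits_per_byte)

one_mb = 1024 * 1024
# At 115200 baud the UART tops out near 11.5 KB/s, so a 1 MB file needs
# ~91 s before any PPP/TCP overhead -- nowhere near 100 KB/s.
t_115200 = download_seconds(one_mb, 115200)
# At 921600 baud the raw floor is ~11.4 s; the observed ~30 s in the
# thread is plausible once protocol and Serial-print overhead are added.
t_921600 = download_seconds(one_mb, 921600)
```

This matches the thread's numbers: the 100-120 s downloads at 115200 baud were essentially UART-bound, and raising the baud rate, not the RX buffer, was the fix.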
GITHUB_ARCHIVE
Ignition coil arc initiation with protected low voltage continuation? I'd like to initiate a spark plug arc with an ignition coil and then power the resulting (highly conductive) plasma with an ultracapacitor (low voltage, say 12V to 24V, and high current) without damaging the ultracapacitor during the initiation. It would do something similar to this circuit: But instead of relying on a 20KV power supply, it would be preferable to use a low voltage supply. I'm trying to avoid any arc-circuit resistance beyond the arc itself and the negligible ESR of the ultracapacitor. I find the aforelinked PDF description of the above pictured circuit somewhat confusing. It says the 20KV capacitor discharges through the inductor upon the Arc Strike Signal closing the switch -- but doesn't say anything about the timing of the circuit in relation to the switch re-opening the circuit (which is presumably what causes the ignition spark). Moreover, why would it not be sufficient to dispense with the capacitor in this role and dispense with the timing problem by increasing the inductance of the coil, lowering the Igniting Power Supply voltage and then closing and opening the switch at "leisure" to produce the magnetic field and then the spark as with an ordinary ignition coil? It's almost as if the description is off -- and that the capacitor is there only as a "snubber capacitor", to absorb the back EMF away from the Arc Power Supply. Thinking along these lines: Might the ultracapacitor serve as its own snubber capacitor if the polarity of the back EMF discharges the inductor-side plate? Also, if the inductor has reasonably high inductance, it seems the Arc Strike Pulse Unit's power supply can be the same (kind of) supply (say a lead acid battery) that charges the ultracapacitor. Perhaps something like this (I was unable to find a spark gap in the circuit lab parts so I used a voltage controlled switch for the arc): You close SW1 to charge the capacitor. Open it.
Then close SW2 to establish the inductor field. Open it and the arc is initiated while the inductor's back EMF draws a small amount of charge from the inductor-side plate. The sacrificial fuse then breaks the arc plasma's circuit after it is delivered a short, high current, low voltage pulse. You can play with the switches and simulate the behavior at this falstad URL. Is this conjectured circuit reasonable? @Sparky256 The voltages/currents he is talking about are not that dangerous, and certainly not more dangerous than welding equipment. Do a Google search for the string "xenon short arc lamp ignitor". There is much information there, including some DIY approaches. Here's my suggestion for some experimental work. Since you are looking for a DC discharge, maybe all you need is a single high voltage pulse of quite short duration to trigger the arc. So, maybe try a camera flash circuit put through a small step-up transformer to initiate it. To protect the supercap, use a couple of high-value inductors and, behind them, some zener diodes to limit any voltage higher than the capacitor charging voltage. You will likely need a scope that can handle high voltages to see exactly what is happening - there could be some weird resonances involved. Also, initially, leave out the supercap and just see if you can trigger a one-shot arc successfully. If you want to go beyond that, google "stun gun circuit" for your high voltage generator, or more simply try something like this from eBay. Ignore the hype - you will get more like 15-50kV from it - the wider the discharge electrodes the higher the voltage and the slower the repetition rate. I've done some work on a project with similar requirements, a DIY TIG welder. The crux of the issue is that the same electrodes must be fed both high current at low voltage and low current at high voltage.
This is difficult, because doing it the naive way requires switching devices which can block several kilovolts yet pass tens or hundreds of amps. The typical solution appears to be discharging a high voltage capacitor into an inductor in series with the high current low voltage supply, as in the OP's schematic. Usually this is done with a spark gap. As the energy rings down in the parallel LCR circuit thus formed, a high enough voltage to strike an arc is briefly present at the output. While conceptually simple, this has a few drawbacks: It requires a high voltage power supply to charge the capacitor to several kV. Arc initiation cannot be triggered exactly when required without substantial extra circuitry. It's not solid state. My solution is to use a custom step-up transformer in series with the low voltage high current supply: The transformer is optimized first and foremost for a very low resistance secondary winding. Low resistance is critical for passing large currents through the secondary winding once the arc is struck. Thus the turns ratio of the transformer is fairly small and the primary only has a single turn, allowing the high voltage secondary to be constructed of just twenty turns of thick copper tape. In order to still get the required voltage gain despite the 1:20 turns ratio, the primary forms a series resonant LC circuit with a tank capacitor. When driven at its resonance frequency, the current through this circuit rises with every passing cycle, until the primary voltage peaks at several hundreds of volts, the output electrode at several thousands, and an arc forms.
One neat but still untested feature of this topology is the ability to use the same H bridge for both arc initiation and controlling the main welding current, just by changing the switching frequency: I'll still have to investigate if it's feasible to use the high voltage transformer as the choke for PWM (as shown above), or if using a separate inductor for current regulation makes more sense (just letting the HV transformer saturate when the arc is present). If you click through the falstad URL I gave, you'll notice that I have, apparently, answered your first two "drawbacks". Arc initiation uses a 12V source and its inductor is 33uH. Timing of the arc is controlled by momentarily closing the switch between the source and the inductor. In that moment, the magnetic field of the inductor acquires enough energy to jump the arc. What is the drawback of this, other than the need for a high current, high speed switch?
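A note on the numbers in the falstad sketch above: the energy available to strike the arc when the switch opens is whatever the 33 uH coil has stored, E = ½LI², and the time needed to build that current from the 12 V source follows from V = L·dI/dt. A quick sketch (the 10 A peak current is a made-up example, not a value from the question):

```python
def coil_energy_joules(inductance_h, current_a):
    """Energy stored in an inductor's magnetic field: E = 0.5 * L * I**2."""
    return 0.5 * inductance_h * current_a ** 2

def ramp_time_s(inductance_h, volts, current_a):
    """Time for current to ramp to I with constant V across an ideal inductor."""
    return inductance_h * current_a / volts

# 12 V source and 33 uH coil from the falstad sketch; 10 A is hypothetical.
E = coil_energy_joules(33e-6, 10.0)   # ~1.65 mJ stored at switch opening
t = ramp_time_s(33e-6, 12.0, 10.0)    # ~27.5 us of switch-closed time
```

The tens-of-microseconds ramp time is what makes the "high current, high speed switch" in the closing question the real engineering constraint of this approach.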
STACK_EXCHANGE
SignalR NPM package breaks Angular app build Disclaimer: This relates to the older @aspnet/signalr package. I don't know if it is already fixed in the new one, but for now the package is pretty unusable in an Angular/Webpack scenario. After adding @aspnet/signalr the project stopped compiling with the error "TS2322: Type 'Timeout' is not assignable to type 'number'." To Reproduce Create an Angular project that has a call to setTimeout: var t: number = setTimeout( () => { console.log( "here" ); }, 1000 ); Build it with ng build It should fail with error TS2322. The cause The @aspnet/signalr package has a dependency on @types/node. If it gets installed, it hijacks the "standard" ambient declarations, so setTimeout gets the wrong type. Unfortunately it's impossible to tell the TypeScript compiler to skip some type, and the signalr package also doesn't want to compile if the Node.js types are missing. Thanks for contacting us, @AndrewMayorov. I don't see any easy workaround here. @anurse thoughts? We'll look at it @AndrewMayorov can you post a runnable sample project that reproduces the issue? Also, are you using the ASP.NET Angular SPA templates or just a standard Angular app? I managed to get the repro together. As a workaround (I know it's not ideal) you can override the TypeScript behavior using the below snippet and it should work: let t: number = (setTimeout(() => { console.log("..."); }, 1000)) as unknown as number; Casting to number via unknown tells TypeScript to ignore the conflict :). We'll look at a first-class fix though, since that's not ideal. Looks like the easiest fix here is for us to remove all our @types dependencies from the devDependencies section. The assumption was that since they were devDependencies they'd only apply to this instance (not when installed as a package) but it seems that's incorrect. The conflict is occurring (as @AndrewMayorov pointed out) because installing @aspnet/signalr also brings down the dev dependencies. Thank you for the answer, @anurse!
Workarounds are not a real option here, because we already have tons of code, and this is, after all, a hack. So for now I preferred the workaround of loading SignalR as an external script. But this is also bad, of course. Ok, glad to hear you've got some kind of workaround. I realize this is something we need to fix, just want to make sure you can be unblocked as quickly as possible :). @AndrewMayorov I tried out an ng new app and could repro the issue without adding @aspnet/signalr. This is because by default ng new comes with a direct reference to @types/node and an indirect reference via @angular-devkit/build-angular. See https://github.com/angular/angular-cli/issues/13784 for a discussion about that. After removing both of those deps and then adding @aspnet/signalr I couldn't repro the issue. "After adding @aspnet/signalr project stopped compiling with the error "TS2322: Type 'Timeout' is not assignable to type 'number'."" This sounds like your project was working fine before, so we'll need some help getting a repro because we can't reproduce it. Would it also be possible to provide your package-lock.json? Yeah, I also can't repro this on a plain package.json with SignalR installed. I don't think SignalR is the one bringing in @types/node. I'm going to close this for now, but if you can share a project that works without @aspnet/signalr and doesn't work with @aspnet/signalr we can investigate further.

```
> cat .\package.json
{
  "name": "test",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "@aspnet/signalr": "^1.1.4"
  }
}
> dir .\node_modules\@types
dir : Cannot find path 'C:\Code\anurse\Scratch\test\node_modules\@types' because it does not exist.
At line:1 char:1
+ dir .\node_modules\@types 2>&1 | clip
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (C:\Code\anurse\Scra…node_modules\@types:String) [Get-ChildItem], ItemNotFoundException
    + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.GetChildItemCommand
```

@anurse, @BrennanConroy, you are right, guys. I also cannot make a fresh project that reproduces the problem. The Angular template indeed has @types/node in devDependencies, and just removing it is not enough, as the Angular build depends on it (via webpack). So I have no idea why it behaves differently in our project. We don't have an explicit reference to the Node typings, but the transitive dependency is still there. I will try to reproduce it there.
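For readers hitting the same TS2322, here are both the cast from the thread and a cleaner alternative (the ReturnType idiom is my suggestion, not something proposed in the thread) that compiles whether or not @types/node is in scope:

```typescript
// The workaround from the thread: force the DOM-style number type.
// The double cast silences the conflict between the DOM signature
// (returns number) and the @types/node signature (returns Timeout).
const t1: number = setTimeout(() => console.log("tick"), 50) as unknown as number;
clearTimeout(t1);

// Alternative: let the compiler pick whichever handle type is ambient,
// so the same line type-checks against both declaration sets.
const t2: ReturnType<typeof setTimeout> = setTimeout(() => console.log("tock"), 50);
clearTimeout(t2);
```

The ReturnType form avoids hard-coding either declaration set, which matters in a codebase that may be built with or without the Node typings installed.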
GITHUB_ARCHIVE
The Book of Programming Ideas document contains 64 example programming exercises for students to try. The sections cover everything from basic input and output exercises through to subroutines, functions, and file handling. There are Python and Java versions of the document, although all exercises are written to be language-independent. This book contains 27 programming tasks for students. They range from simple assignments that output messages and variables to the screen, to more complex programs to play simple games or solve common problems. Each activity lists the required programming knowledge (e.g. loops) to complete it. This is a really useful booklet for starter or extension tasks, allowing students to work at their own pace. The activities are written so that they are language independent. This booklet was created by Stuart Lucas and Michael Kölling and is licensed under the Creative Commons Attribution Share Alike licence. You can download the PDF version and a Microsoft Publisher version, allowing you to edit the booklet. These Java Helpsheets cover various aspects of the language including if statements, data types, loops (while and for), and the Scanner class. Each sheet summarises the concept and provides several example lines of code. The sheets can be printed out on A4 or A3 paper and make excellent wall displays to help students who are new to programming. This resource was created by Matt Lowe and is released under the CC-BY-ND licence. This Python handout and the PowerPoint version are designed to test students' understanding of loops in Python. The second example includes an infinite loop for students to spot. The third example demonstrates working code with awful variable names - a good opportunity to highlight how 'simple' things like these can help make code much more (or less) readable. This simple programming project requires the input of ice cream preferences (size, sprinkles, etc) and then calculates the total cost.
The emphasis is on accepting and validating user input. With many different combinations of ice cream and prices available, this is also a good chance to talk about testing. The project is written to be programming language agnostic: we have used it successfully with Python and Java, but it is possible to use it while teaching most languages. Skills required: selection / if statements, while loops Code Academy is an incredibly popular site for good reason. It has a range of excellent programming tutorials that focus on practical skills development, providing students with 'bite-sized' programming tasks that have clear and easily achievable goals that stop students getting discouraged. The system also keeps track of students' progress and allows them to learn at their own pace. Python seems to be a particularly popular language for beginners (particularly for GCSE and IGCSE), and I have found the Code Academy tutorials to be very useful for my students. Compiled languages such as Java and C/C++/C# do not generally have tutorials on Code Academy, but as these are generally considered languages for students with more programming experience, this should not be much of a problem.
OPCFW_CODE
This is a guest post by Bastien Castagneyrol. This is an issue I’ve thought about (as have others), and like Bastien, I don’t quite know what action to take. I like Bastien’s climbing metaphor. In a related one, the journey from subscriber-pays paywall to author-pays-open-access crosses a very rugged landscape, with crevasses both obvious and hidden. Disclosure from Bastien: what follows is not exhaustive and could be much better documented. It reflects my feelings, not my knowledge (although my feelings are partly nurtured with some knowledge). I’m trying here to ask a really genuine question. The climbing metaphor My academic career is a rocky cliff. As a not-senior-yet-but-not-junior-anymore researcher, I am supposed to climb in lead. The top of the cliff is quite far away, but luckily I have a strong harness and a solid rope to hold me. I have a well secured position. For those who ever enjoyed rock climbing, my situation looks something like the drawing above (I am the “established” researcher). I could keep hanging from this comfortable position. Or I may want to climb further up, because discovering new horizons is exciting, because it will help me get more lab facilities, and, let’s be honest, because the salary will be better too. But to climb further, I absolutely need someone down the cliff to hold the rope. PhD students. Students do a great job in the field, in the lab, and they can do magic stuff with R. Over the last few years, I’ve become interested in some research areas I would have never considered if I had not been pushed that way by “my” students. Students not only secure my own position, they help me climb further up. But as the rope stretches, I cannot climb any more if the folks at the other end do not join me. Here comes the concern, and here the climbing metaphor (almost) stops. As a tutor/adviser/supervisor/mentor, I must help students climb too. How can I do that? 
Students need papers Scientific papers in our academic world are currencies. Having one 50 € note in my wallet will give me more opportunities than having one 5 € one. Likewise, a common belief is that I will have more career opportunities with my name in a good position in top-rank journals (at least well established journals in my field). It may not be true, and it should not (among other things, published papers should not be the only currency), but let's assume that young researchers will get more recognition – and greater chances to pursue their academic careers – if they have a bunch of papers published in the so-called "good journals". Real people outside academia also want to read scientific papers There is a growing concern in the scientific community about open science. Because public academic research is, by and large, paid for by citizens, it is legitimate that those who pay can access what they paid for. People who want to be able to read scientific papers, however, find that they have traditionally been hidden behind paywalls. (There are many other reasons why we – as members of the scientific community – may want to break paywalls down, but this is not what I want to discuss here.) Several propositions have been made to open up science and make scientific papers freely accessible to anybody. And it will shortly become mandatory (at least in Europe) to make papers from publicly funded research open access. But then the question is: who pays? Research and knowledge are not free. Even the internet is not free, and editing and archiving papers also have a cost. So, again, who pays? If the reader does not pay to read (or if their institution doesn't pay for them), then the authors have to pay to make their papers accessible for free. Actually, they pay with their grants.
If it comes from a public science funding agency, then to be able to read such an open access paper, citizens paid the salary of the people who did the research and of course the fees for making the paper open access (not to talk about the cost of sensors, reactants, fences, students' grants, travel, or whatever was needed to do the research). And the publisher gets the open-access fee. This author-pays model costs a lot, but it makes it possible to make scientific papers "gold open access" while still publishing in famous journals. Does it mean that we pay for the fame? Kind of. But recall that, for young or still-young-but-older researchers, this kind of fame also means career opportunities. Can students afford open science? Open science can be completely free*. A bunch of researchers recently launched the PCI initiative. PCI stands for "Peer Community In…" – for instance, PCI in Ecology. The principle is simple and seductive. Very briefly:
1 – you are proud of your paper
2 – you upload it on a (free) open archive (for instance, bioRxiv)
3 – from bioRxiv it goes to PCI
4 – your paper is handled by recommenders and then reviewed, as it would be in any other journal
5 – you can make changes to your paper following recommendations
6 – if the recommender deems the work to be valid, your paper receives its RECOMMENDED sticker
(7 – you can still send your recommended paper to a classical journal)
Appealing, as I said. Buuuuuut…. no impact factor, no famous journal name. Just sound science. And here comes the promised question: Should I encourage "my" student(s) to send their next papers to PCI? One of the reasons I haven't made the leap so far is because I couldn't make up my mind. On the one hand, the system is appealing (to me) and is likely to fix some annoying issues with the current publication system. My gut feeling is that it deserves to take off.
But of course, it will only take off if we ("established" scientists, see above) go this way and, more importantly, if we value the science in PCI papers as we would value the science in any other journal. On the other hand, we need our papers to find an audience, otherwise the science inside them won't have made an impact and our students' careers may not get the boost we want them to. Famous journals are under the spotlight, and that helps them get an audience. This is maybe not the case yet for PCI. None of these arguments is really new, but new ideas need time and discussion to mature. My guess is that people have had time to hear about "broken" publishing, open access, and PCI. Many people have thought about these issues; you probably have. Maybe what you think now is better informed than what you could have thought a few months ago. And maybe you can tell me whether I should encourage "my" student(s) to send their next papers to PCI**? © Bastien Castagneyrol, September 17, 2019; illustration ditto, but licensed CC BY 4.0. * To the submitting author, I mean. There are still costs here, but they're borne by fundraising by the "publisher". ** I anticipate some questions, so, to initiate the discussion, let's assume that: (1) I can pay for article publication charges in a 100% open access or hybrid journal, (2) students are first authors, (3) students want to pursue their career in academia, (4) other co-authors don't care.
One of the diagram types which Capella has adopted from the Unified Modeling Language (UML) is the Class Diagram. Class Diagrams are used extensively in modeling information systems. They can be useful in more general systems architecture models by allowing data and terminology to be modeled precisely. All entities created in a Class Diagram are stored in the Data folder of the model. We did not include a data model for the Toy Catapult example for simplicity and because the toy was intended to be purely mechanical. Nevertheless, we would like to find some way to expose you to this diagram type. In teaching design thinking approaches, we have found three diagram types to be useful in the early stages of a design project: the Context Diagram, the Concept Classification Diagram, and the Influence Diagram. All three of these diagram types can be created in Capella using the more general Class Diagram. In this short tutorial, we will give an example of each and explain briefly how they were created. There is really nothing for you to do in this tutorial: use it as a reference.

- In every phase of a Capella model, you can create Class Diagrams by clicking [CDB] Create a new Class Diagram in the Transverse Modeling section of the workflow.
- The Palette has lots of tools, but all of the diagrams in this section were created using only the Class, Association, and Generalization tools.
- The first diagram we find useful is a Context Diagram. In a context diagram, the system of interest is placed at the center of the diagram and the stakeholders (actors) and other entities are placed around it. Relationships between the system and these surrounding entities are indicated with rectilinear lines with verb phrases. For example, we read in this diagram that "The Child retrieves the toy catapult from storage and plays with it." The importance of the context diagram is that it identifies interfaces between the system of interest and the contextual world around it.
Thinking about those interfaces often gives rise to important system requirements. As you develop your design, you will discover more and more external entities to which your system is related. You should continue to capture those relationships. The context diagram is a great place to keep track of them.

- The structure of the context diagram is easy to create. Just add classes for each entity using the Class tool and connect them using the Association tool. The only trick is in getting the verb phrase into a label on the connector. Here you see the connector from the Parent class to the Toy-Catapult-Concept class selected. The label on the connector is stores. Here is the Properties view that accomplished that: observe that we didn't bother to name the association; we accepted the default name ("DataAssociation34"). Also observe that there are two roles for every association. Presumably, one role is for the source class and the other role is for the target class. In practice, we had to use trial and error to figure out which role would result in a label appearing on the diagram. Here it is clear that the role on the right, which we named stores, resulted in the correct label.
- The second diagram we find useful is a Classification Tree Diagram. Brainstorming sessions result in many concepts. It is often useful to organize the concepts into a classification tree. Another use is to keep track of the major decisions that were made in arriving at a final design choice. That is what we illustrate here. We show a progressive set of choices ending with our final design decision: to go with a Trebuchet-style toy catapult. If we run into difficulties with our design, we may want to backtrack up this tree and try a different branch leading to a different design concept.
- The structure of this Classification Tree Diagram was created using the Class tool for the concepts and the Generalization tool for the branches. We styled them as rectilinear lines.
We also used the Note tool to create the textual notes, moving them backward below the lines and classes, to create a sense of organization.

- The third diagram type we find useful is an Influence Diagram. Here we create a node capturing some desirable attribute of the contextual world and then capture how other quantities or qualities influence that desirable attribute. There are two flavors of the diagram, one reflecting the "As Is" state of the world and the other reflecting what we hope our design will achieve, the "To Be" state of the world. Here is an "As Is" diagram where we reflect on what it takes to delight a child. Here we read that our goal of "Entertainment-Pleasure" can be provided by "Narrative, Humour, Challenge, Anticipation, and/or Sensory Stimulation." We do a deeper dive on this line of thinking and see that "Personality" can contribute to a "Narrative", and so on. The diagram is meant to be suggestive of ways in which we can intervene and provide value.
- To create this diagram, we used the same techniques as described above for the Context Diagram, except that we styled the lines as oblique rather than rectilinear.
- Finally, we cloned the "As Is" Influence Diagram and added a node representing our proposed intervention, the Toy-Catapult-Concept. You can read this diagram to see that the toy catapult can lead to "Entertainment-Pleasure" because it supports a battle-themed narrative, it can be given a personality (just paint eyes on it) to support the narrative, it can be used to hit targets as part of a challenge, it involves a time delay (from loading to triggering) that creates anticipation, it can be materialized to provide tactile stimulation (use nice materials), and finally, it can be shaped and colored to provide visual stimulation. This is the beginning of a value proposition for the toy.
Following on from this question: imagine that you want to test for differences in central tendency between two groups (e.g., males and females) on a 5-point Likert item (e.g., satisfaction with life: Dissatisfied to Satisfied). I think a t-test would be sufficiently accurate for most purposes, but that a bootstrap test of differences between group means would often provide a more accurate estimate of confidence intervals. What statistical test would you use?

Clason & Dormody discussed the issue of statistical testing for Likert items (Analyzing data measured by individual Likert-type items). I think that a bootstrapped test is ok when the two distributions look similar (bell shaped and with equal variance). However, a test for categorical data (e.g. a trend or Fisher test, or ordinal logistic regression) would be interesting too, since it allows one to check the response distribution across the item categories; see Agresti's book on Categorical Data Analysis (Chapter 7 on logit models for multinomial responses). Aside from this, you can imagine situations where the t-test, or even non-parametric tests, would fail when the response distribution is strongly imbalanced between the two groups. For example, if all people from group A answer 1 or 5 (in equal proportion) whereas all people in group B answer 3, then you end up with identical within-group means and the test is not meaningful at all, though in this case the homoscedasticity assumption is also largely violated.

IMHO you cannot use a t-test for Likert scales. The Likert scale is ordinal and "knows" only about relations between values of a variable: e.g. "totally dissatisfied" is worse than "somehow dissatisfied". A t-test, on the other hand, needs to calculate means and more, and thus needs interval data.
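The pathological case above (group A split evenly between 1 and 5, group B all 3) is easy to verify numerically. A minimal sketch with hypothetical counts, using only the standard library: the group means are identical, so a t-test sees nothing, while a hand-rolled chi-square statistic on the response frequencies flags the difference immediately.

```python
from statistics import mean, pstdev

# Hypothetical data matching the pathological case described above.
a = [1] * 20 + [5] * 20   # group A: everyone answers 1 or 5
b = [3] * 40              # group B: everyone answers 3

print(mean(a), mean(b))      # equal means -> a t-test detects no effect
print(pstdev(a), pstdev(b))  # 2.0 vs 0.0 -> wildly unequal spread

# A chi-square statistic on the response counts does flag the difference.
categories = [1, 2, 3, 4, 5]
obs_a = [a.count(c) for c in categories]   # [20, 0, 0, 0, 20]
obs_b = [b.count(c) for c in categories]   # [0, 0, 40, 0, 0]
n = len(a) + len(b)
chi2 = 0.0
for oa, ob in zip(obs_a, obs_b):
    col = oa + ob
    if col == 0:
        continue                  # skip empty response categories
    ea = col * len(a) / n         # expected count under independence
    eb = col * len(b) / n
    chi2 += (oa - ea) ** 2 / ea + (ob - eb) ** 2 / eb
print(chi2)  # 80.0 -> the two response distributions clearly differ
```

With 2 degrees of freedom here, a statistic of 80 corresponds to a vanishingly small p-value, which is exactly the point: the categorical test sees what the mean comparison cannot.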
You can map Likert scale scores to interval data ("totally dissatisfied" is 1 and so on) but nobody guarantees that "totally dissatisfied" is the same distance from "somehow dissatisfied" as "somehow dissatisfied" is from "neither nor". By the way: what is the difference between "totally dissatisfied" and "somehow dissatisfied"? So in the end, you'd do a t-test on the coded values of your ordinal data, but that just doesn't make any sense.

If each single item in the questionnaire is ordinal (and I don't think that this point can be disputed, given that there is no way of knowing whether the quantitative difference between "strongly agree" and "agree" is the same as that between "strongly disagree" and "disagree"), then why would the summation of all these ordinal-level scales produce a value that shares the properties of true interval-level data? For example, if we are interpreting the results from a depression inventory, it doesn't make sense (to me at least) to say that a person with a score of 20 is twice as depressed as a person with a score of 10. This is because each item in the questionnaire isn't measuring actual differences in levels of depression (assuming that depression is a stable, internal, organic disorder) but rather the person's subjective rating of agreement with a particular statement. When asked, "how depressed would you say your mood is on a scale of 1-4, 1 being very depressed and 4 being not depressed at all", how do I know that one respondent's subjective rating of 1 is the same as another respondent's? Or how can I know whether the difference between 4 and 3 is the same as that between 3 and 2 in terms of the person's current level of depression? If we can't know any of this, then it doesn't make any sense to treat the summation of all these ordinal items as interval-level data.
Even if the data do form a normal distribution, I don't think it is appropriate to treat the differences between scores as interval-level data if they were computed by adding up all the responses to Likert items. A normal distribution of the data just means that the responses are probably representative of the greater population; it doesn't imply that the values obtained from the inventories share the important properties of interval-level data. We need to be careful in the behavioural sciences about how we use statistics to speak to the latent variables we are studying, since there is no direct way of measuring these hypothetical constructs and there are going to be significant problems when we attempt to quantify them and subject them to parametric tests. Again, simply because we have assigned values to a set of responses doesn't mean that the differences between these values are meaningful.

I will try to explain the proportional odds ratio model in this context, since it was suggested in at least 2 answers to this question. The score test of a proportional odds model is equivalent to the Wilcoxon rank sum test. More precisely, the score test statistic for no effect of a single dichotomous covariate in a proportional odds cumulative logistic regression model (McCullagh 1980) for an ordinal outcome was shown to be equal to the Wilcoxon rank sum test statistic. (Proof in An extension of the Wilcoxon Rank-Sum test for complex sample survey data.) Just like the Wilcoxon rank sum test, this test detects whether two samples were drawn from different distributions, regardless of the expected values. It is invalid if you only want to detect whether two samples were drawn from distributions with different expected values, again just like the Wilcoxon rank sum test.
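Returning to the original question, the bootstrap comparison of group means mentioned at the top can be sketched in a few lines of standard-library Python. The Likert responses below are hypothetical; the idea is simply to resample each group with replacement and read a percentile confidence interval off the resampled differences.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical Likert responses (1 = dissatisfied ... 5 = satisfied).
males = [4, 5, 3, 4, 5, 4, 2, 5, 4, 3, 5, 4]
females = [3, 2, 4, 3, 3, 2, 4, 3, 1, 3, 2, 3]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(males) - mean(females)  # 1.25 for this data

# Bootstrap: resample each group with replacement, recompute the
# difference in means many times, and sort the results.
diffs = []
for _ in range(10_000):
    m = [random.choice(males) for _ in males]
    f = [random.choice(females) for _ in females]
    diffs.append(mean(m) - mean(f))
diffs.sort()

# Approximate 95% percentile confidence interval.
lo, hi = diffs[250], diffs[9749]
print(observed, (lo, hi))
```

If the interval excludes zero, the difference in central tendency is unlikely to be a sampling artifact; note, per the caveats above, that this still treats the coded 1-5 values as if differences between them were meaningful.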
Querying 3 tables that return 32 records took 11 seconds. How can I improve the query?

I am doing a database query using PHP and WordPress. The query below is taking too long to return its results (32 rows; the query took 11.4666 seconds), and the database server returns a timeout error when the result count is more than 500. Is there a better way I can query these 3 tables? The schema is like this:

notes table: date_created, date_modified, note_author, note_text, contact_id
usermeta table: user_id, meta_key, meta_value
users table: ID (this is user_id in the usermeta table), user_email

SELECT note.date_created AS date_created,
       note.date_modified AS date_modified,
       firstname.meta_value AS first_name,
       lastname.meta_value AS last_name,
       useremail.user_email AS email,
       note.note_author AS author,
       note.note_text AS note,
       userid.user_id
FROM `notes` AS note
LEFT JOIN usermeta AS userid ON note.contact_id = userid.meta_value
LEFT JOIN users AS useremail ON userid.user_id = useremail.ID
LEFT JOIN usermeta AS firstname ON userid.user_id = firstname.user_id
LEFT JOIN usermeta AS lastname ON userid.user_id = lastname.user_id
WHERE userid.meta_key = 'activecampaign_contact_id'
  AND firstname.meta_key = 'first_name'
  AND lastname.meta_key = 'last_name'
  AND note.contact_id = 80426;

I would like to improve the performance of the query so it can handle thousands of records. Right now my query returns a timeout error when the result exceeds 500.

Please post the execution plan.

You are joining usermeta three times. As far as I can tell, you can pull that information out of a single join with that table. To obtain the execution plan, use EXPLAIN: https://dev.mysql.com/doc/refman/8.3/en/using-explain.html

This probably can be resolved by creating some appropriate indexes. Please [edit] your question to show us the output of SHOW CREATE TABLE for each relevant table. EAV is a difficult schema pattern to optimize. The tables seem to be over-normalized. (There is no need to split parts of a name across multiple tables.)
These composite indexes will help some:

firstname: INDEX(meta_key, user_id, meta_value)
lastname: INDEX(meta_key, user_id, meta_value)
userid: INDEX(meta_key, meta_value, user_id)

If meta_value is TEXT, leave it out of the index. If note.contact_id is a VARCHAR, quote the constant: '80426'. If this really is WordPress, see https://wordpress.org/plugins/index-wp-mysql-for-speed/
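One way to realize the "single join" suggestion above is to pivot the usermeta EAV rows once with conditional aggregation and join the pivoted result, so usermeta is scanned one time instead of three. Here is a sketch using an in-memory SQLite database (the table contents are made up for illustration; the MAX(CASE ...) pivot syntax works the same way in MySQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Minimal stand-in schema and sample data (hypothetical values).
cur.executescript("""
CREATE TABLE notes (date_created TEXT, date_modified TEXT,
                    note_author TEXT, note_text TEXT, contact_id TEXT);
CREATE TABLE usermeta (user_id INTEGER, meta_key TEXT, meta_value TEXT);
CREATE TABLE users (ID INTEGER, user_email TEXT);
CREATE INDEX meta_lookup ON usermeta (meta_key, user_id, meta_value);
INSERT INTO users VALUES (7, 'jane@example.com');
INSERT INTO usermeta VALUES
    (7, 'activecampaign_contact_id', '80426'),
    (7, 'first_name', 'Jane'),
    (7, 'last_name', 'Doe');
INSERT INTO notes VALUES
    ('2024-01-01', '2024-01-02', 'admin', 'called the customer', '80426');
""")

# Pivot usermeta once per user, then join the pivoted rows.
rows = cur.execute("""
SELECT n.date_created, m.first_name, m.last_name, u.user_email, n.note_text
FROM notes AS n
JOIN (SELECT user_id,
             MAX(CASE WHEN meta_key = 'activecampaign_contact_id'
                      THEN meta_value END) AS contact_id,
             MAX(CASE WHEN meta_key = 'first_name'
                      THEN meta_value END) AS first_name,
             MAX(CASE WHEN meta_key = 'last_name'
                      THEN meta_value END) AS last_name
      FROM usermeta GROUP BY user_id) AS m
  ON n.contact_id = m.contact_id
LEFT JOIN users AS u ON m.user_id = u.ID
WHERE n.contact_id = '80426'
""").fetchall()
print(rows)
# -> [('2024-01-01', 'Jane', 'Doe', 'jane@example.com', 'called the customer')]
```

On a large usermeta table the composite index above lets the pivot subquery and the contact_id lookup both be satisfied from the index, which is usually the difference between seconds and milliseconds for this kind of EAV query.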
M: Why I hate funnels. - kevin http://tinyletter.com/ben/letters

R: brianr This is silly. Funnels are a useful abstraction to measure and optimize conversion rates in a multi-step process. How you optimize the steps in the funnel (i.e. spam or not) is up to you. The "upside down funnel" is really an example of a viral loop, optimistically shaped to imply that loving your customers is guaranteed to bring you more of them. But that is still _worth measuring_, and once the referrals get to the "try you out" phase, it's worth understanding the process by which they become loyal customers. That's what funnels are for.

R: bcoates It's not a funnel in the sense of the object that lets you pour a wide-mouthed bottle into a small-mouthed one; it's a name for the way the graph of "how much crap is in your flow" vs. "how many customers are still trying to use your awful UI" tends to narrow down sharply and visually resemble a funnel. You're _supposed_ to hate the funnel! The funnel is what stands between you and a good UX. The (impossibly) ideal funnel is a short length of straight pipe, where 100% of user intent is efficiently converted into action.

R: badman_ting I just see them as realistic. Of the people who use the internet, only some will find out about you. Of those, only some of them will care. Of those, only some will be willing & able to pay, etc. The thing about flipping the funnel upside down is clever, but I wonder how much it has to do with scale. Word of mouth can be a great driver of growth when you're small, but what about after that? Plus, the same effect applies where only some will pay you. So I'm not sure how different it actually is. I agree that trying to make money from stuff you built often involves doing things with various degrees of grody-ness. I think what the author has done here is to think about that in a way that he finds palatable, and that's certainly important. But I wonder how much of this is about changing perception rather than action.
R: SandersAK I never understood these types of posts: take something that is worthy of criticism (the abuse of funnels in user acquisition) and then use hyperbole to make it seem like the worst thing ever. It's a tool to understand the progression of total audience into customers on a website. Just like CSAT and NPS are both valuable indicators of growth potential. None of these things are the only thing that matters. And none of them is the best tool or methodology. Just like a spoon isn't the best accessory in the kitchen. That award obviously goes to the garlic press.

R: thattallguy garlic presses ftw. I agree with your point and yet still think posts like this are worthwhile. Many, many people and businesses are so funnel-focused these days that they've lost sight of the bigger picture. Godin has been talking about this stuff for years, he's a master of hyperbole, and has had a big impact on many people because of it.

R: SandersAK garlic presses - so efficient right? I guess if they're good noise makers that get conversations started then maybe that's a net positive. I just start to wonder if hyperbole is the most effective long-term way to communicate - tho to use my own argument against me, i suppose deciding "most" is not the point ;)

R: rmk2 The thing is, hyperbole is more likely to _provoke_ reactions than a seemingly "objective" and rational utterance. Hyperbole might _annoy_ you or you might agree and find it funny; either way, it is more likely to touch you in some way. Hyperbole enables both people who agree and who disagree to more pointedly argue their particular side. If you think funnels are good, then seeing them compared to a meat grinder will probably annoy you enough to take to the comments and voice your concern or dissent. At the same time, if you find funnels not quite as good, you are equally stimulated to comment based on the apt description of your feelings.
Either way, hyperbole is a way to facilitate communication, especially in cases where the subject itself might not be the most enticing.

R: SandersAK Yeah I think that's a fair point. Though then is the goal of a post just to incite conversation as opposed to articulating a well considered opinion?

R: thattallguy Welcome to the internet ;)

R: codva Hating funnels seems sort of weird to me. Especially in this case, where it's just a model of an idea. There is nothing inherently evil about the concept of a funnel. He even makes that point when he turns it upside down and claims it as genius. The act of trying to force people through the funnel on your timeline instead of theirs is where the problem is. And that act is going to be a problem no matter how you are modeling the customer acquisition process.

R: ergest I feel the same way. In fact the idea of "flipping" the funnel is not new. Seth Godin wrote an ebook on it: [http://sethgodin.typepad.com/seths_blog/2006/01/flipping_the...](http://sethgodin.typepad.com/seths_blog/2006/01/flipping_the_fu.html) It's a lot easier for sales and marketing managers to think about conversion in terms of funnels, so a lot of analytics startups (KISSmetrics, Mixpanel) have it as a standard report. It seems that using a strong word like hate gets you attention; hence clicks.

R: programminggeek What he describes as "the audience" is part of the funnel. The idea that you have a cloud of people just out there taking in your content or marketing or apps or whatever just means they are sitting at the top of the funnel. You don't have to treat them like a meat grinder at all. You don't have to push every person "into the funnel". In fact, I would argue that's doing it wrong. If anything, the funnel should be a set of gates people pass through, more like a filter where at each stage people are more likely the target audience for what you are selling.
For example, when you go into an Apple store, they let you hang out, play with things, and are generally pretty nice to you. They also ask you questions about your needs and wants in a product and they steer you towards what they think might be the best fit. Thus, filtering you down to the right product. BUT, if you can't afford the actual purchase, they aren't going to force you into buying or badger you into something. If more websites treated their funnels like a filter instead of like a chute you are trying to force people down, the better off marketing would be as a whole. It actually makes the whole process better because you don't even try to pitch until they are ready to buy. For example, on a current side project I don't even show pricing until people use the product. I don't even give them the opportunity to buy until they have shown that they will use it. I don't want angry customers saying they bought something that they didn't use and want a refund. So, we don't try to sell until people are happy enough using the product. We will probably get fewer customers and revenue this way, but our customers will be happier and we will only have customers that use our product instead of people who are paying us because we are good at marketing to them.

R: GrinningFool IOW - "the funnel is really a filter". Which means it's not a funnel? I agree, by the way, with the approach you describe. It's a lot more sane and will result in happier customers. I just think that you and OP are saying much the same thing.

R: programminggeek I think we are too, but I think that saying that funnels are evil, bad, stupid, whatever is foolish. The funnel is still the funnel, but how you treat it makes a difference. Of course, that kind of nuance doesn't get you on the front page of HN either.

R: chipsy Re: People who are saying the funnel is still there. This is entirely a perspective thing; the customers see messages and options, not a funnel.
You only see a funnel when you think of it as one. The author describes a deliberate avoidance of business philosophy based around conversion metrics. Hence if your thoughts are to push them back into the conversation, you immediately taint his purpose. Assimilating any one set of metrics into the prime position will essentialize the business into "make those numbers go up," creating a feedback loop that guides future decisions. That feedback loop subsequently creates its own conclusions about how to advance the business. If you break the loop and construct a different one, with different abstractions, you get a different kind of business. That's the big takeaway here.

R: JonLim In the games industry, especially with free-to-play, you can hate funnels all you want, but they're a necessity for understanding where your gameplay loops are doing well and doing poorly. I've taken the long view that making a really awesome game leads to people wanting to give you money. However, my personal take on that is that it's just another way to spin funnels, in a less aggressive and predatory way. If I'm mistaken, I'd love to learn why.

R: thattallguy The process the author describes still uses "funnels", so I don't think you're mistaken. What we do have to realize, however, is that the intention and language used to describe and execute "marketing" matter a lot. The author outlines the difference (in his post, and the ones he links to):

- Content focused on teaching (adding value) vs content focused on converting
- Politely asking for emails/subscribers vs requiring an email
- Referred prospects vs captured leads

One post he links to has a great comment by Gregory Ciotti: "It humors me how aggressive certain terms can be in this regard: 'campaigns,' 'email blasts,' it's like the marketing team is waging war with their prospects." Language matters; it impacts our actions, from how we create strategies to how we interact with customers or prospects.
Everyone is in the funnel game right now; smart money positions against "everyone".

R: RexM I don't like the term funnel, it's more like a sieve... However, I do think it's a good way to visualize customers that come to your site but don't end up converting. You can look at those percentages and try to make them better. Whether you decide to "spam the f*ck out of them" or do something more personable and humane is up to you.

R: mdc A sieve would just be a one-step funnel. If your engagement involves more than one step (sieves in series) then it's a funnel.

R: thomasfrank09 I think the traditional funnel concept works just fine - as long as you do step three right. Do many blogs and services spam their subscribers? Yes indeed - and I have an itchy unsubscribe-button-clicking finger for those services. If you change step 3 to "Provide even more value", though, then you do indeed get customers that love you. And some of them refer their friends, who come in at the top of the funnel like everyone else - but with some preconceived good feelings towards you because of the recommendation they got from a friend. Pat Flynn's newsletter is a wonderful example of how to do it right. Almost all of the emails I get from him simply give me more useful information - maybe 10% have ever been strictly promotional.

R: rodolphoarruda I like the concept. Using it to measure, evaluate and manage leads that fall into level 1 (the widest part, on top) is by itself a good thing. I once worked for a large company whose sales funnel had 7 stages/filters inside, each one of them affecting or being affected by more than one organization. Dealing with it was a pain for most sales guys because it was easy to see where an opportunity was stuck in the funnel, but very hard to see why. I can imagine that a smaller, less complex organization could easily pull out a 3- or 4-level funnel, go with it, and see its benefits.
R: calbear81 We use funnels as a measurement of product quality all the time, especially in the context of understanding task completion rates and discovering areas for UX/interaction improvement. The "funnels" that Ben seems to talk about are more about sales funnels, where you keep getting pestered once you're a lead; but in the context of most e-commerce sites, funnels are a great way to know if there's something about the site that's not working for people.

R: npsimons This can be extended to other mediums - I try to only do business with companies that have minimal advertising (such as Vanguard and USAA). Think about it for a moment: where does the money for television advertisements come from? If you're a current customer of a company running TV ads, you're being bilked, and if you're a potential customer, why would you want to do business with a company that will bilk you just to get more new customers?

R: krisgee The upside-down funnel looks like an old-timey megaphone. Perhaps that's the analogy here: if you have time to shout your message, you might as well shout it to a lot of people.

R: spolu Funnels are shortsighted... Yep. But investors are too, I presume?

R: kirke Anybody see the background picture behind the article? I'm on my phone, can someone with the means extract it and post it somewhere so we can see what it is?

R: mixmastamyk It's a scene from one of the Planet of the Apes movies, where Charlie is kissing one. Why? No idea...
We’ve heard a lot about files in our day-to-day life. File is liFe with some arrangement of letters 😛. What is a file and how does it work? What are the different kinds of files? How do we read files, and why do we need file types? All of these questions will be answered by the end.

What is a file? A file is something that has data in it; you could say a collection of data of a similar type. For example, a file which contains text is a text file, a file with numbers and calculations in it could be an Excel file, and so on. It is basically a place where data is stored and transferred. At the bottom layer, a file (like any computer data) is converted and stored in a zero/one (0/1) format, i.e. binary data. For our easy understanding, this data is made visible to us in text, image, and other formats. At the end of each file name you can check the extension, e.g. FileName.txt or FileName.jpeg; some file extensions are hidden and can be made visible in the file's properties. There is a very large variety of files used by different kinds of people in the computer world. The most common file types are listed below:

- Text (.txt) file: This is the file which most of us use for writing in Notepad or WordPad (Rich Text). Software to open these files: Notepad, Notepad++, WordPad, MS Word.
- JPEG, PNG, BMP, GIF: These are all image formats (Joint Photographic Experts Group, Portable Network Graphics, Bitmap, Graphics Interchange Format). All of these can be viewed in any image viewer on a mobile, desktop, or laptop.
- MP4, 3GP, FLV, AVI, WMV: These are the most commonly used video formats in daily use. Many players (VLC, KM Player, MX Player, GOM Player) can open these kinds of files.
- EXE: The most important file type, the Executable. In the Windows operating system, this file means a specific program will be installed on your computer or a specific task will be carried out. Handle this file type with care, as it can also contain viruses and malware.
This file type runs only on the Windows operating system.

- DMG: This file is like an EXE, but for macOS. It can contain software for macOS.
- DOC, DOCX, XLS, XLSX, PPT: These are file formats for the MS Office suite. Excel, Word, and PowerPoint can open these kinds of files. They contain text, images, presentations, charts, and spreadsheets. Other software like LibreOffice, Google Docs, and OpenOffice can also open them. These are among the most used file types.
- APK: Android Application Package is a file format used on Android devices for the installation of games and applications. These files can only run on the Android operating system. They contain code which runs after the application is installed.
- IPA: These files are only for the iPhone operating system. They are used for installing applications and games on iOS.
- JAVA, XML, PHP, SWIFT, JSP, JS: These, and many more, are coding file formats mostly used by programmers. These file types contain text, mostly in English, but not easily understandable by non-programmers. (Notepad++ and the Sublime editor can open these files.)
- MP3, M4A, AAC, AMR, 3GA, WAV: These are audio file types that have been in use for a long time. The most iconic player of its era was Winamp, a favourite of many for playing audio files on Windows. Audio recordings are mostly stored as AAC and AMR. These files can also be opened in VLC player.
- PDF: Most used but mostly unknown, the Portable Document Format is famous for being readable everywhere, no matter what your operating system is or where you open it. Be it a browser, Word, or a mobile device, it looks the same everywhere.

Why so many file types? For each specific file type we write a different kind of data into the file, so for reading that data we need a viewer that can decode that 101010 format and read what's written in it. An image can also be opened in Notepad, but the data will not be in a readable format; it looks like garbled characters. For reading a specific kind of file, we need a specific viewer.
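The idea that a viewer must decode the raw 0/1 data is easy to demonstrate: many file formats begin with a fixed byte signature ("magic bytes") that identifies them regardless of the extension in the file name. A minimal sketch (the helper name `sniff` is made up for illustration; the signatures themselves are well-known published values):

```python
# Known leading byte signatures for a few common formats.
SIGNATURES = [
    (b"\x89PNG\r\n\x1a\n", "PNG image"),
    (b"\xff\xd8\xff",      "JPEG image"),
    (b"%PDF",              "PDF document"),
    (b"PK\x03\x04",        "ZIP container (also DOCX/XLSX/APK)"),
    (b"MZ",                "Windows executable (EXE)"),
]

def sniff(path):
    """Return a best-guess file type from the first bytes of the file."""
    with open(path, "rb") as f:
        head = f.read(16)          # the signatures above all fit in 16 bytes
    for magic, name in SIGNATURES:
        if head.startswith(magic):
            return name
    return "unknown (possibly plain text)"
```

This is also why renaming photo.jpg to photo.txt doesn't change what the file actually is: an image viewer (or this sniffer) looks at the bytes, not the name.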
(A chart of the more common file types appeared here.) Icons made by https://flat-icons.com/
Tahoma (typeface)
Name: Tahoma. Foundry: Carter & Cone.

Tahoma is a humanist sans-serif typeface designed by Matthew Carter for the Microsoft Corporation in 1994, with initial distribution along with Verdana for Windows 95. Tahoma is very similar to Verdana but with a narrower body, less generous counters, tighter letterspacing, and a more complete Unicode character set. Designed from the start as a bitmap rather than as outlines, the bold weight is heavy. Being based upon a double pixel width, the bold weight is more similar to a heavy or black weight. Though often compared with the humanist sans-serif typeface Frutiger, in an interview with Daniel Will-Harris, Matthew Carter acknowledges some similarities with his earlier typeface Bell Centennial. [Daniel Will-Harris. [http://www.will-harris.com/verdana-georgia.htm "Georgia & Verdana: Typefaces designed for the screen (finally)"] "TypoFiles," retrieved January 16, 2007.]

It is also the default screen font used by Windows 2000, Windows XP, and Windows Server 2003 (replacing MS Sans Serif) and is also used by Sega's Dreamcast. Bundled for inclusion in the font library of Windows, the typeface is widely used as an alternative to Arial. The Tahoma typeface family was named after the Native American name for the stratovolcano Mount Rainier (Mount Tahoma), which is a prominent feature of the southern landscape around the Seattle metropolitan area.

Bundling on non-Microsoft operating systems

On October 16, 2007, Apple announced on their website that Tahoma would be bundled with the next version of their flagship operating system, Mac OS X v10.5 ("Leopard"). Leopard also shipped with several other previously Microsoft-only fonts, including Microsoft Sans Serif, Arial Unicode, and Wingdings.
* [http://www.microsoft.com/typography/fonts/family.aspx?FID=19 Microsoft typography information on Tahoma] * [http://download.microsoft.com/download/office97pro/fonts/1/w95/en-us/tahoma32.exe Direct download on the Microsoft website]
I’d been anticipating this for a few months now, since Tame Impala was re-releasing a special edition of their debut EP, which is pretty hard to find at this point. Fortunately the first store I went to had it. With only 5000 released worldwide, I’m pretty excited. I got some other great stuff too, mostly special release singles. One of the things I’m always focused on when building things, especially things that are relatively permanently constructed, is making sure they are as flexible as possible. When I’m testing things out, I can easily swap wires, reset configurations, and mix things up to change performance, maximize efficiency, and/or change how something operates. When something gets permanently put together, that ability is often lost. With my micro solar array, I wanted to avoid that problem. The charge controller I built is extremely simple. It’s just based on an LM317 and a very basic current shutdown switch that trips when the battery is over a certain voltage (not really necessary for NiMH charge rates of < C/8, but nice to have). Depending on the sun, I will hook up a variety of different battery packs. I have one 4-cell, one 2-cell, and even a single cell occasionally. The problem here is that I normally have the array configured as a 12V*2 arrangement: two 6V panels in series, with the two series pairs tied together in parallel. This is fine for the 4-cell pack, since all four 6V panels in parallel would be too close to the pack's voltage to run through the LM317, but it’s not so great for charging a 2-cell pack: all the extra voltage is wasted. I started to wonder if I could build a very simple power block using terminals, a few strips of breakaway male headers, and a handful of jumpers to allow for easy changing of the array power configuration. By just changing the jumpers around, you could switch the array from 6V*4, to 12V*2, to 24V*1 very easily. I spent a long time drawing out different ways of doing it and eventually I came up with this.
The brick is half a standard rectangular perf board. The four terminal blocks on the sides allow the panel wires to be easily connected to the board. Underneath the board are a number of short wires soldered to connect the terminal blocks to various pins on the headers. Working out these connections was the most difficult part, because they allow the positions of the panel wires to remain the same while the jumpers alone determine the power output. This lets me consider the sun conditions and the pack I’m trying to charge, and set up the array to deliver the maximum possible charge current at as close a voltage as possible. The most complicated jumper configuration is 6V*4, since it basically requires every negative and every positive terminal to end up in the same place. But this allows me to (even on a completely overcast day) squeeze all the power I can out of the array. There's no sense in wasting power ramping up the voltage by arranging the array in series if we’re just going to be cutting it way down for a 2-cell charge. Also, in the above image, notice the digital display. That is a really sweet panel meter from Adafruit. This one measures from 0-99.9V using three wires. Obviously in an array like this, power is at a premium, so I’ve hooked up the meter to a switch, so I can just turn it off unless I want to know the panel voltage. But the great thing about this meter is that it only draws a few milliamps, meaning in really good conditions (i.e. perfect full sun) I could leave it on and just push up the charge controller a bit to compensate.
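The three jumper configurations simply trade voltage for current. A quick sketch of the arithmetic, where the per-panel voltage and current figures are illustrative assumptions rather than measurements from this build:

```python
# Four identical 6 V panels combined as N-in-series strings paralleled M wide.
PANEL_V = 6.0    # nominal voltage of one panel (assumed)
PANEL_I = 0.25   # current one panel can source in given sun (assumed, amps)

def array_output(series_per_string: int, parallel_strings: int):
    """Voltage and current for an N-series x M-parallel wiring."""
    return PANEL_V * series_per_string, PANEL_I * parallel_strings

for s, p in [(1, 4), (2, 2), (4, 1)]:   # 6V*4, 12V*2, 24V*1
    v, i = array_output(s, p)
    print(f"{s} in series x {p} strings -> {v:.0f} V at {i:.2f} A")
```

The 6V*4 case delivers four times the current of 24V*1 at a quarter of the voltage, which is exactly why all-parallel wins for a low-voltage 2-cell charge.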
How to publish your MSI and EXE setup applications in the Microsoft Store Microsoft has pre-released the ability to directly upload MSI and EXE applications to the Microsoft Store. However, at the time we're writing this article, this option is only available to a limited number of users registered in the Microsoft Partner Center preview program. If you want to get early access to this feature and see how it works, stay tuned. In this article, I will go through the steps needed to join the Microsoft Partner Center program, and then show you how to prepare and publish an MSI or EXE application. We also have an article explaining how to publish your MSIX application to the Microsoft Store - check it out. What you should know about publishing an EXE or MSI application to the Microsoft Store Before getting started, we need to make sure our EXE or MSI application meets the following conditions, as they are required for acceptance into the Microsoft Store: - It must have the ability to be silently installed, by default or with silent switches. Learn more about silent installers in our "How do I create a silent install" article. - It must have the ability to be installed standalone/offline - meaning that after the setup is downloaded, it will not require an internet connection to be installed. - It must be available in one of the approx. 100 languages supported by Microsoft. For this article, we will use Hover as our MSI application. Hover is a free tool developed by Advanced Installer’s team that allows you to run natively installed applications inside an MSIX/App-V container. Download it here! Now that we know what characteristics our MSI must have, let's look at how to register with Microsoft's Partner Center to publish your MSI or EXE to the Microsoft Store. How to register with Microsoft's Partner Center The first step in publishing an application to the Microsoft Store is to register a developer account. Go to Microsoft Partner Center and sign up.
A standard Microsoft account is required to create the developer account. The login will be made using the same email address. Depending on which type of account you are creating, you will need to pay a one-time fee ($19 for an individual account and $99 for a company account). Find out more information about account differences and exact prices based on your location here. The sign-up process is straightforward: - Select Account country/region. - Choose Account type. - Enter your Publisher display name (Company Name). - Fill in your Contact info. - Proceed to payment, review, and finalize the sign-up process. Now that you have your account created, it is time to log in to the Partner Center. You will need some information from here for the next step, so keep it handy. Join the Preview Program To have access to the preview program, you must fill out this form and wait until Microsoft processes your request. This form also requires you to enter information about the MSI or EXE application that you plan to publish. Let’s go through the steps you need to follow and what you need to input to make sure you get accepted into the program. 1. Enter the Seller ID found in the Partner Center dashboard as shown below. 2. Fill in your application name. This name must be unique, easy to read, and relevant to your application. It is best to follow Microsoft's naming best practices. 3. Select No when it asks if this app will replace an existing app in the Store. Since this is your first application, it will not replace any existing version. 4. Type in your email address. 5. Select your app type: MSI or EXE. 6/7/8. Add and configure the URL to the downloadable application. Once you get to the 6th step, add the download link for your application. Make sure the link meets the requirements of the 7th and 8th steps: - It should end with the version and the application file, e.g. /1.0/Hover.msi. - It can be downloaded without any authentication. 9. Even if digital signing is not mandatory for MSI or EXE apps, it’s best to sign them for credibility. Signing your MSI or EXE apps will also ease the process of getting accepted into the preview program. 10. Remember, your application must be able to install standalone/offline - so make sure to select Yes. 11. By default, MSI applications support silent install parameters, so you can choose that option. 12/13. If you have any dependencies, don’t forget to declare them. For each dependency, you must document in detail why you need it. Otherwise, your application will not be accepted into the store. 14. Specify whether your package has any bundleware. 15. Type the rounded value of your application's size in MB. 16. Submit the form. After you submit the form, wait for Microsoft's confirmation. Then, you can move along to the next steps. How to create your MSI or EXE app in the Microsoft Partner Center (Name Reservation) When you're accepted into the preview program, it is time to log in again to Partner Center and navigate to Dashboard to start creating your app. In the Overview section, click on New Product and select EXE or MSI app from the drop-down. Type the name of your application. It’s best to go with the same name from the subscription form. This step reserves your app name. You have three months to submit your app to the Microsoft Store; otherwise, you will lose your name reservation. Starting your Submission Once the name reservation is completed, you are automatically redirected to the application submission page. When you click on “Start your submission”, a submission page is generated. There are five subsections you need to go through before you can submit your app to the Microsoft Store. The first is the page where we set up in which markets the application will be available, as well as the price. We left Markets at the default. As for the Pricing of our app, we set it to Free.
There are other options, such as Freemium, Subscription, and Paid. Remember to save the draft before proceeding to the next steps. The Properties configuration page contains the following subsections: - Category: We set it as Developer tools with no subcategory. - Support info: Our company information. - Product declaration: It should be On if applicable to your app. - Notes for certification: This is a text box where you can add additional information that could help the validation process. Feel free to note any information that you might find useful (maybe a user and password for test accounts, dependencies, etc.). - System requirements: We marked Keyboard and Mouse since this is a simple, non-demanding app. Feel free to complete any requirements you know your app needs. The next page requires you to complete a questionnaire from the IARC (International Age Rating Coalition). Since our application is not a Game, Social, or Communication app, we chose the All Other App Types category for the Rating questionnaire. Proceed with the rest of the questionnaire and answer the questions based on your application type. Once completed, you will get a confirmation page as below. In the Packages section you declare the Package URL (the link from where your application can be downloaded), the architecture (x86 or x64), install parameters (remember, your app needs to support the silent install switches as discussed previously), and supported languages. Store listings: Manage Store listing languages. Click on Add languages to add your preferred language as well as the description of your application. After this step, you will see the status is Incomplete in the English section, as seen in the above image. For the next step, click on English and start completing the “Store listing details for English”. There are four mandatory fields: Description, Screenshots, Store Logo, and Applicable license terms.
Feel free to complete the rest of the information for a more detailed description. You need to go through these steps each time you want to add a new language. When you're done, just save the draft and close. You should get the Complete status as below. Once we hit the publish button, all we have to do is wait for Microsoft to evaluate our app. After that, our app gets released, based on the publication options we set earlier. Now you know how to publish your MSI and EXE setup applications in the Microsoft Store. Let us know how it goes, and if you have any specific questions about it, we're happy to help!
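Before submitting, it's worth double-checking locally that your installer really does install silently, since that's a hard requirement stated above. For a typical MSI, from an elevated command prompt (the file name here is illustrative):

```
msiexec /i Hover.msi /qn /l*v install.log
```

`/qn` runs the install with no UI, and `/l*v` writes a verbose log you can inspect if anything fails. EXE installers use vendor-specific switches (for example `/S` for NSIS-based or `/VERYSILENT` for Inno Setup-based installers), so check your packaging tool's documentation for the exact flag.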
More lenient order logic (moving away from online/offline orders)

(This was discussed during the technical hangout on December 22. @cpacia, @jackkleeman and @justindrake agreed this was probably a good idea. No one voiced an opinion against it.)

At the moment there are online orders and offline orders. I suggest moving instead to automatically confirmed orders and manually confirmed orders. Whether or not the buyer and vendor can establish a secio connection, the same order message is sent from the buyer to the vendor. When the vendor is made aware of the order, two cases arise:

- If the order is denominated in a currency other than BTC and the exchange rate leads to a price outside of the (optional) mispayment buffer, then the order needs to be manually confirmed (or rejected) by the vendor.
- Otherwise the order is automatically confirmed.

This is more lenient for several reasons:

- Listings denominated in BTC should be automatically confirmed, regardless of whether or not the buyer and seller could establish a secio connection.
- Listings not denominated in BTC could still have an exchange rate within the mispayment buffer, even if the buyer and seller could not establish a secio connection.
- It is better UX for the buyer to receive a confirmation as soon as possible, and for the vendor to avoid manually confirming orders if not necessary. The above scheme maximises automatically confirmed orders.

This decouples the type of payment (direct payments vs 1-of-2 clawbackable payments) from the type of order (automatically confirmed vs manually confirmed). For example, the protocol would allow for 1-of-2 clawbackable payments even for moderated payments, and would allow for direct payments even when the vendor is offline. The reference client may choose to be opinionated on the type of payment, but other implementations may give the buyer more choice.

Services (such as Duo Search) send orders using OFFLINE_RELAY messages even when the vendor is online.
This scheme allows for such orders to be automatically confirmed orders. This also simplifies the code:

- There is no need to make an exception for SendOrder in net.go as described here.
- There is no need to have an options interface{} for the handlers, and no need to hardcode options in OFFLINE_RELAY.
- The logic in handleOrder can be significantly simplified.
- The type of payment and type of order are decoupled (see above). This should result in more modular code.

I'm not following this:

> This decouples the type of payment (direct payments vs 1-of-2 clawbackable payments) from the type of order (automatically confirmed vs manually confirmed). For example, the protocol would allow for 1-of-2 clawbackable payments even for moderated payments, and would allow for direct payments even when the vendor is offline. The reference client may choose to be opinionated on the type of payment, but other implementations may give the buyer more choice.

Seems like it would be very inefficient to send the payment into a 1-of-2 during a moderated order, only to move it into a 2-of-3 afterwards. Also, keep in mind that direct payments where the vendor is online work differently, because the flow gives the vendor a chance to provide an address:

order ----> ack (with payment address) -----> payment

When the vendor is offline we don't know the payment address.

> Seems like it would be very inefficient to send the payment into a 1-of-2 during a moderated order, only to move it into a 2-of-3 afterwards.

To me it seems it is the opposite. Doing a clawback through a moderator (as opposed to directly with a 1-of-2) is very inefficient, if not completely broken. The moderator has to check that the clawback request from the buyer is legit (i.e. that the buyer is not trying to defraud the vendor). So the moderator has to contact the vendor before doing the clawback... but the vendor is offline! So the moderator has to wait out the inactivity period agreed in his moderation policy (say, 1 week).
The buyer also has to cover the moderator fee (say, 2%), and it takes time and effort to liaise with the vendor. Notice also that if the moderator is also offline, it is impossible for the buyer to singlehandedly change the moderator, and the funds would be lost forever. In summary, clawbacks using a moderator do not work, so buyers will want to send funds to a 1-of-2 first.

> When the vendor is offline we don't know the payment address.

I wasn't aware of this. Why does the vendor need to provide a payment address? Can't the buyer derive a payment address from the vendor public key and a random chaincode, similar to here?
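The confirmation rule proposed at the top of this issue can be sketched in a few lines. This is an illustrative sketch only (function and parameter names are made up, and the buffer is modeled as a simple fraction of the expected amount), not the reference client's actual logic:

```python
# Decide whether an order is automatically or manually confirmed,
# per the proposed scheme: BTC-denominated orders auto-confirm; other
# currencies auto-confirm only if the received amount is within the
# vendor's (optional) mispayment buffer.

def confirmation_type(currency: str,
                      expected_btc: float,
                      paid_btc: float,
                      buffer_fraction: float = 0.01) -> str:
    if currency == "BTC":
        return "automatic"
    if abs(paid_btc - expected_btc) <= buffer_fraction * expected_btc:
        return "automatic"
    return "manual"

print(confirmation_type("BTC", 0.5, 0.5))    # automatic
print(confirmation_type("USD", 0.5, 0.503))  # within 1% buffer -> automatic
print(confirmation_type("USD", 0.5, 0.52))   # outside buffer -> manual
```

Note that nothing in this decision depends on whether a secio connection was established, which is the point of the proposal.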
The Windows Presentation Foundation is also available for Windows XP and Windows Server 2003 by means of the WinFX Runtime Components, a package that can be downloaded from MSDN. Installing this runtime provides the support needed for applications built on the Windows Presentation Foundation. The current release of the WinFX Runtime Components is compatible with the Community Technology Preview released in May (described as Beta 1). There are still differences between the two releases, mainly around new features and compatibility with Windows Vista.

A panel determines the position of the controls and other elements it contains; a Canvas, for example, positions its children at fixed coordinates within its bounds. Like any user interface framework, WPF provides a standard set of controls, and developers are free to define custom controls as well. The standard set includes Button, Label, TextBox, ListBox, Menu, Slider, and other common controls. More specialized controls are also provided, such as SpellCheck, PasswordBox, support for ink input (as with a Tablet PC), and more. Content drawn by the application, such as shapes and other graphics, can likewise be created and manipulated by a WPF application. While controls and layout can be defined declaratively in XAML, application logic must be written in code.
Based on our record, Bootstrap seems to be a lot more popular than Milligram. While we know about 135 links to Bootstrap, we've tracked only 3 mentions of Milligram. We are tracking product recommendations and mentions on Reddit, HackerNews and some other platforms. They can help you identify which product is more popular and what people think of it. For speed and simplicity, we'll use a minimalistic CSS framework called Milligram. - Source: dev.to / 4 months ago Milligram is also one of the best lightweight CSS frameworks. It provides a nominal setup of styles for a fast and clean starting point. All sets of modules are packed in 2KB gzipped. It is specially created for better performance and higher efficiency with fewer properties to re-initialize resulting in cleaner code. It uses FlexBox, a grid system, and follows a mobile-first approach. - Source: dev.to / 6 months ago I had to do some research and it came down to Pure CSS, Tailwind CSS, and Milligram, I believe it was. - Source: dev.to / 11 months ago Since templates in http://getbootstrap.com/ are limited, is there any repository for more bootstrap 5 templates to buy? - Source: Reddit / 2 days ago I'm using bootstrap4 to get access mainly to the components that getbootstrap.com has to offer. I haven't done any custom styling or added any styling sheets. I've only been adjusting whatever I pull from the website. - Source: Reddit / 8 days ago Personally, it reminds me of a mix of Bootstrap and Susy:. - Source: dev.to / 20 days ago A better approach would be to build a modern web-based app that looks good if you're on a phone, tablet, or computer. In our household of six, we have both Android and iOS devices; I built some home automation systems and did exactly this using Bottle as the web framework and Bootstrap as the CSS that does a great job of adapting between platforms. It might not work for very sophisticated apps or UIs, but for... 
- Source: Reddit / 14 days ago In order to help me do the graphic design of the ContextMenu I've employed the latest version of Bootstrap CSS Library . - Source: dev.to / 14 days ago Bulma - Bulma is an open source CSS framework based on Flexbox and built with Sass. It's 100% responsive, fully modular, and available for free. Foundation - The most advanced responsive front-end framework in the world Spectre.css - Lightweight, responsive and modern CSS framework for faster and extensible development. Purecss - A set of small, responsive CSS modules that you can use in every web project. Materialize CSS - A modern responsive front-end framework based on Material Design Material-UI - A CSS Framework and a Set of React Components that Implement Google's Material Design
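To illustrate the "fast and clean starting point" claim made for Milligram above, here is about the smallest useful page. The CDN URL and version number are assumptions for illustration; check Milligram's documentation for the current release (its docs also note it expects Normalize.css and the Roboto font):

```html
<!DOCTYPE html>
<html>
<head>
  <link rel="stylesheet"
        href="https://cdnjs.cloudflare.com/ajax/libs/milligram/1.4.1/milligram.min.css">
</head>
<body>
  <main class="container">
    <h1>Hello Milligram</h1>
    <!-- Milligram's flexbox grid: two equal columns -->
    <div class="row">
      <div class="column column-50">Left</div>
      <div class="column column-50">Right</div>
    </div>
    <a class="button" href="#">A styled button</a>
  </main>
</body>
</html>
```

One stylesheet link and a handful of class names is the whole setup, which is the trade-off versus Bootstrap's much larger component library.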
Does utils::rtags() work for code in Rmarkdown files? I am trying to use utils::rtags() to build indexes for R code. This works fine for *.R files. I have not been able to get it to work for code in Rmarkdown files (*.Rmd). With the proper pattern argument, it finds the files fine, but seems to completely ignore the code they contain. An empty 'TAGS' file is created. For example: utils::rtags(pattern='[.]Rmd$', ofile='TAGS', type='etags') Am I expecting too much? I am pretty sure rtags wouldn't understand Rmd syntax. Maybe if you run the source through knitr::purl first it would be happy, but it would make references to the wrong file unless knitr::purl could be convinced to add #line directives. Yeah, utils::rtags uses the internal R parser to annotate code. R itself knows nothing about Rmarkdown syntax. All those features are provided by external packages. The R parser would be unable to extract just the code chunks from the document. It would only see a file with invalid R syntax. In Universal Ctags (https://ctags.io), I'm working on RMarkdown support. If you are interested, see https://github.com/universal-ctags/ctags/pull/3309 . Hello @masatakeyamato, that is interesting. Is it correct to believe RMarkdown support is still a work in progress in Universal Ctags? FWIW, I normally use MS Windows as my OS. I see you provide Win32 builds on the Ctags GitHub repo. Are those pretty well supported? @DaveBraze, yes, it is not merged yet. However, I will merge it in soon; there is no blocker. The win32 repo releases a binary package(?) daily. @MasatakeYAMATO thanks again. I'll keep an eye on it. In the meantime, can you point me to any reference on how the Universal Ctags project compares, in terms of goals and features, with other tagging utilities? Is there something like this that includes Universal Ctags? https://github.com/oracle/opengrok/wiki/Comparison-with-Similar-Tools @DaveBraze, as you found, the opengrok page is the one I know of.
The short answer is "No, utils::rtags() does not understand Rmarkdown files." But https://ctags.io/ may (soon?) provide an alternative solution. h/t @masatake-yamato (https://stackoverflow.com/users/6386727/masatake-yamato) Update: ctags now provides an RMarkdown parser. https://github.com/universal-ctags/ctags/commit/3bbeaff45d748874f566350ab065b5510e382c16
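To see why rtags fails and what the knitr::purl workaround does conceptually, here is a rough Python sketch: strip everything except the bodies of the ```{r} chunks, so a plain R parser (like the one rtags uses) only ever sees valid R code. The regex and helper name are illustrative, not knitr's actual implementation:

```python
import re

fence = "```"  # Rmd chunk delimiter, built here so the example stays self-contained
CHUNK_RE = re.compile(fence + r"\{r[^}]*\}\n(.*?)\n" + fence, re.DOTALL)

def extract_r_chunks(rmd_text: str) -> str:
    """Concatenate the bodies of all R code chunks in an Rmd document."""
    return "\n".join(CHUNK_RE.findall(rmd_text))

rmd = f"""# A heading

Some prose the R parser would choke on.

{fence}{{r setup}}
f <- function(x) x + 1
{fence}
"""
print(extract_r_chunks(rmd))  # prints: f <- function(x) x + 1
```

knitr::purl does this properly (honoring chunk options, inline code, etc.), which is why running Rmd sources through it first makes them digestible to plain-R tools, at the cost of the line-number mismatch noted above.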
jQueryMobile: close dialog without page change

I'm developing a web app with jQM. It's a single-page app with a lot of script-generated virtual pages. I use a lot of dialogs, and occasionally closing those dialogs makes my app go back to the start page (3 pages back in the history). I can't make a test case, as it only occurs in complex, database-based cases, and only on iDevices, not on a computer. I found a lot of similar issues, but all for jQM 1.2 or older. I'm using jQM 1.5.3. I added the log-page-event.js tool to my script, and it gives me this.

First, the page "visite_client-86871" is loaded:

[Log] pagebeforeshow<PHONE_NUMBER>507) (log-page-events.js, line 44) page: div.visite_client.ui-page.ui-page-theme-a.ui-page-header-fixed.ui-page-footer-fixed#visite_client-86871 data-url: visite_client-86871
[Log] hashchange<PHONE_NUMBER>313) (log-page-events.js, line 44) location: http://m2.biocrm.fr/#visite_client-86871
[Log] pagehide<PHONE_NUMBER>645) (log-page-events.js, line 44) page: div.ui-page.ui-page-theme-a#edit_visite data-url: edit_visite
[Log] pageshow<PHONE_NUMBER>145) (log-page-events.js, line 44) page: div.visite_client.ui-page.ui-page-theme-a.ui-page-header-fixed.ui-page-footer-fixed.ui-page-active#visite_client-86871 data-url: visite_client-86871
[Log] pagechange<PHONE_NUMBER>171) (log-page-events.js, line 44) page: div.visite_client.ui-page.ui-page-theme-a.ui-page-header-fixed.ui-page-footer-fixed.ui-page-active#visite_client-86871 data-url: visite_client-86871

Then, I click to open the dialog:

[Log] popstate<PHONE_NUMBER>968) (log-page-events.js, line 44) location: http://m2.biocrm.fr/#visite_client-86871&ui-state=dialog state.hash:
[Log] hashchange<PHONE_NUMBER>192) (log-page-events.js, line 44) location: http://m2.biocrm.fr/#visite_client-86871&ui-state=dialog

Then, I close the dialog:

[Log] popstate<PHONE_NUMBER>377) (log-page-events.js, line 44) location: http://m2.biocrm.fr/#visite_client-86871 state.hash: #visite_client-86871
[Log] hashchange<PHONE_NUMBER>403) (log-page-events.js, line 44) location: http://m2.biocrm.fr/#visite_client-86871

And for no reason, it returns to the home page "tournees":

[Log] pagebeforechange<PHONE_NUMBER>407) (log-page-events.js, line 44) page: div.ui-page.ui-page-theme-a.ui-page-footer-fixed#tournees data-url: tournees
[Log] pagebeforechange<PHONE_NUMBER>459) (log-page-events.js, line 44) page: div.ui-page.ui-page-theme-a.ui-page-footer-fixed#tournees data-url: tournees
[Log] pagebeforehide<PHONE_NUMBER>501) (log-page-events.js, line 44) page: div.visite_client.ui-page.ui-page-theme-a.ui-page-header-fixed.ui-page-footer-fixed.ui-page-active#visite_client-86871 data-url: visite_client-86871

What can I do to find the reason for the error, or to prevent it? It looks like jQM is confused by its history. Is it possible to log it more precisely? Thank you very much.

I fixed it by adding data-history="false" to the popup itself. I'm not quite sure if there is a bug somewhere or not, but it works for me like this, so in case it helps someone...
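For reference, the data-history="false" fix from the answer above in markup form, plus the equivalent programmatic option of the jQuery Mobile popup widget (the element ID is illustrative):

```html
<!-- data-history="false" keeps the popup out of jQM's navigation history,
     so closing it does not trigger a hash/popstate-driven page change -->
<div data-role="popup" id="edit-dialog" data-history="false">
  ...
</div>

<script>
  // equivalent programmatic form, using the popup widget's history option
  $("#edit-dialog").popup({ history: false });
</script>
```

With history disabled, opening the popup no longer appends &ui-state=dialog to the URL, which is the hash change visible in the logs above.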
Use a prebuilt model to extract info from invoices or receipts in Microsoft SharePoint Syntex Prebuilt models are pretrained to recognize documents and the structured information in the documents. Instead of having to create a new custom model from scratch, you can iterate on an existing pretrained model to add specific fields that fit the needs of your organization. Currently, there are two prebuilt models available: invoice and receipt. The invoice prebuilt model analyzes and extracts key information from sales invoices. The API analyzes invoices in various formats and extracts key invoice information such as customer name, billing address, due date, and amount due. The receipt prebuilt model analyzes and extracts key information from sales receipts. The API analyzes printed and handwritten receipts and extracts key receipt information such as merchant name, merchant phone number, transaction date, tax, and transaction total. Additional prebuilt models will be available in future releases. Create a prebuilt model Follow these steps to create a prebuilt model to classify documents in SharePoint Syntex. From the Models page, select Create a model. On the Create a model panel, in the Name field, type the name of the model. In the Model type section, select one of the prebuilt models: - Invoice processing prebuilt - Receipt processing prebuilt If you want to create a traditional, untrained document understanding model instead of a prebuilt model, select Custom document understanding. If you want to change the content type or add a retention label, select Advanced settings. Sensitivity labels are not available for prebuilt models at this time. Select Create. The model will be saved in the Models library. Add a file to analyze On the Models page, in the Add a file to analyze section, select Add file. On the Files to analyze the model page, select Add to find the file you want to use. 
On the Add a file from the training files library page, select the file, and then select Add. On the Files to analyze the model page, select Next. Select extractors for your model On the extractor details page, you'll see the document area on the right and the Extractors panel on the left. The Extractors panel shows the list of extractors that have been identified in the document. The entity fields that are highlighted in green in the document area are the items that were detected by the model when it analyzed the file. When you select an entity to extract, the highlighted field will change to blue. If you later decide not to include the entity, the highlighted field will change to gray. The highlights make it easier to see the current state of the extractors you have selected. You can use the scroll wheel on your mouse or the controls at the bottom of the document area to zoom in or out as needed to read the entity fields. Select an extractor entity You can select an extractor either from the document area or from the Extractors panel, depending on your preference. To select an extractor from the document area, select the entity field. To select an extractor from the Extractors panel, select the checkbox to the right of the entity name. When you select an extractor, a Select extractor? box is displayed in the document area. The box shows the extractor name, the original value, and the option to select it as an extractor. For certain data types such as numbers or dates, it will also show an extracted value. The original value is what is actually in the document. The extracted value is what will be written into the column in SharePoint. When the model is applied to a library, you can use column formatting to specify how you want it to look in the document. Continue to select additional extractors you want to use. You can also add other files to analyze for this model configuration.
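As an aside, column formatting is driven by a JSON definition. A minimal sketch for styling an extracted column might look like this (the styling shown is illustrative; the `$schema` URL is the standard SharePoint column-formatting schema):

```json
{
  "$schema": "https://developer.microsoft.com/json-schemas/sp/v2/column-formatting.schema.json",
  "elmType": "div",
  "txtContent": "@currentField",
  "style": {
    "font-weight": "bold"
  }
}
```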
Rename an extractor You can rename an extractor either from the model home page or from the Extractors panel. You might consider renaming selected extractors because these names will be used as the column names when the model is applied to the library. To rename an extractor from the model home page: In the Extractors section, select the extractor you want to rename, and then select Rename. On the Rename entity extractor panel, enter the new name of the extractor, and then select Rename. To rename an extractor from the Extractors panel: Select the extractor you want to rename, and then select Rename. In the Rename extractor box, enter the new name of the extractor, and then select Rename. Apply the model To save changes and return to the model home page, on the Extractors panel, select Save and exit. If you're ready to apply the model to a library, in the document area, select Next. On the Add to library panel, choose the library to which you want to add the model, and then select Add. Change the view in a document library There are multiple ways to view how you see the information in a SharePoint document library. You can change the view in your document library to fit your needs or preferences. To change the view on the library page, select the view dropdown menu to show the options, and then select the view you want to use. For example, if you select Tiles from the list, the page will display as shown. The Tiles view displays up to eight user-created fields. If there are fewer than eight, up to four system-generated fields are shown: Sensitivity (if available), Retention (if available), Content type, Modified date, Modified by, and Classification date. To edit any current view, on the view dropdown menu, select Edit current view. Submit and view feedback for
Approaches to a few thousand individually addressable widely dispersed LEDs I'm trying to come up with the best design for a project requiring between 1000 and 5000 individually-addressable LEDs, not in a strip, grid, ribbon, pegboard, etc. configuration. There is a lot of information about LED strips and arrays, but those don't seem directly applicable to my project. What I want to do is similar to a "pick to light" system seen in warehouses, if you're familiar with that. I will have a room with many storage locations (various sizes) in it. I will have software that keeps track of what is in those locations, and when a user searches for one (either to remove or add one), the LED above the bin will light. I need to support having an arbitrary set of the LEDs on at any given time -- it's not just one at a time (though realistically I'd expect well under 1/4 would ever be on at once). The bins will range from 3 inches to several feet apart, and the room will be on the order of 15 feet square, with storage locations spread throughout. Reliability, a nice clean installation and ease of setup are more important than cost. Of course, all things being equal, lower cost is better -- it's just not the top priority. This sounds like it might be a good application for a 1-Wire® bus, using something like a DS2413 to control every LED or two, at least if those parts were a bit cheaper. Otherwise, your best bet may be to attach each LED to a small microcontroller, and use a simple unidirectional communications scheme to send data to all the controllers. Using a unidirectional scheme would facilitate the construction of very simple repeaters (simple non-inverting buffers would probably suffice). Each microcontroller could use a small amount of flash or EEPROM to hold an address, so all devices could be individually addressed independently. 
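The unidirectional scheme described above can be sketched as a toy software model (this is an illustration of the addressing idea, not firmware; the class names and frame format are my own):

```python
# Toy model of the unidirectional bus: the master broadcasts
# (address, state) frames down the bus; every node sees every frame and
# reacts only when the address matches the one stored in its
# flash/EEPROM. No replies ever flow back, so repeaters stay trivial.

class LedNode:
    def __init__(self, address):
        self.address = address  # would be persisted in EEPROM on a real MCU
        self.lit = False

    def on_frame(self, address, state):
        if address == self.address:
            self.lit = state

class Bus:
    def __init__(self):
        self.nodes = []

    def attach(self, node):
        self.nodes.append(node)

    def broadcast(self, address, state):
        # Unidirectional fan-out: a repeater is just another listener
        # that re-broadcasts, since no data ever flows upstream.
        for node in self.nodes:
            node.on_frame(address, state)

bus = Bus()
nodes = [LedNode(a) for a in range(1000)]
for n in nodes:
    bus.attach(n)

bus.broadcast(17, True)   # "pick from bin 17"
bus.broadcast(17, False)  # item picked; turn it off
bus.broadcast(3, True)    # meanwhile light bin 3
```

Because each frame carries one address and the on/off state is latched per node, an arbitrary subset of the LEDs can be lit at any time, which matches the requirement above.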
The biggest difficulty might be configuring the network; that might be best facilitated by having each controller include a command to output its address by modulating its light output. An optical receiver attached to a portable computer could be used to visit each node, read its light pattern, and make note of its physical location. I like this. Even the most simple PICs should be able to handle an LED, a button, and a 3-4 wire communication bus. If 1 mcu per led is too expensive I am sure you could get it down to 1 mcu per 8 or so without too much sweat. The hardest part is still programming each to know its "address". Not impossible, just time consuming. Maybe a button to store the next address that comes down the bus or something, in combination with a barcode scanner. The advantages of the 1-Wire chip would be (1) they are factory-programmed with unique serial numbers; (2) they are supposedly ESD-hardened, to an extent greater than I would expect for most microcontrollers. Otherwise, a microcontroller would seem a really good approach. If one uses unidirectional communication, the wiring should be pretty easy, either with all units on a bus, or with each unit having an input and one or more outputs which 'tweaks' the input somehow. A fairly simple approach might be to have two daisy-chain outputs... ...one of which increments by 1 the address of relayed packets, and one of which shifts it left by 4 bits or so. Combine those and one should be able to set up a variety of interesting topologies. If the cost of a 'standard' communication bus (CAN, LIN, RS485-proprietary) and a uC at each location is no problem I would go for that. You'd probably need a two-stage bus, I can't think of any bus that supports 1k nodes, but 64 x 64 should be doable. For wiring I would select something that is available pre-assembled, ethernet patch cable might be a good choice. A low-cost RF unit would be another option (RFM70?
I just finished my C/C++ library), maybe with battery power (no cables!), the master could periodically scan all units and those who do not respond or detect a low battery are singled out for replacement. Or saturate the room with a broadcasted IR signal. In the spring I will do a course on system architecture, this is a nice problem to illustrate the consequences of various approaches! You could think of the whole room as one big 3d matrix of LEDs. There is nothing to say that the LED has to be within a few inches of the chip driving it. I assume you will be having, for example, a room with many racks. Each rack having many shelves. Each shelf having many bins. You could have a controller per shelf - be that a microcontroller, or a shift register, or whatever, connected to the LEDs above each bin. Those controllers/shift registers could be grouped into a per-rack controller. Finally, the per-rack controllers are grouped into the master room controller. The room controller picks the rack and tells it the shelf and bin. The rack then tells the shelf which bin. The shelf lights the LED. This could scale to multiple rooms by adding another layer to group the rooms together. As an afterthought, you could have a button next to each LED which the operator presses when he has picked the item, thus turning off the LED and informing the stock management system that the item has been picked. If you look through lapsed US patents with my name on them (total of 1) you will find out how my client proposed to do essentially exactly what you describe. Technically it worked well. Business wise he failed to sell it for whatever reason. I don't have the patent number to hand but can probably dig it out. Early 2000's AFAIR. Let me know if you find it. There are many other ways, but I used an inductive loop that power fed all the electronics and addressed each bin digitally. Bidirectional data on an inductive powering loop is a reasonably good trick.
I don't recall what was in the patent, but you'd probably need more than was there for as full a system as this, as the patent may mainly have covered inductive powering.
We've been seeing a strange bug in KVM guests hosted by a Debian jessie box (running 3.16.7-ckt11-1+deb8u5 on x86-64), Basically, we are getting random VirtIO errors inside our guests, resulting in stuff like this [4735406.568235] blk_update_request: I/O error, dev vda, sector 142339584 [4735406.572008] EXT4-fs warning (device dm-0): ext4_end_bio:317: I/O error -5 writing to inode 1184437 (offset 0 size 208896 starting block 17729472) [4735406.572008] Buffer I/O error on device dm-0, logical block 17729472 [ ... ] [4735406.572008] Buffer I/O error on device dm-0, logical block 17729481 [4735406.643486] blk_update_request: I/O error, dev vda, sector 142356480 [ ... ] [4735406.748456] blk_update_request: I/O error, dev vda, sector 38587480 [4735411.020309] Buffer I/O error on dev dm-0, logical block 12640808, lost sync page write [4735411.055184] Aborting journal on device dm-0-8. [4735411.056148] Buffer I/O error on dev dm-0, logical block 12615680, lost sync page write [4735411.057626] JBD2: Error -5 detected when updating journal superblock for dm-0-8. [4735411.057936] Buffer I/O error on dev dm-0, logical block 0, lost sync page write [4735411.057946] EXT4-fs error (device dm-0): ext4_journal_check_start:56: Detected aborted journal [4735411.057948] EXT4-fs (dm-0): Remounting filesystem read-only [4735411.057949] EXT4-fs (dm-0): previous I/O error to superblock detected (From an Ubuntu 15.04 guest, EXT4 on LVM2) Jan 06 03:39:11 titanium kernel: end_request: I/O error, dev vda, sector 1592467904 Jan 06 03:39:11 titanium kernel: EXT4-fs warning (device vda3): ext4_end_bio:317: I/O error -5 writing to inode 31169653 (offset 0 size 0 starting block 199058492) Jan 06 03:39:11 titanium kernel: Buffer I/O error on device vda3, logical block 198899256 Jan 06 03:39:12 titanium kernel: Aborting journal on device vda3-8. 
Jan 06 03:39:12 titanium kernel: Buffer I/O error on device vda3, logical block 99647488 (From a Debian jessie guest, EXT4 directly on a VirtIO-based block device) When this happens, it affects multiple guests on the hosts at the same time. Normally the errors are severe enough that the guests end up with a r/o file system, but we've seen a few hosts survive with a non-fatal I/O error. The host's dmesg has nothing interesting to see. We've seen this happen with quite heterogeneous guests: Debian 6, 7 and 8 (Debian kernels 2.6.32, 3.2 and 3.16) Ubuntu 14.09 and 15.04 (Ubuntu kernels) 32 bit and 64 bit installs. In short, we haven't seen a clear characteristic in any guest, other than the affected hosts being the ones with some sustained I/O load (build machines, cgit servers, PostgreSQL RDBMSs...). Most of the time, hosts that just sit there doing nothing with their disks are not affected. The host is a stock Debian jessie install that manages libvirt-based QEMU guests. All the guests have their block devices using virtio drivers, some of them on spinning media based on LSI RAID (was a 3ware card before, got replaced as we were very suspicious about it, but are getting the same results), and some of them based on PCIe SSD storage. We have 3 other hosts with a similar setup, except they run Debian wheezy (and honestly we're not too keen on upgrading them yet, just in case), and none of them has ever shown this kind of problem. We've been seeing this since last summer, and haven't found a pattern that tells us where these I/O error bugs are coming from. Google isn't revealing other people with a similar problem, and we're finding that quite surprising as our setup is quite basic. This has also been reported downstream at the Debian BTS as Bug#810121 (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=810121). Sorry for the noise. This is actually caused by os-prober opening the devices and causing corruption and mayhem.
See https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=788062 for details.
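One hedged mitigation on the host, assuming GRUB is what pulls in os-prober (a config sketch, not a tested fix; the option requires a recent enough grub2):

```sh
# /etc/default/grub on the KVM host: stop os-prober from opening
# guest block devices during update-grub runs
GRUB_DISABLE_OS_PROBER=true

# then regenerate the config:
#   update-grub
# or, more bluntly, remove the offending package:
#   apt-get remove os-prober
```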
package simple

type Extend map[string]interface{}

func (ex Extend) Int(key string) int {
    return int(ex.Float64(key))
}

func (ex Extend) Ints(key string) []int {
    iv := make([]int, 0)
    ex.sliceRange(key, func(i interface{}) {
        iv = append(iv, int(i.(float64)))
    })
    return iv
}

func (ex Extend) Int32(key string) int32 {
    return int32(ex.Float64(key))
}

func (ex Extend) Int32s(key string) []int32 {
    iv := make([]int32, 0)
    ex.sliceRange(key, func(i interface{}) {
        iv = append(iv, int32(i.(float64)))
    })
    return iv
}

func (ex Extend) Int64(key string) int64 {
    return int64(ex.Float64(key))
}

func (ex Extend) Int64s(key string) []int64 {
    iv := make([]int64, 0)
    ex.sliceRange(key, func(i interface{}) {
        iv = append(iv, int64(i.(float64)))
    })
    return iv
}

func (ex Extend) Float64(key string) float64 {
    if v, ok := ex[key]; ok {
        return v.(float64)
    }
    return 0
}

func (ex Extend) Float64s(key string) []float64 {
    fv := make([]float64, 0)
    ex.sliceRange(key, func(i interface{}) {
        fv = append(fv, i.(float64))
    })
    return fv
}

func (ex Extend) Extend(key string) Extend {
    if v, ok := ex[key]; ok {
        return v.(map[string]interface{})
    }
    return Extend{}
}

func (ex Extend) Extends(key string) []Extend {
    ev := make([]Extend, 0)
    ex.sliceRange(key, func(i interface{}) {
        ev = append(ev, i.(map[string]interface{}))
    })
    return ev
}

func (ex Extend) String(key string) string {
    if v, ok := ex[key]; ok {
        return v.(string)
    }
    return ""
}

func (ex Extend) Strings(key string) []string {
    sv := make([]string, 0)
    ex.sliceRange(key, func(i interface{}) {
        sv = append(sv, i.(string))
    })
    return sv
}

func (ex Extend) sliceRange(key string, fn func(interface{})) {
    if v, ok := ex[key]; ok {
        vs := v.([]interface{})
        for i := range vs {
            fn(vs[i])
        }
    }
}
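A sketch of how this type would typically be used. My assumption (not stated in the code itself) is that the maps come from encoding/json, which explains why every number is asserted as float64: Go's JSON decoder stores all JSON numbers as float64 when decoding into interface{}. The decode helper and sample JSON are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal re-declaration of the reviewed type plus the getters this
// example needs, so the snippet compiles on its own.
type Extend map[string]interface{}

func (ex Extend) Float64(key string) float64 {
	if v, ok := ex[key]; ok {
		return v.(float64)
	}
	return 0
}

func (ex Extend) Int(key string) int { return int(ex.Float64(key)) }

func (ex Extend) String(key string) string {
	if v, ok := ex[key]; ok {
		return v.(string)
	}
	return ""
}

// decode unmarshals arbitrary JSON into an Extend.
func decode(data []byte) Extend {
	var ex Extend
	json.Unmarshal(data, &ex)
	return ex
}

func main() {
	ex := decode([]byte(`{"id": 42, "name": "widget"}`))
	fmt.Println(ex.Int("id"), ex.String("name")) // 42 widget
}
```

Note the main review risk: the unchecked type assertions (e.g. calling String on a numeric field) panic on a type mismatch, so callers must trust the shape of the input.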
The upcoming “prefers-reduced-data” media query will make your site more accessible in the “more people can now enter the building” meaning of accessibility. In this recording of my talk given at Shortstack conference you will learn strategies to start implementing this feature now and make your site available to even more people. Prefers-contrast: forced is a mistake CSS Media Queries Level 5 is coming and though it's still heavily in progress, there is a particular new option that feels like a mistake in the making to me: prefers-contrast: forced. I'll explain why I feel that way in this article. 24 Feb 2021: prefers-contrast: forced has been removed from the specification! Read the Update at the end of this article. Sometime later in 2021: They snuck it back into the specification as prefers-contrast: custom, which is even less descriptive. There is an existing media query, forced-colors: active, supported on Windows by Edge (and in IE as -ms-high-contrast), that evaluates to true if Windows is in High Contrast mode. If you are unfamiliar with High Contrast mode, this is what it does in short: it overwrites all colors to a limited set of user-selected colors. It also adds backgrounds (“backplates”) to any text that sits on top of images. This can help immensely with readability, and the media query will help you style additional elements (like SVG icons) to fit with the design, and also to selectively turn forced colors off, for example when color in your UI is significant. Though it is called “High Contrast”, the key here is that it's user-selected colors. There are a few built-in themes that are all high contrast, but the same feature can be used to dim all the colors for people with photophobia or blue light sensitivity. There is nothing forcing (ahem) the end result to be high contrast. A quick primer: prefers-contrast is used to indicate if users prefer low contrast.
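Returning to forced-colors for a moment, the author-side response described above might be sketched like this (the class name is illustrative; forced-color-adjust is the real opt-out property):

```css
/* When the user's forced colors are active, opt this swatch out of
   color replacement, because its color is meaningful content. */
@media (forced-colors: active) {
  .color-swatch {
    forced-color-adjust: none;
  }
}
```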
Contrast in this context is the color contrast between adjacent colors (for example, text versus background colors). There is no browser support yet, though Chrome, Edge and Firefox all have it internally already. For a larger overview check out my article Beyond screen sizes: responsive design in 2020. Why would someone prefer low contrast? You, the reader, probably dim your screen at night, or use a blue light filter. Those both lower contrast. There are also people that dim the brightness of their screen to prevent triggering migraines as well as the photophobia and blue light sensitivity I mentioned earlier. This feature is not set in stone yet. The low value might be renamed, and high might be split into two values: high (as supported on Windows) and increased (as supported on macOS), where increased will match for high as well. One of the proposals is to have a new value, forced, which will make prefers-contrast: forced behave the same as forced-colors: active. Here are the reasons I found for this: - The vast majority of forced-colors users use it to force high contrast. (Though not all, as mentioned.) - Authors (that's us web devs) would not have to add support for both prefers-contrast and forced-colors, which they might not do. Having just to support prefers-contrast puts less of a burden on us. - You can also check for just prefers-contrast without a value, which will match both the high values and also the forced value. I do not understand the value of this. Surely increasing contrast for someone that wants low contrast, or decreasing contrast for someone that wants high contrast is not a good thing. I feel like I'm missing something here. Regarding that last point, in reading the discussions on the GitHub issues about this I noticed there might be some confusion about “contrast”.
Some people see it strictly as the color contrast, which is what the current working draft says as well, whereas other people see “contrast” in the context of “stuff on the screen”, and argue that removing stuff from the screen (complexity) is good in both the high and low contrast preferences. If prefers-contrast is not about color contrast, the argument about removing complexity makes sense, but as it stands the Working Draft explicitly mentions color contrast. The issues I have with the forced value exist on both sides of the colon. My issue with the media feature: The prefers-* prefix for media queries to me indicates a media query that expects me to do something about it. This is in contrast to other media queries, that are a signal that something already is. For example: the width media query is something that already is. I can not change the width of the viewport, I just have to take it and respond to it. With prefers-*, however, nothing has happened yet. It's my responsibility as an author to make sure something will happen, if I choose to respond to this user preference. forced as a value, which indicates something has already changed (literally all the colors), breaks this user preference convention. This makes it harder for authors to learn CSS and anticipate what's going to happen. (No, having to learn a single exception is not a problem. But it does mean that the convention in general no longer matters, and that is a problem.) My issue with the value: As noted in the Microsoft article linked above, High Contrast mode is a misnomer. forced-colors maybe says something about wanting high contrast for most users (citation needed) but this is not an absolute truth. Pretending forced colors are always high contrast by conflating it with prefers-contrast does a disservice to anyone using forced colors to help with for example photophobia or blue light sensitivity.
This will lead to authors using prefers-contrast: forced (and prefers-contrast without value) as a blunt weapon to force high contrast upon users that have configured their entire OS to be low contrast. This seems like adding insult to injury with the justification that this is easier for authors. In the situation where a user has configured their forced colors to be low-contrast sepia tones (for reduced blue light), the last thing you want to do is force high contrast. There is also the issue of what a CSS author should do with forced colors. The examples given are about making sure parts of your page that haven't already been changed are made to adhere to the forced colors, and for parts of the page where the original colors are important to be reverted. On the other hand, for users that want high contrast you will probably want to add clearer borders and other delineations, remove things like gradients and box shadows (something that forced-colors already does anyway) and for low contrast you'd probably want to dim full white and lighten full black colors, for example. forced may match prefers-contrast: high or prefers-contrast: low, but it doesn't have to. It should be noted that the spec indicates that, when programmatically determinable, forced colors should also resolve prefers-contrast: high or prefers-contrast: low to true (although the likelihood of browsers implementing this is probably low). More importantly though, what are you going to do with regards to the contrast if all the colors have already changed to the preferred contrast levels of your user? What happens for people that naively check for prefers-contrast: forced or indeed prefers-contrast? Are they not far more likely to do the wrong thing? The specification calls out to only use this to remove visual clutter, sure. Most authors never read the specification though, but will take the media query at face value. As an author, I do not see this hypothetical benefit.
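To make the contrast-specific author work above concrete, a hedged sketch using the draft's high/low values (the selectors and color values are illustrative, and the value names may still change in the spec):

```css
/* Users asking for more contrast: clearer delineation, no soft effects. */
@media (prefers-contrast: high) {
  .card {
    border: 2px solid;
    box-shadow: none;
    background-image: none;
  }
}

/* Users asking for less contrast: soften pure white-on-black extremes. */
@media (prefers-contrast: low) {
  body {
    background: #e8e8e8;
    color: #222;
  }
}
```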
The issue of authors choosing to implement just some user preferences is a relevant one, but if you take a look at what authors need to do in response to user preferences, finding a combined value for prefers-reduced-transparency feels like a bigger benefit. Existing duplication with forced-colors is not going away. It's implemented and in use and has exactly the right semantics for what it needs to do: Give authors the indication that colors have already changed and now we need to make sure the parts of our site that need to be original colors can be adapted, as well as adapting other parts of our site to the forced colors being used. Contrast does not come into play for what authors need to do to respond to forced-colors. The fact that for the majority of cases using forced colors leads to increased contrast has no effect on what authors do in their CSS stylesheet. prefers-contrast: forced breaks the prefers-* convention, the forced value doesn't actually have to say anything about contrast, there are no significant contrast-related things an author needs to do to respond to forced colors and there is already an implemented alternative with the right semantics. In short, I think prefers-contrast: forced is a mistake and I think it will end up on the incomplete list of CSS mistakes at some point in the future. A new media feature: While I'm not sure that it's wise to take a bunch of preferences and then say they all prefer less visual clutter, given that the spec writers have this intent I think a more explicit name is better.
Instead of overloading the meaning of prefers-contrast I would propose a new boolean-only CSS Media feature called prefers-reduced-complexity that would be true if any of the following is true:
- prefers-contrast has any value other than no-preference
- prefers-reduced-transparency is set to reduce
- forced-colors is set to active
This could then be the “simple” media feature to check for that matches all situations where reducing onscreen complexity is probably a good idea, and we can use the specific media features for their specific purposes. Update 24 Feb 2021 The CSS working group voted to remove prefers-contrast: forced from the specification, a result I'm very pleased about. You can re-read the proceedings here: [mediaqueries-5] duplication of forced-colors: active and prefers-contrast: forced. The discussion featured this article (!) and there is a new discussion on a prefers-reduced-complexity-style media query here: [mediaqueries-5] Remove (prefers-contrast) as a boolean, and replaced by a new color reduction media query. I'll be keeping an eye on this. Thanks Hidde and Eric for providing feedback on this article.
The procedure for online editing is very similar in the PLC-5, SLC-500, and ControlLogix. There are five basic steps in performing an online edit: 1) Start Rung Edits (Place the rung into edit mode) 2) Make your changes to the rung 3) Accept Edits (Send the edits to the processor) 4) Test Edits (Ensure your edits work how you want them to work) 5) Assemble Edits (Removes the old rung, and remove the editing markers) Note: The ControlLogix processor allows you to accept edits to a single rung or all rungs in the program… Modern versions of RSLogix 5000 also have a “Finalize” option which allows you to Accept, Test, and Assemble all in one step!) These steps are simple, but there are a few rules: • You cannot change the data type of existing tags. If you create a new tag with the wrong data type, you must delete the tag, and declare it again. • You cannot make an on line edit if the key switch is in Run Mode. • You do not need to perform an on line edit to directly change a value in the data table such as the preset of a timer or counter. • If the processor is in program mode, you do not need to test and assemble after accepting. • If the processor is in program mode, and a rung is deleted, there is no warning. Note: These may vary depending on which processor you are using, and the processor version. Let’s walk through the 5 step procedure: Look at the rung below. Our objective is to transfer control of the output to LocalSwitch.6. If you click on bit LocalSwitch.7 and attempt to make a change, nothing happens. Step 1) Start Rung Edits The first step is to put the rung into edit mode. There are several ways this can be done: • Double click the rung number • Right click the rung number and start rung edits • From ‘Logic’ on the menu bar, click On line Edits, then start pending rung edits • Click the start rung edit icon in the on line editing tool bar just above the ladder view Notice that RSLogix made a copy of the rung for us to work with. 
By looking at the power rails, you can see the bottom rung is being executed by the processor, and the top rung is the one you need to make edits to. You will also notice the e (edit) or i (insert) and r (replace) in the margin are lower case. This means the edits are not in the processor yet. If you are adding new logic instead of modifying existing logic, this is the step where you add a new rung. Step 2) Make Changes Now that the rung is in edit mode, changes can be made. If you added a new rung in step #1, this is where you need to add your logic to the new rung. Be careful not to add any logic that will fault the processor or cause damage to personnel or equipment. Notice the i (insert) and r (replace) zones are in lower case. This means the changes are in RAM only, and have not been sent to the processor. In this example, bit 7 is being changed to bit 6 on the input. Step 3) Accept Edits Now that your rung is set up as you need it, it's time to send the edits to the processor. You can accept pending rung edits (This would just accept the rung you have selected), or you can accept pending program edits (This would accept all the edits in the current program). There are several ways to perform the next three steps. • Right click the rung number, and accept edits • Click Logic | On line Edits | Accept (rung or program edits) from the menu bar • Click one of the Accept Edits icons in the on line editing tool bar as shown below Notice in the margin rung 1 is marked for insertion, and rung 2 is marked for removal. The I's and R's are capitalized because the edits are now in the processor. Look at the power rails. You can see the old rung is still being executed by the processor. You will also see that pending edits exist by looking at the on line tool bar. Step 4) Test Edits When you test edits, the new or modified rungs will become active. The old rungs will be left in the processor until we are sure our new rungs are working properly.
Be aware that if you change an output address, there might no longer be logic writing to that address. This means that you could abandon a bit in the ON state. You can test your edits by doing one of the following actions: 1) Right click the rung number 2) Choose Logic | On line Edits | Test accepted program edits from the menu bar 3) Click the Test icon in the on line edit tool bar above your logic window. If you are modifying an input type address you should also be careful. If the rung was previously true, you may want to make sure your new logic is also going to be true at the moment you accept, or the output may shut off. Let's test the edits, and you will notice the new rung(s) are active. If the edits do not work the way you anticipated, you can un-test to revert to the old rung while you make other changes to the new rung. Notice the power rails: Step 5) Assemble Edits If your logic is working properly, go ahead and assemble the edits. Assembling removes the old rung, and the edit zone markers. After Assembling, you may want to save your work to the hard drive. You can assemble by using one of the following methods: 1) Right click the rung number, and choose accept edits (if available in your version) 2) Click Logic | On line Edits | Assemble accepted program edits from the menu bar. 3) Click the Assemble Edits icon in the on line edits tool bar. Notice the Logic now appears to be normal:
[Python-ideas] Python Float Update drekin at gmail.com Thu Jun 4 14:52:10 CEST 2015 Thank you very much for a detailed explanation. On Wed, Jun 3, 2015 at 10:17 PM, Andrew Barnert <abarnert at yahoo.com> wrote: > On Jun 3, 2015, at 07:29, drekin at gmail.com wrote: > > Stephen J. Turnbull writes: > >> Nick Coghlan writes: > >>> the main concern I have with [a FloatLiteral that carries the > >>> original repr around] is that we'd be trading the status quo for a > >>> situation where "Decimal(1.3)" and "Decimal(13/10)" gave different > >>> answers. > >> Yeah, and that kills the deal for me. Either Decimal is the default > >> representation for non-integers, or this is a no-go. And that isn't > >> going to happen. > > What if also 13/10 yielded a fraction? > That was raised near the start of the thread. In fact, I think the initial > proposal was that 13/10 evaluated to Fraction(13, 10) and 1.2 evaluated to > something like Fraction(12, 10). > > Anyway, what are the objections to integer division returning a > fraction? They are coerced to floats when mixed with them. > As mentioned earlier in the thread, the language that inspired Python, > ABC, used exactly this design: computations were kept as exact rationals > until you mixed them with floats or called irrational functions like root. > So it's not likely Guido didn't think of this possibility; he deliberately > chose not to do things this way. He even wrote about this a few years ago; > search for "integer division" on his Python-history blog. > So, what are the problems? > When you stay with exact rationals through a long series of computations, > the result can grow to be huge in memory, and processing time. (I'm > ignoring the fact that CPython doesn't even have a fast fraction > implementation, because one could be added easily. It's still going to be > orders of magnitude slower to add two fractions with gigantic denominators > than to add the equivalent floats or decimals.) 
> Plus, it's not always obvious when you've lost exactness. For example, > exponentiation between rationals is exact only if the power simplifies to a > whole fraction (and hasn't itself become a float somewhere along the way). > Since the fractions module doesn't have IEEE-style flags for > inexactness/rounding, it's harder to notice when this happens. > Except in very trivial cases, the repr would be much less human-readable > and -debuggable, not more. (Or do you find 1728829813 / 2317409 easier to > understand than 7460.181958816937?) > Fractions and Decimals can't be mixed or interconverted directly. > There are definitely cases where a rational type is the right thing to use > (it wouldn't be in the stdlib otherwise), but I think they're less common > than the cases where a floating-point type (whether binary or decimal) is > the right thing to use. (And even many cases where you think you want > rationals, what you actually want is SymPy-style symbolic > computation--which can give you exact results for things with roots or sins > or whatever as long as they cancel out in the end.)
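The inexactness and readability points raised in the thread are easy to demonstrate with the stdlib decimal and fractions modules (a small sketch; the loop bound is arbitrary):

```python
from decimal import Decimal
from fractions import Fraction

# Decimal(1.3) inherits the binary float's rounding error...
print(Decimal(1.3))               # 1.3000000000000000444089209850062616169452667236328125
# ...while exact decimal division does not:
print(Decimal(13) / Decimal(10))  # 1.3

# Exact rationals stay exact, but denominators grow quickly
# and the repr becomes hard to read:
total = Fraction(0)
for n in range(1, 30):
    total += Fraction(1, n)       # partial harmonic sum, kept exact
print(total)                      # a large numerator/denominator pair
print(float(total))               # readable again, but now inexact
```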
[PATCH] Introduce the authorizer protocol giuliocamuffo at gmail.com Tue Nov 24 07:16:40 PST 2015 This new extension is used by clients wanting to execute privileged actions such as taking a screenshot. The usual way of granting special privileges to apps is to fork and exec them in the compositor, and then check whether the client is the known one when it binds the restricted global interface. This works but is quite limited, as it doesn't allow the compositor to ask the user if the app is trusted, because it can't wait for the answer in the bind function as that would block the compositor. This new protocol instead allows the answer to come after some time without blocking the compositor or the client. For reference, I've implemented this in orbital and it's used by the screenshooter tool. The name is different but it works exactly the same as this one. One thing missing is how to revoke an authorization, if we even want/need it? Makefile.am | 1 + unstable/authorizer/authorizer-unstable-v1.xml | 90 ++++++++++++++++++++++++++ 2 files changed, 91 insertions(+) create mode 100644 unstable/authorizer/authorizer-unstable-v1.xml diff --git a/Makefile.am b/Makefile.am index a32e977..bfe9a6a 100644 @@ -5,6 +5,7 @@ unstable_protocols = \ nobase_dist_pkgdata_DATA = \ diff --git a/unstable/authorizer/authorizer-unstable-v1.xml b/unstable/authorizer/authorizer-unstable-v1.xml new file mode 100644 @@ -0,0 +1,90 @@ +<?xml version="1.0" encoding="UTF-8"?> + Copyright © 2015 Giulio Camuffo. + Permission to use, copy, modify, distribute, and sell this + software and its documentation for any purpose is hereby granted + without fee, provided that the above copyright notice appear in + all copies and that both that copyright notice and this permission + notice appear in supporting documentation, and that the name of + the copyright holders not be used in advertising or publicity + pertaining to distribution of the software without specific, + written prior permission.
The copyright holders make no + representations about the suitability of this software for any + purpose. It is provided "as is" without express or implied + warranty. + THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS + SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND + FITNESS, IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY + SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES + WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN + AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, + ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF + THIS SOFTWARE. + <interface name="zwp_authorizer_v1" version="1"> + <description summary="authorize clients to use certain interfaces"> + This global interface allows clients to ask the compositor the + authorization to bind certain restricted global interfaces. + Any client that aims to bind restricted interfaces should first + request the authorization by using this interface. Failing to do + so will result in the compositor sending a protocol error to the + client when it binds the restricted interface. + The list of restricted interfaces is compositor dependent, but must + not include the core interfaces defined in wayland.xml. However, if + an authorization request is done for a non-restricted interface the + compositor must reply with a grant. + <request name="destroy" type="destructor"> + <description summary="destroy this zwp_authorizer_v1 object"> + Any currently ongoing authorization request will outlive this object. + <request name="authorize"> + <description summary="authorize a global interface"> + The authorize request allows the client to ask the compositor the + authorization to bind a restricted global interface. The newly + created zwp_authorizer_feedback_v1 will be invalid after the + compositor sends either the granted or denied event so the client + is expected to destroy it immediately after.
+ <arg name="id" type="new_id" interface="zwp_authorizer_feedback_v1" summary="the new feedback object"/> + <arg name="global" type="string" summary="the global interface the client wants to bind"/> + <interface name="zwp_authorizer_feedback_v1" version="1"> + <description summary="feedback for an authorization request"> + A zwp_authorizer_feedback_v1 object is created by requesting + an authorization with the zwp_authorizer_v1.authorize request. + The compositor will send either the granted or denied event based + on the system and user configuration. How the authorization process + works is compositor specific, but a compositor is allowed to ask + for user input, so the client must not assume the reply will come + <event name="granted"> + <description summary="the authorization was granted"> + The authorization was granted. The client can now bind the restricted + <event name="denied"> + <description summary="the authorization was denied"> + The authorization was denied. The client is not allowed to bind the + restricted interface and trying to do so will trigger a protocol + error killing the client.
Did you hear the one about the dev who broke nodejs? Once upon a time there was a nodejs developer. He built stuff for the community that many people used. Then a side project, “kik”, got owned by a dick-company called Kik who refused to not be dicks. Then the developer pulled down a bunch of packages that ended up breaking the world. And everybody lived happily ever after. Yes, I read the full email exchange. Azer was a dick, I can’t deny that, but Kik were bigger dicks for forcing the issue in the first place. But really, the bigger issue here is the way devs have come to “write code”, especially in the nodejs world. In a dynamic language where deployment requires the entire app’s source code, including the source code of every dependency, it’s surprising how often a dev will prefer to add one more dependency rather than repeat code that’s been done before. Even when that code is absurdly simple. So as far as I’m concerned, all you node zealots are insane. Adding dependencies to a project to avoid writing a couple hours of local code you can easily see and debug? Yeah, it saves time at first… until you find a bug in upstream. Or you need to do things just a bit differently than the package. Or upstream is lost/destroyed/hacked. Or you want your developers to actually understand WHY left padding has to be done a certain way so they can generalize on that information. EVERY dependency adds risk. Relying on a package you don’t control for trivial functionality is, at best, hopelessly lazy. But then there’s another issue: why are people having such problems in the first place? When we used Perl modules at Musician’s Friend, we had a local copy of modules we considered to be “gold” (production-ready). Our production application used these modules and local code. Nothing else was allowed. Deploys didn’t happen if a new module didn’t clear QA and get copied up to our local cpan repository first. In my mind, this is the only way to go for mission-critical work. 
If you’re going to depend on something external, you better the fuck have a local copy of that external dependency, and all of its external dependencies. And you better the fuck make sure that shit is safe and secure, not just stuck on some dev’s hard drive somewhere. And you BETTER THE FUCK have it set up in such a way that your production systems NEVER depend on the external code. They point to your local copy. You don’t upgrade that shit in production until it’s tested locally. This situation should have been a minor inconvenience, not something that broke anything important. And once people figured out what was up, they should have copied the “left pad” from their gold repo, grumbled a bit, and called it a day.
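For the record, the "trivial functionality" at the center of this mess really is a couple of lines. A sketch (in Python rather than JavaScript, just to show the logic; the names are my own):

```python
def left_pad(value, width, fill=' '):
    """Pad value on the left with fill characters until it is width long."""
    s = str(value)
    return fill * max(0, width - len(s)) + s

print(left_pad(5, 3, '0'))      # 005
print(left_pad('hello', 3))     # hello  (already wide enough, unchanged)
```

If a bug ever turns up in this, it is code you can see, step through, and fix yourself.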
Counting word frequencies with Python (Programming Historian). A frequent exercise is to write a Python program that counts the frequency of each word in a string or in a file; it is also the basic first step toward learning MapReduce and big data. The standard approach uses a dictionary object that stores each word as the key and its count as the corresponding value. (Python program to count the frequency of words in a file, by Alberto Powers, April 29, 2019.)

The basic algorithm: accept the file name from the user and open the file. The interpreter searches for the file in the same directory as the program, so save the input file (typically a novel, essay, or other text) alongside your script. Read each line, split the line into a list of words with the split() function, and for each word either create a new dictionary key or increment its existing value by 1. Before counting, it usually helps to lowercase the text and strip punctuation so that "The" and "the" are counted together. To count a single word, compare the word provided by the user against each word in the list and increment a counter when they are equal; note, however, that the list method count(obj), which returns how many times obj occurs in a list, is very inefficient for building a full frequency table because it rescans the entire list for every word. My advice is also not to override existing variables with new values; it makes code harder to understand.

The detail many people miss is the dictionary get() method. Square-bracket access, freq[word], raises an error if the word has not been seen yet, while freq.get(word, 0) supplies a default value of 0, so freq[word] = freq.get(word, 0) + 1 handles both new and repeated words. Even more useful is the collections module's Counter: it is basically a dictionary specialised to do exactly what you want, counting instances of a key value in an iterable, and it can sort the results and report the top N words (the top 100 is a common default), which is exactly what you need for making a pretty Wordle-like word cloud from the data. (I assumed there would be some existing tool or code, and Roger Howard said NLTK's FreqDist was easy as pie.) The same exercise appears in PySpark word-count examples, where we count the occurrences of unique words in each text line.

On the command line, a different approach transforms the content of the input file with the tr command. Note that grep -c alone counts the number of lines that contain the matching word instead of the number of total matches; the -o option tells grep to output each match on its own line, and piping into wc -l then counts the lines, which is how the total number of matching words is deduced. You also need to strip spaces off your search words, or the match will fail. Remember that single-letter words are still words: in biological research, for instance, word counts legitimately include tokens composed of a single letter. More generally, frequency distributions are constructed by running a number of experiments and incrementing the count for a sample every time it is an outcome of an experiment; counting the frequency of specific words in a list can provide illustrative data, for example finding the most-used words in a text (the top 5 words in a file, or a sorted list of (count, word) tuples) and how often they are used.
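The two approaches described above, the freq.get(word, 0) idiom and collections.Counter, can be sketched as follows (the sample string is my own; a real run would read a file and pass its contents in):

```python
from collections import Counter
import re

def word_frequencies(text, top_n=None):
    """Return (word, count) pairs, most frequent first."""
    words = re.findall(r"[a-z']+", text.lower())  # lowercase, strip punctuation
    return Counter(words).most_common(top_n)

sample = "the quick brown fox jumps over the lazy dog the end"
print(word_frequencies(sample, top_n=3))
# For a file, read it first:
#   with open("novel.txt") as f:
#       print(word_frequencies(f.read(), top_n=100))

# The dict.get(word, 0) idiom, without Counter:
freq = {}
for word in sample.split():
    freq[word] = freq.get(word, 0) + 1
print(freq["the"])   # 3
```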
A significant proportion of students fail to submit their homework on time because they fail to complete it. But what causes this failure among students? Some students are plainly lazy and therefore do not learn enough in class to handle their assignments. A significant proportion, however, fail to do their homework as a result of homework stress. Causes of Homework Stress Homework stress stems from a myriad of sources, and these are; - Teachers assign students an enormous homework load. It piles a lot of pressure on students, who are already exhausted from learning throughout the day in school. Some teachers fail to realize that the amount of homework is too vast for a student to complete within the required time. - Students take several classes in any particular grade, and being assigned homework in each, with similar submission deadlines, compounds the matter further. Such a scenario forces students to rush assignments, which can eventually overwhelm them and result in stress. Stress, in turn, affects a student's ability to concentrate on and complete even the manageable assignments. - Some students fail to properly understand the material they are taught in class, and as a result must re-read the content before starting their assignments. They end up wasting a lot of valuable time and, as a consequence, fail to complete the homework on time. This can lead to a loss of morale. - Students also fail to manage their time correctly, and this ultimately affects their projects. Poor time management can quickly escalate into anxiety and stress, which hampers their progress on assignments. Many students socialize or play games far more than they should, which eats into their supposed homework time. - Some students also procrastinate on their assignments until the deadlines come calling.
They can’t do much about the task in the limited time left; they have to do the homework anyway, yet they know they will get bad grades as a consequence, which leads to depression. It is impossible to manage stress without first recognizing that you are stressed. To do this, watch for the following. - A lack of interest in doing or completing your assignments - Constant worry about finishing your homework - Lack of sleep caused by the thought of not completing your homework on time - Unhappiness, and hiding school results from family and friends Management of Homework Stress It can be tough to avoid stress altogether when you get swamped with lots of homework. But the best part is that you can manage the stress and continue your productivity streak with your assignments. So what can you do? - Ensure that you get enough sleep at night, as sleep improves your mental state and thereby boosts your homework success. - Begin your homework as early as you can so that you are not constrained by time in completing the assignment. - Organize your work by splitting the homework into many small portions, dedicating your time to finishing one portion before starting the next. - Stay attentive in class. - Keep a social life by mingling with your family and friends when you can, to avoid the stress that comes with isolation. These guidelines will help you manage your homework stress levels and ensure your productivity stays up. Everyone needs better grades, and you are no exception, so embrace the tips.
from collections import namedtuple, OrderedDict

from lxml import etree
import urlparse

Checksums = namedtuple('Checksums', ('md5', 'sha1'))
IndexRow = namedtuple('IndexRow', ('download_url', 'checksums'))


def _all_internal_links_and_directories(html_root):
    return html_root.xpath(
        ".//a["
        " @rel = 'internal'"
        " or "
        " ("
        " substring(@href, 1, string-length(text())) = text()"
        " and "
        " substring(@href, string-length(@href)) = '/'"
        " )"
        "]")


def parse(
        base_url, package_path, html_str, strict_html=True,
        find_links_fn=_all_internal_links_and_directories):
    html_root = _parse_html(html_str, strict_html)
    rows = _parse_internal_links(
        base_url, html_root, package_path, find_links_fn)
    return rows


def _parse_internal_links(base_url, html_root, package_path, find_links_fn):
    rows = OrderedDict()
    for link in find_links_fn(html_root):
        href = link.attrib['href']
        if not _is_ascii(href):
            continue
        if href.endswith('/'):
            if not _is_absolute_url(href):
                rows[href] = None
        else:
            rows[link.text.strip()] = IndexRow(
                download_url=_make_url_absolute(base_url, package_path, href),
                checksums=_determine_checksums(href))
    return rows


def _is_ascii(s):
    try:
        s.encode('ascii')
        return True
    except UnicodeEncodeError:
        return False


def _parse_html(html_str, strict_html):
    parser = etree.HTMLParser(recover=not strict_html)
    html_root = etree.fromstring(html_str, parser)
    return html_root


def _is_absolute_url(url):
    return bool(urlparse.urlparse(url).scheme)


def _make_url_absolute(base_url, package_path, url):
    if _is_absolute_url(url):
        return url
    return '{}/{}/{}'.format(base_url, package_path, url)


def _determine_checksums(href):
    split_url = urlparse.urlsplit(href)
    fragment = split_url.fragment
    fragment_dict = dict((fragment.split('='),)) if fragment else {}
    checksums = Checksums(
        md5=fragment_dict.get('md5', None),
        sha1=fragment_dict.get('sha1', None))
    return checksums
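As a usage note, the checksum handling in _determine_checksums simply parses the URL fragment of a download link (e.g. "#md5=..."). The module above targets Python 2 (it imports urlparse); a minimal Python 3 sketch of that one step, with an illustrative href, might look like:

```python
from urllib.parse import urlsplit

def determine_checksums(href):
    # Parse a PyPI-style "#md5=<hex>" or "#sha1=<hex>" URL fragment.
    fragment = urlsplit(href).fragment
    fragment_dict = dict((fragment.split('='),)) if fragment else {}
    return fragment_dict.get('md5'), fragment_dict.get('sha1')

print(determine_checksums('pkg-1.0.tar.gz#md5=abc123'))  # ('abc123', None)
print(determine_checksums('pkg-1.0.tar.gz'))             # (None, None)
```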
Inno setup supports all windows versions and allows you to create an exe file that contains all of your applications files, which will be displayed in an interface with a great design. Patrick flood posted a comment on discussion open discussion. Get project updates, sponsored content from our select partners, and more. Inno setup helps you make your own software installer with this free software. The inno setup help file source code has been moved into the main inno setup source code repository. Ive created the installer, and installed my application successfully. Inno setup is a free installer for windows programs by jordan russell and martijn laan. Ondersteuning voor zip en bzip2 compressie is aanwezig daarnaast kan. Inno setup is a featurepacked installation builder. So if you wish to express your appreciation for the time and resources the authors have expended developing and supporting inno script studio, and also help defer the costs of running the web site and continued development, we do accept and. How to create a software installer using the inno setup. Inno setup is a free installer for windows programs. This extension program for inno setup allows you to easily download files from the internet during the installation process, or. Inno setup is a simple to use application that implements executable in order to run your own software under windows. Before you will download the program, make sure that you not have application inno setup on your device installed yet this will allow you to save some space on your disk. Setup section directive closeapplicationsfilter was partially case sensitive. The value of appid uniquely identifies this application. Inno script generator is a tool that helps you to construct and maintain installation scripts for jordan russells inno setup. Inno setup is an easy to use software solution for creating installers. If you are a software programmer, this program is a must have. 
Update application with inno setup installer stack overflow. Jordan russell free inno setup is a free installer for windows programs. Inno setup is a popular program for making software installations. I am having an issue though when it comes to updating. Inno setup is a tool to create installers for microsoft windows applications. Create an installation that is an update or addon to an existing installation. Inno setup is one of the best free tools for creating application installers. Features include a wizard interface, creation of a single exe for easy online distribution, support for disk spanning, full uninstall capabilities, customizable setup types, integrated file compression, support for installing shared files and ocxs, and creation of start menu icons, ini entries, and registry entries. When you upload software to you get rewarded by points. Inno setup is a free software scriptdriven installation system created in delphi by jordan russell. Save inno setup custom form field values to an ini file. Inno script studio may be used free of charge, but as with all free software there are costs involved. I am learning how to use inno setup to create an installer for my project. Is it possible to have a casacde menu stile like innounp shell extractinfolist. Inno setup grew popular due to being free and open source. Issi users do not need to download or install any inno setup language packs. Unfortunately, there is no official unpacker the only method of getting the files out of the selfextracting executable is to run it. Vulnerability flexera installshield jrsoft inno setup dll. Could you provide a keygen with possible sourcecode. Innotools downloader is an addon dll which allows you to download files as part of your installation. Inno setup was added by nostradamus in apr 2009 and the latest update was made in mar 2020. Inno script generator helps you construct installation scripts for inno setup. 
Before you throw your money away on commercial installers that dont even do what you want, try out inno setup. I wondered and hoped you could do aspack by aspack software. Features include a wizard interface, creation of a single exe for easy online distribution, support for disk spanning, full uninstall capabilities, customizable setup types, integrated file compression, support for installing shared files and ocxs, and creation of start menu icons, ini entries, and. An important piece of any windows suppport application is an installer that is easy to use. Sign up the inno setup quickstart pack includes inno setup itself, an option to install the unofficial inno script studio script editor and an option to download and install official encryption support. La herramienta más util es istool, que es un editor de. Bovirus posted a comment on discussion open discussion. Inno setup tutorial software free download inno setup. This extension program for inno setup allows you to easily add repairmodifyremove options to your installed applications. Its possible to update the information on inno setup or report it as discontinued, duplicated or spam. This ensures that the uninstall of the wrapped product only shows inno setupread more. This video covers creating a specialized version where we place the software into users my. Vulnerability of flexera installshield, jrsoft inno setup. It is a super cool and free tool for making executable setup programs. I plan to release many versions of my software, i would like to change the inno setup installer interface if an older version of my application is already installed on the computer. Inno setup supports all windows versions and allows you to create an exe file that contains all of your applications files, which will be displayed in an interface with a. Get the software from the inno setup developer website. 
In view of the fact that the inno setup is in our database as a program to support or convert various file extensions, you will find here a inno setup download link. Inno setup recommendations i recommend that you use the verysilent and suppressmsgboxes as fixed parameters for the uninstall of the wrapped setup. For every field that is filled out correctly, points will be rewarded, some. Certainly the attached zip contains an installer, the. Inno setup is superior to installshield in almost every way, its lightweight, simple, and easy to use. If i use post install flag in run section, i am able to achieve it, but here i am giving an option for the user to decide whether to install or not. The software installer includes 14 files and is usually about 4. My driver installation will prompt the user to restart the machine. Please read the readme file included in the package for further instructions on how to use uninshs with your own application setups. This new version of inno script generator suported the newest versions of inno setup jordan russell, martijn laan and also istools bjornar henden. First introduced in 1997, inno setup today rivals and even surpasses many commercial installers in feature set and stability. I want to force user to install driver at the end of installation. In this section you will find the tools and code that i have made for the excellent free installer product, inno setup. Check video tutorials for how to create inno setup installers from scratch using various ides. The installation of a packaged software with innosetup is very simple, and removal is also supported. Ive then updated my version number within the script and recompiled however when i run the installer it seems to treat it as a new install still. 
So if you wish to express your appreciation for the time and resources the authors have expended developing and supporting Inno Script Studio, and also help defray the costs of running the web site and continued development, we do accept and appreciate donations.
Recently, the issue of security in networking has been receiving a huge amount of attention, and Cisco has been at the forefront of addressing it. SAFE is Cisco's Secure Blueprint for Enterprise Networks, the stated aim of which is to provide best-practice information for designing and implementing secure networks, based upon the products of Cisco and its partners. The SAFE methodology involves creating a layered approach to security, such that a failure at one layer does not compromise the whole network. Instead, it operates like a military 'defense in depth': it is expected that an enemy will be able to penetrate your defensive perimeter, but doing so takes time and effort. Multiple lines of defense slow an attacker down and give you more time to discover and stop them. Additionally, each line of defense can have its own countermeasures, in the hope that the attacker may not be skilled in all of them. One of the main features of this set of principles is that it defines a modular concept slightly different from the original core, distribution, and access layers. That is not to say that these original layers are no longer used in design; rather, SAFE offers an alternative. In practice, designers find both methods useful and may borrow features from each. The basis for the new modular design concept is shown in Figure 1.12. Figure 1.12: Enterprise Composite Module. This high-level diagram shows only three blocks; each block represents a different functional area, providing a modular understanding of the security issues. From our perspective, we need to focus in a little more on the detail, which is expanded in the main SAFE block diagram, shown in Figure 1.13. Figure 1.13: Enterprise SAFE block diagram. Figure 1.13 shows a much clearer breakout of the actual modules inside SAFE that need to be managed and secured.
Each module has its own threats and protection issues. It is not expected that every network would be built using all modules; rather, this provides a framework for understanding the security issues involved and isolating them. From the perspective of the Cisco CCNP training program, we need to focus in again, this time looking in a little more detail at the Campus Module, as shown in Figure 1.14. Figure 1.14: Enterprise Campus Module detailed diagram. Note that the Campus Module contains a number of smaller modules, each of which is associated with a specific function. Management Module: Designed to facilitate all management within the campus network as defined by the SAFE architecture. The Management Module must be separated from the managed devices and areas by a firewall, by separate VLANs, and by separate IP address and subnet allocation. Building Module: SAFE defines the Building Module as the part of the network that contains end-user workstations and devices plus the Layer 2 access points. Included in this are the Building Distribution Module and Building Access Module. Building Distribution Module: This module provides standard distribution-layer services to the building switches, including routing, access control, and, more recently, QoS (quality of service) support. Building Access Module: The Building Access Module defines the devices at the access layer, including Layer 2 switches, user workstations and, more recently, IP telephones. Core Module: This module follows the principles of the core part of the standard Cisco three-layer model, focusing on transporting large amounts of traffic both reliably and quickly. Server Module: The main goal of the Server Module is to provide end users and devices with access to the application services.
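As a rough sketch of the modular decomposition described above, the Campus Module and its sub-modules can be represented as a small nested data structure. The module names come from the text; the dictionary layout and the traversal helper are purely illustrative, not anything Cisco publishes.

```python
# Illustrative only: SAFE Enterprise Campus sub-modules from the text,
# keyed by module name, with the function each one covers.
campus_modules = {
    "Management": "management traffic, isolated by firewall, VLANs, addressing",
    "Building": {
        "Building Distribution": "routing, access control, QoS for building switches",
        "Building Access": "Layer 2 switches, user workstations, IP telephones",
    },
    "Core": "fast, reliable transport of large traffic volumes",
    "Server": "application services for end users and devices",
}

def list_modules(tree, prefix=""):
    """Flatten the nested module tree into dotted module names."""
    names = []
    for name, value in tree.items():
        full = f"{prefix}{name}"
        names.append(full)
        if isinstance(value, dict):
            names.extend(list_modules(value, full + "."))
    return names

print(list_modules(campus_modules))
```

A structure like this makes the "isolate threats per module" idea concrete: each key is a unit whose security issues can be analyzed independently.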
Should I edit an obsolete (and incorrect) answer? I came across this answer: https://stackoverflow.com/a/9664327/688958 In short, it states that text/javascript is obsolete and that application/javascript should be used instead. The answer was correct when RFC 4329 was current. However, in May 2022 that RFC was obsoleted by RFC 9239. The new RFC states that application/javascript is obsolete and that text/javascript shall be used instead. There is indeed another answer in that question that mentions this fact. However, the incorrect (and accepted) answer has a very high score (364). I believe that most people just assume that it has to be the correct answer; I would, too. Now, it turns out I have enough reputation on Stack Overflow to edit other people's answers. Should I edit the answer and mention the new RFC, or would that go against the philosophy/guidelines of Stack Overflow? I'm not an SME, so I'm intentionally not posting an answer; however, in an area I was an SME in, I would edit the answer to denote when and what it was correct for, and then link to the answer that denotes the current correct solution. Does this answer your question? How to handle historical, highly upvoted but completely incorrect answers @KarlKnechtel I think there's a distinction there between answers that used to be correct but don't conform to the new standard, and answers that were never correct.
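For what it's worth, the RFC 9239 recommendation is easy to apply in code. Here is a minimal stdlib sketch; the explicit `add_type` call is only needed on platforms whose MIME table still carries the older mapping.

```python
import mimetypes

# Register the type RFC 9239 standardizes for JavaScript; this overrides
# any stale application/javascript mapping the OS table might still carry.
mimetypes.add_type("text/javascript", ".js")

print(mimetypes.guess_type("app.js")[0])  # → text/javascript
```

A web server built on this table would then emit `Content-Type: text/javascript` for `.js` files, which is exactly the behavior the updated answer should describe.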
Given:

- Outdated Accepted Answers: flagging exercise has begun has (to my knowledge) not had a follow-up, so we can't add a banner like "Warning: outdated, see [other answer] for the current value",
- How to handle historical, highly upvoted but completely incorrect answers was about answers that were wrong from the start, which this wasn't; it's just become outdated over time,
- The new sort order did not become a default and the correct answer won't float up soon,
- It will (figuratively) take ages for the fourth answer (sitting at a score of 4 after 8 months), mentioning the renewed approach, to outrank the accepted answer at a score of 364,
- Quentin (the answerer) is very active and will receive a notification of an edit, and
- It is a factual change for which you can provide a reference,

I'd say: just edit it. People claim "don't change the meaning of the answer" as a reason not to update answers with changes over time (as newer versions of standards, languages and libraries emerge). The question here, if we read past the phrasing that some people would call opinion-based ("Which [content-type] is best and why?") and interpret it at face value, is: Which content-type should I use for JavaScript files? The answer can be interpreted as such: "According to the current standards, it is [...]" By taking this approach to editing, we can update answers without changing their meaning, but instead keeping them up-to-date. See also the Help Center's page Why can people edit my posts? How does editing work?: "Editing is important for keeping posts clear, relevant, and up-to-date" and "Some common reasons to edit a post are: [...] To correct minor mistakes or add updates as the post ages". If Quentin is so active, it should be possible to simply ping them in a comment about their answer so they can edit their own answer... Much better than changing their answer to say something they didn't. And place the burden upon them to incorporate the changes into the answer?
I have a couple thousand answers out there; I applaud an edit changing outdated info instead of "hey, you're outdated, go update". Rollback takes all of three clicks, you know. It's their answer, they're getting rep for it, why shouldn't they bear the burden of keeping it up to date? Are we here for the reputation or for building a knowledge base? Outdated knowledge serves nobody but historians, and they can view the edit history. It is all about momentum. As soon as someone determines an answer to be outdated and knows the up-to-date information, they can click edit and update the answer with very little effort. Posting a comment notifying the original poster, waiting, checking the answer again and again from your comments (if it isn't removed), and following up with an edit (days? weeks?) later when it appears to be ignored is very counterproductive. What's the hurry? The answer has been wrong for almost a year. Serving JavaScript with application/javascript won't cause any browser to reject it, so at worst the response is not strictly following an RFC if it follows the advice in this answer. This feels like a novel "legal theory", but it is a breath of fresh air for me. Yeah, I mean, why should we care at all if an SO Q&A pair has been spreading misinformation for years? It's not our problem, /s Quentin has edited the answer now, so I can return to being lazy :-) "If Quentin is so active, it should be possible to simply ping them in a comment about their answer so they can edit their own answer... Much better than changing their answer to say something they didn't." – Yes. This. Sticking an obsolete notice at the top and locking it seems more than a little excessive! I've unlocked it so you can edit. But do you agree to others editing your answer, if the changes can be backed up by an updated RFC or other authoritative reference? Ah, you're here, Quentin! Thanks for fixing it!
So, there's a lesser-used path here, but I'm generally loath to use it without some serious consideration: an obsolete lock. However, I think it's warranted here. The options for fixing this with edits are: 1) edit in a notice that it's wrong and your answer is in another castle, or 2) edit the right answer in. I really dislike #2 because it cuts the new correct answer out of the loop. Worse, you'll get people who don't notice the edit and flag the correct answer as a late retread of the older one. It's easy to miss, and sometimes we'll remove the correct answer in error. That's a mess. Edit notices are more palatable, but now you wind up with the wrong answer still at the top, where people will downvote it (because it's no longer useful). Thus the lock can help preserve its former usefulness, while still acting as a guidepost. This post is a good example of where the obsolete lock still exists, but without harming the former usefulness. The other piece of the puzzle that pushed me towards an obsolete lock here is that the language itself changed. They were actively telling folks exactly what Quentin said, until they didn't. Quentin shouldn't be penalized for that, which is what apparently has been happening. An obsolete lock prevents his answer from taking any further beating. I should note that I am not opposed to other solutions or suggestions (i.e., make an argument in comments that we don't need the obsolete lock). But I do not feel downvoting a previously good answer into oblivion serves anyone in this case. Why do you want to lock an answer that as of a couple of months ago was correct, gained 350+ upvotes because of that, but is now incorrect, as opposed to letting people edit it? Languages, standards and frameworks get new functionality all the time. Instead of editing the answer to include an updated part, I'd just add a notice at the top stating that the answer is outdated, then link to the most complete updated answer.
If an updated answer already exists, then there's no reason to repeat it in the old answer, just add a link. Perhaps some notice like this in a block quote to add attention: "UPDATE YYYY-MM-DD: This answer was accurate when it was first written; however, it is now outdated. Please refer to [OtherUser's answer](link to answer) for a more recent answer." I think in a perfect world this would be a moderator notice to draw more attention and ensure trusted people are making these notices, but in the meantime it's a good solution. I do actually like this approach the best, albeit with a less in-your-face "UPDATE" notice; something like "In versions x to x, it worked like this...". It 1) keeps the author's intent, 2) keeps the old answer present for future readers who would benefit (though that's not very meaningful in this case), and 3) indicates that the given answer is not current while avoiding cluttering up the answer with loud meta headings. If the author is around to update it, then by all means that should occur, but otherwise this seems like a happy medium in cases where old answers are still relevant. "If an updated answer already exists, then there's no reason to repeat it in the old answer" - except when the outdated answer outscores the updated one by 100 (50 now), is accepted, and is above the fold for everyone. As a searcher/reader you should not have to doubt the top answer. I'd like to note that despite this answer being at -1, it seems that it was effectively implemented by a mod in this edit (at least until the author updated his answer).
Defect: Migrate diff doesn't generate SQL commands to create enum types

Hello again friends 😄. Setting up a toy model with an atlas.hcl and a schema.hcl as follows:

```hcl
env "local" {
  src = "schema.hcl"
  url = "postgres://user:pass@localhost:5432/db_name?sslmode=disable"
  dev = "postgres://user:pass@localhost:5432/db_name_shadow?sslmode=disable"
  schemas = ["public"]
  migration {
    dir = "file://migrations"
    format = "atlas"
  }
}
```

```hcl
schema "public" {}

enum "happiness" {
  schema = schema.public
  values = ["Unhappy", "Existing", "Happy"]
}

table "users" {
  schema = schema.public
  column "id" {
    type = bigint
    identity {
      generated = ALWAYS
    }
  }
  column "happy_state" {
    type = enum.happiness
  }
  primary_key {
    columns = [column.id]
  }
}
```

Starting with a fresh database, after running only `CREATE DATABASE db_name;` and `CREATE DATABASE db_name_shadow;`.

Running `atlas migrate diff --env local --schema public` results in the following:

```sql
-- create "users" table
CREATE TABLE "public"."users" ("id" bigint NOT NULL GENERATED ALWAYS AS IDENTITY, "happy_state" happiness NOT NULL, PRIMARY KEY ("id"));
```

Running `atlas migrate lint --env local --latest 1` results in:

```
20220810211442.sql: executing statement: pq: type "happiness" does not exist
```

Running `atlas migrate apply --env local` results in:

```
Migrating to version 20220810211442.sql (1 migrations in total):
-- migrating version 20220810211442.sql
-> CREATE TABLE "public"."users" ("id" bigint NOT NULL GENERATED ALWAYS AS IDENTITY, "happy_state" happiness NOT NULL, PRIMARY KEY ("id"));
Error: sql/migrate: execute: executing statement "CREATE TABLE \"public\".\"users\" (\"id\" bigint NOT NULL GENERATED ALWAYS AS IDENTITY, \"happy_state\" happiness NOT NULL, PRIMARY KEY (\"id\"));" from version "20220810211442.sql": pq: type "happiness" does not exist
```

Tool versions: postgres (PostgreSQL) 14.4 (Ubuntu 14.4-1.pgdg22.04+1), psql (PostgreSQL) 14.4 (Ubuntu 14.4-1.pgdg22.04+1), atlas version v0.5.0 https://github.com/ariga/atlas/releases/tag/v0.5.0

Hey @jrmcpeek !
V0.5.0 is almost a month old, can you try with the latest release? A lot has changed internally in migrate diff recently. https://atlasgo.io/cli/getting-started/setting-up#install-the-cli In general, master is well tested and stable; we use version tags for announcements mostly. Internally we always use latest.

@rotemtam I can surely test this against the master branch, if you believe this issue is addressed. However, I have concerns about updating a CI/CD pipeline with an untagged / "latest" version. How should something like that be handled? I performed the steps above with the following version: atlas version v0.5.0-30b4018-canary https://github.com/ariga/atlas/releases/latest Calling back to my previous message - I have trouble reconciling canary with tested and stable.

Running `atlas migrate diff --env local --schema public` results in the following:

```sql
-- create enum type "happiness"
CREATE TYPE "public"."happiness" AS ENUM ('Unhappy', 'Existing', 'Happy');
-- create "users" table
CREATE TABLE "public"."users" ("id" bigint NOT NULL GENERATED ALWAYS AS IDENTITY, "happy_state" happiness NOT NULL, PRIMARY KEY ("id"));
```

Running `atlas migrate apply --env local` works as expected. I then make a change to the schema file:

```diff
 schema "public" {}

 enum "happiness" {
   schema = schema.public
   values = ["Unhappy", "Existing", "Happy"]
 }

 table "users" {
   schema = schema.public
   column "id" {
     type = bigint
     identity {
       generated = ALWAYS
     }
   }
-  column "happy_state" {
+  column "happy_state_renamed" {
     type = enum.happiness
   }
   primary_key {
     columns = [column.id]
   }
 }
```

And then run `atlas migrate diff --env local --schema public`, which results in:

```
Error: sql/migrate: read migration directory state: sql/migrate: execute: executing statement "CREATE TYPE \"public\".\"happiness\" AS ENUM ('Unhappy', 'Existing', 'Happy');" from version "20220811051624": pq: type "happiness" already exists
```
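One possible stopgap for the trailing "type already exists" error (not an official Atlas fix, just plain PostgreSQL, hand-edited into the generated migration file): make the enum creation idempotent so that replaying the migration directory against the dev database is a no-op instead of an error.

```sql
-- Hypothetical workaround sketch: wrap the generated CREATE TYPE so
-- re-executing the migration does not fail on an existing type.
DO $$ BEGIN
  CREATE TYPE "public"."happiness" AS ENUM ('Unhappy', 'Existing', 'Happy');
EXCEPTION
  WHEN duplicate_object THEN NULL;
END $$;
```

Note that editing a generated migration by hand means re-hashing the directory, so this is only a sketch of the idea, not a recommended workflow.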
When you create a bill of material, you have to select one material as the header material. The header material cannot already be assigned to another BOM. You can only select materials that are in the same collaboration or that are in a work area of the same competitive scenario. Once you have assigned the header material to the BOM, you can no longer change it. The name of the BOM is at first generated from the name of the header material, but can be changed later on, as required. When you create a bill of material, you first have to enter the values of the BOM attributes. In addition to the attributes, you can create and change BOM items and document links. cFolders supports the item categories variable-size item, stock item, and text item. Depending on the item category, the system displays different attributes. Variable-size items and stock items must have a component that can be selected using the search help. The component in an item can be changed at any time. Text items do not have a component. Each item can have linked documents. The prerequisites for documents that are linked to a BOM also apply to these links. You can link an unlimited number of cFolders documents to a bill of material. You can use a search help to select these documents. In a collaboration in the collaborative scenario, documents can only be linked to a bill of material if the document is in the same collaboration or in standards. In a collaboration in the competitive scenario, documents can only be linked to a bill of material if the document is in the same work area or in standards. If a link between a document and a bill of material is deleted, only the link is deleted. The document still exists. A material can be used as the header material for one bill of material only. It can also be used as a component in an unlimited number of BOM items. Here too, assignments can only be made within the same collaboration or in the work area of the same competitive scenario. 
You have two options when copying bills of material:

1. Copy the bill of material to the same collaboration or the same work area: The bill of material is copied to the target folder and all document links in the copy refer to the same documents as in the original. The components in the items of the BOM copy are also the same as in the original. Since the copied BOM cannot use the same header material as the original, the system first checks whether a material exists that is a copy of the header material in the original BOM and is not used as the header material of any other BOM. If such a copy exists, this material becomes the header material of the BOM copy. If no copy exists, the system copies the header material of the original bill of material. This copy is then used as the header material of the BOM copy.

2. Copy the bill of material to a different work area: The bill of material is copied to the target folder. Each document link (that does not link to standards) is checked to determine whether a copy of the linked document exists in the new work area. If a copy does exist, the link in the BOM copy refers to the copy of the linked document in the new work area. If not, the linked document is copied to the work area with the BOM, and the BOM is linked to this copy. Components in the BOM items are handled in the same way. The system also checks whether a copy of the header material exists and is not already being used as the header material of any other BOM. If such a material exists, this material becomes the header material of the BOM copy. If not, the system copies the header material of the original bill of material. This copy is then used as the header material of the BOM copy.

Once you have copied a bill of material, check whether the status of the new linked documents, components, and header material is up-to-date. Since several copies can exist, check whether the copy you require is linked to the bill of material.
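The header-material handling during a copy, as described above, amounts to a small selection routine: prefer an existing unused copy of the original header, otherwise create a new copy. The sketch below paraphrases that documented behavior; the function and field names are illustrative, not cFolders APIs.

```python
def pick_copy_header(original_header, materials, used_headers):
    """Choose the header material for a BOM copy.

    A material can head only one BOM, so the copy cannot reuse the
    original header. Prefer an existing copy of it that is not yet
    used as a header; otherwise copy the header material itself.
    """
    for m in materials:
        if m.get("copy_of") == original_header and m["name"] not in used_headers:
            return m["name"]  # reuse an existing, unused copy
    # No usable copy exists: copy the original header material.
    new_name = original_header + " (copy)"  # naming is illustrative only
    materials.append({"name": new_name, "copy_of": original_header})
    return new_name

# Example: a copy of M1 exists but already heads another BOM,
# so the routine creates a fresh copy instead.
mats = [{"name": "M1"}, {"name": "M1-copy", "copy_of": "M1"}]
print(pick_copy_header("M1", mats, used_headers={"M1-copy"}))
```

The same check-then-copy pattern applies to linked documents and components when copying to a different work area.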
For active BOM versions, the system provides up to five different exports in CSV (Comma-Separated Values) format and one export in XML (Extensible Markup Language) format. The exports that are available depend on the settings you made in Customizing for Collaboration Folders in the IMG activities Specify Field Selection Profiles in Comma Separated Value Format and Specify Profiles for Export of Material BOM in XML Format.

1. Select the export you want to use from the menu under Export.
2. Enter your chosen explosion level, or leave the field blank if you always want the BOM structure to be exploded fully.
3. Select the field selection profile you require for the export in CSV format, or the XML profile for the export in XML format.
4. Depending on the type of export, you can select attributes of different cFolders elements.
5. For the export in XML format, you can format the URL of the linked documents for files included in the PDX (Product Data eXchange) package with the notation file://filename.
6. For the export in CSV format, you can select the attribute for displaying the column header.
7. If you do not enter a file extension, CSV is used for the export in CSV format and XML for the export in XML format.
8. If you do not enter a file type, the system uses application/msexcel for the export in CSV format and text/xml for the export in XML format.

Document links in the cFolders BOMs and items are exported as attachment elements below the BillOfMaterial and BillOfMaterialItem elements; these are not contained in the DTD (Document Type Definition) according to IPC-2571 and therefore do not correspond to the standard. These settings are saved for each export type in the user settings and will be available again for the next export. Depending on whether you display the active BOM version or an older BOM version, you can choose a comparison with the last or with the active BOM version in the item overview.
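The file-extension and file-type fallbacks from steps 7 and 8 amount to the following. This is a paraphrase of the documented defaults, not SAP code; the function name is invented for illustration.

```python
def export_defaults(export_format, extension=None, file_type=None):
    """Apply the documented cFolders export fallbacks: CSV exports default
    to a .csv extension and application/msexcel; XML exports default to a
    .xml extension and text/xml. User-supplied values win."""
    defaults = {
        "CSV": ("csv", "application/msexcel"),
        "XML": ("xml", "text/xml"),
    }
    default_ext, default_type = defaults[export_format]
    return extension or default_ext, file_type or default_type

print(export_defaults("CSV"))         # → ('csv', 'application/msexcel')
print(export_defaults("XML", "pdx"))  # → ('pdx', 'text/xml')
```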
The single-level comparison includes the attributes and document links of BOMs, items, header materials, and components. Links to manufacturer part numbers are also compared for material versions. Status changes and categorizations are not taken into account. If differences exist, the system displays the type of change of an item in an additional column marked with a delta in the item overview. Selecting individual items enables you to restrict the content of the detailed results list for the comparison of BOM versions. If you do not select an item, the system also displays the changes to the BOM and header material in the detailed results list.
Spacewar! gameplay. I played in portrait orientation, but the YouTube video is in landscape so it is easier to watch. Slingshot around to the history of Spacewar!, at The Dot Eaters. Keep in mind that the title screens are generated by the emulator and are not part of the original games. These characteristics are quite important for the game, since its drawing mechanism heavily depends on the afterglow of the display, which provides the required stability for the image. Aside from the support of the PDP-1's sense switches to control some of the behavior, there's also a provision to extract and hack essential constants (setup parameters) of the games; see the tools menu in the options menu at the top right of the screen. Some importance has been attached to reconstructing the visual impression of the original CRT display: MIT Celebrates Its Earliest Computer Game, Part 2 (37'13). Watch the Computer History Museum's CHM lecture celebrating the PDP-1 and its restoration project (May 15th). Graetz redacted copy, incl. the source codes of Spacewar! So we started talking about it, figuring what would be interesting displays. The label suggests that this was a tape sent from MIT to another facility. For more on the making-of of Spacewar! The game was initially controlled with switches on the PDP-1, though Alan Kotok and Bob Saunders built an early gamepad to reduce the difficulty and awkwardness of controlling the game.
At any time, the player can engage a hyperspace feature to move to a new, random location on the screen, though each use has an increasing chance of destroying the ship instead. See Graetz's seminal article "The Origin of Spacewar" and the game as presented at the MIT Science Open House in May. Moreover, the background starfield is moving quite erratically. In addition to some internal modifications, it features, like all versions 4, a working single-shot mode for sense switch 3. The emulator provides access to most of these "constants" by a special parameters dialog, accessible from the options menu. Left shoulder buttons are hyperspace, right shoulder is fire.
What's the best way to terminate plastic water supply pipe? I am removing a bathroom fixture that was attached to this cold water supply line: I need to plug the pipe so it doesn't spray water when I turn it back on. Previously it had this fitting: I tried this cap, but it leaked quite a bit. I searched through many fittings at the hardware store, and finally ended up with this valve: which worked (except when I put it on backwards, then it squirted water out, ack!). It took 4 trips to the hardware store to arrive at this solution, and it's not really ideal, since I never want to turn it on. What would an ace plumber have done instead? Could I have improved my question by making the pictures smaller? They seem to dominate the post. On the other hand, I hate squinting to look at tiny pictures. I've decided that I'm happy with the valve, because it makes it easier to drain the plumbing completely for winter, or flush the plumbing to sanitize. Gray pipe usually (always?) means it's polybutylene. That's an important point when trying to do repairs, as you need special couplers to do it right. Jay, it looks like you have a typical RV compression-fit PVC line there. Looking at your pics, I don't see any pipe threads on the end of the tubing. It is difficult, if not impossible, to match a threaded fitting to this type of pipe. You might have to go to an RV supply with the pipe type (usually printed on the pipe) and purchase an end cap for that specific tubing. If you don't have a PEX tool or compression pliers to install a collar, there may be a self-sealing compression fitting, much like a SharkBite, that can be installed with a wrench, or a simple glue-on cap fitting. Be sure to remove any scored or damaged tubing end before trying to install a new fitting. I would normally try to go for a push-fit end cap of the right diameter (e.g. you might need 15mm). Something like this: (product description: Conex Push-Fit 301 Stop End Cap 22mm.
Joins a wide range of tubes including copper, carbon steel, stainless steel and plastic including PE-X and polybutylene.) Definitely go push-fit if at all possible. Home Depot carries the SharkBite brand (http://www.homedepot.com/p/t/100638158) for under $10. Note however that these work with copper, PEX, and CPVC, but aren't approved for polybutylene, which has slightly different dimensions. It appears the Conex one does support PB, however. You can also go to a plumbing supply store and get a Qest end cap. Looks like PE-X to me. Look for a cap that fits inside the pipe, and then you will need a crimp tool and a copper ring to crimp on the outside of the pipe. Warning - those crimp tools can be expensive. I think mine cost around $100. I would have thought that there would be an end cap for the system. It would look like the elbow and T-junctions you already have and would just fit over the end of the pipe. Have you got any spare pieces you can take to the hardware store to compare, to make sure you get the right one?
WhatsUp Gold - Performance alerting with SNMP

Product: WhatsUp Gold | Versions: 7.x - 8.x | OS: Windows 2003 Server

Question/Problem: How can I monitor performance using SNMP on Windows 2000, Windows XP, or Windows 2003? While the WhatsUp software includes SNMP features and components, this document relies on software provided by third-party companies. Neither the Windows SNMP server that WhatsUp will query nor the MIBs provided by SNMP Informant are supported or maintained by Ipswitch.

1.) On all systems (target machines) for which you want to monitor performance, install the Windows Simple Network Management Protocol Service. This is usually listed in the Add/Remove Programs control panel under the Management and Monitoring Tools category. It is not necessary to install the SNMP Service on the WhatsUp machine (unless you plan to monitor the WhatsUp machine itself).

2.) On all target machines, open a command prompt, then enter and execute the following command: Then, restart the target machine. Note: this should be enabled by default on XP and 2003 machines.

3.) Download the standard version of SNMP Informant.

4.) From one of the target machines, copy informant-std.mib and wtcs.mib to the WhatsUp machine. These files are usually located in C:\Program Files\SNMP Informant\standard\mibs\SMIv1. The following instructions assume that a temporary folder named C:\MIBs has been created on the WhatsUp machine and the MIB files have been copied to it.

5.) On the WhatsUp machine, close the WhatsUp application and compile the SNMP Informant MIBs. Open a command prompt, navigate to the WhatsUp installation directory, and execute the following command:

6.) You can now open the WhatsUp Console (or Net Tools) and view the MIBs you have compiled. The SNMP Informant tree is located under: iso > org > dod > internet > private > enterprises > wtcs > informant > standard. If you cannot see the wtcs tree in Net Tools, this indicates the MIBs did not compile correctly. Double-check your work in step 5.

7.)
To query a particular Object ID (OID) in Net Tools, select the SNMP tab and enter the IP Address of the target machine. Then, enter the Community name for the target machine. The default read community name for the Windows SNMP Service is public. This value is case-sensitive. Then, select the browse button next to the What field, navigate to the SNMP Informant tree, and select an item. Note: Some OIDs require instance numbers and may not respond if you query them with only Get selected. If querying a particular OID does not return results, try selecting Get All Subitems and performing the query again. If none of the OIDs return any data, the SNMP service on the target machine may not be functioning correctly, traffic to the SNMP service is blocked (SNMP query traffic uses UDP port 161), or your community name may be incorrect. All Windows SNMP servers should respond to a query for *sysinfo in the What field. Two examples for monitoring performance data via SNMP are below. These examples are provided for information only. No support for this information is provided by Ipswitch. To monitor for free disk space on the C drive of a target machine: 1.) Open Net Tools on the WhatsUp machine and in the What field, browse to iso > org > dod > internet > private > enterprises > wtcs > informant > standard > logicalDiskTable > logicalDiskEntry > lDiskPercentFreeSpace and click OK. Notice the OID for this query is 1.3.6.1.4.1.9600.1.1.1.1.5. 2.) Enter the IP address of the target machine and the read community name in the appropriate fields and click Get All Subitems. 3.) For a computer that has the following drives: C Drive = 17% free space Then, the following information should be returned: The instance number for each drive is listed after the name for the OID. The C drive usually has an instance number of 2.67.58. This means that to query for free disk space on the C drive, we should use an OID of 1.3.6.1.4.1.9600.1.1.1.1.5 and an instance number of 2.67.58.
Subsequent drives should have different instance numbers, but the same OID. If you aren't sure which drive correlates to which instance number, check the free space of the target machine's drives to find a match. 4.) Now, set up a monitor in WhatsUp to query this OID and instance number on the target machine. Select Monitors and Services from the Configure menu. Click the New button and select SNMP Monitoring from the Service type menu. Enter a name for this monitor, we'll call ours Percent free on C, and click OK. 5.) Now, enter the OID and the instance number for the C drive query. This can be entered directly into the OID field, or you can browse to it. In WhatsUp Gold, be sure to enter the complete OID and instance number (1.3.6.1.4.1.9600.1.1.1.1.5.2.67.58) into the Object ID field. In WhatsUp Professional, you can enter both into the OID field (leaving the Instance field blank), or enter 1.3.6.1.4.1.9600.1.1.1.1.5 into the OID field and 2.67.58 into the Instance field. 6.) Select Range of Values from the menu and enter 100 into the High value field. Enter the amount of free disk space at which you want to be notified into the Low value field. For example, if you want to be notified when there is less than 25% free disk space, enter 25 into the Low value field. If you wish to be notified when there is less than 75% free disk space, enter 75 into the Low value field. Save your changes. To monitor for CPU utilization on a target machine: 1.) Open Net Tools and browse to iso > org > dod > internet > private > enterprises > wtcs > informant > standard > processorTable > cpuPercentProcessorTime. Enter the read community name and target machine's IP address, select Get All Subitems and click OK. 2.) The first CPU on the target machine should have an instance number of 1.48.
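The instance numbers above are not arbitrary: `2.67.58` is the drive label encoded as an SNMP string table index, i.e. a length byte followed by the ASCII code of each character ("C:" is 2 characters, 'C' = 67, ':' = 58). A small sketch of this encoding (the helper name is ours, not part of WhatsUp or SNMP Informant):

```python
def encode_string_index(s):
    """Encode a string as an SNMP table index: length, then the ASCII code of each character."""
    return ".".join(str(n) for n in [len(s)] + [ord(c) for c in s])

# The C: drive becomes the instance suffix used in this article:
print(encode_string_index("C:"))   # 2.67.58
```

The same rule predicts the instance suffix for any other drive letter, which can help when matching instances to drives.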
This, combined with an OID of 1.3.6.1.4.1.9600.1.1.5.1.5, gives us a query of 1.3.6.1.4.1.9600.1.1.5.1.5.1.48 to see the CPU usage for the first CPU of the target machine. 3.) When creating a monitor for CPU usage, always enter 0 for the Low value and enter the value at which you wish to receive a notification for the High value. For example, if you wanted to be notified when CPU usage was above 90%, you would enter 0 in Low value and 90 in High value. If you wanted to be notified when CPU usage was above 10%, you would enter 0 in Low value and 10 in High value.
Interpolation Methods: linear(), ease() We've discussed how to 'manually' scale a range of values, by combining a simple division and multiplication, for instance to convert 360 degrees of rotation into a 100% change in opacity. But this isn't the only way. After Effects offers a couple of built-in interpolation methods specifically designed to turn changes in one set of values into changes in another set: linear(t, t_min, t_max, value1, value2) These methods look more complicated than they really are, mostly because they accept so many arguments. These arguments are: t: The incoming data source, e.g. 'rotation', 'time', or a variable of your choice. The values from t must be numbers (dimension of 1). Required. t_min: The minimum expected value for 't'. Optional; if t_min and t_max are omitted, After Effects will assume values of 0 and 1, respectively. t_max: The maximum expected value for 't'. Optional. value1: The minimum value to output. When t equals or is less than t_min, the method will output value1. Value1 can be a number or a vector; the results will have the same dimension as value1. Required. value2: The maximum value to output. When t equals or exceeds t_max, the method will output value2. Value2 can be a number or vector; if it doesn't have the same dimension as value1, After Effects will ignore components or append values of 1, as necessary. Required. To see how these arguments work together, consider this example: linear(time, 0, 5, 0, 360); In English, this expression would read something like: 'as time goes from 0 to 5, output values from 0 to 360, with linear interpolation.' Applied to a layer's rotation parameter, this expression would cause the layer to rotate 360 degrees over the first five seconds of the comp. If you try this, notice that the layer stops rotating at 5 seconds. This is a chief difference between these methods and the hand-rolled 'divide by 5, multiply by 360' technique we used earlier.
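The clamping behavior can be modeled outside After Effects. Here is a rough Python sketch of what linear() computes for scalar values (a simplification: the real method also handles vectors and a two-argument form):

```python
def linear(t, t_min, t_max, value1, value2):
    """Map t from [t_min, t_max] onto [value1, value2], clamping at both ends."""
    if t <= t_min:
        return value1
    if t >= t_max:
        return value2
    # Straight-line interpolation between the two output values
    return value1 + (t - t_min) / (t_max - t_min) * (value2 - value1)

# 'As time goes from 0 to 5, output values from 0 to 360':
print(linear(2.5, 0, 5, 0, 360))   # 180.0
print(linear(7.0, 0, 5, 0, 360))   # 360 -- clamped, just as the layer stops at 5 seconds
```

The clamp at t_min and t_max is exactly why the rotating layer stops at 5 seconds instead of continuing like the divide-and-multiply version.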
The interpolation methods clamp their incoming and output values at the minimums and maximums you specify. Another difference is that you can specify different styles of interpolation: ease(), ease_in() and ease_out(). These styles work exactly like their identically-named keyframe interpolation styles (available via the Animation->Keyframe Assistant menu). You can use these interpolation methods to provide a more natural progression between sets of values, a smoothness that would be harder to achieve with simple division and multiplication. To see how the various ease styles work, consider the following animations, which each use the same arguments, but with different interpolation methods: Example: Scroll Bars As a quick, and simple, example of what you might do with these methods, we'll create a simple scroll bar animation. We'll start with two layers, the scroll bar, and a text block that will appear to scroll. We'll attach our expression to the text block's anchor point, because this won't affect our ability to position the layer in the comp (its position will remain unchanged even as the layer scrolls). Remember that moving the anchor point in one direction makes the layer appear to move the other direction in your comp. In our case, we'll be moving the anchor point down, in order to make the layer appear to move up in the comp. In order to achieve full scrolling, the full length of our text block, we'll want the anchor point's y-value to start at zero and end at the layer height. This gives us our output values: For our input values, we could just use zero and the height of the comp. We'll inset the values a bit, just to make the results look a little better. Assuming comp dimensions of 320x240, we'll use a range of 25 to 215. So, as the scroll bar's y-position goes from 25 to 215, the text block will appear to scroll.
Of course, our input data will come from our scroll bar layer's y-position, so: Putting this all together, with better variable names, we get an expression that looks like this: Finally, we'll want to put these results ('scrolled_amount') in an array, leaving the anchor point's x-coordinate as it was. The complete expression then looks like: The final animation looks like this (only the scroll bar's position has been keyframed): Of course, there's no reason you need to scroll vertically. We could just as easily create a horizontal scroll bar, with only minimal changes to our expression. For an even more impressive effect, we could create a circular 3D scroll, by applying the same basic expression to a rotation parameter of a 3D layer: These are stills because the animations don't export in Flash format very well. In any case, they are more fun, and easier to understand, when you can interact with them directly. Entire contents © 2001 JJ Gifford.
Should I use one big SQL Select statement or several small ones? I'm building a PHP page with data sent from MySQL. Is it better to have 1 SELECT query with 4 table joins, or 4 small SELECT queries with no table join? I select by an ID. Which is faster, and what are the pros/cons of each method? I only need one row from each table. You should run a profiling tool if you're truly worried, because it depends on many things and it can vary, but as a rule it's better to have fewer queries being compiled and fewer round trips to the database. Make sure you filter things as well as you can using your WHERE and JOIN ON clauses. But honestly, it usually doesn't matter since you're probably not going to be hit all that hard compared to what the database can do, so unless optimization is your spec you should not do it prematurely and do what's simplest. Generally, it's better to have one SELECT statement. One of the main reasons to have databases is that they are fast at processing information, particularly if it is in the form of a query. If there is any drawback to this approach, it's that there are some kinds of analysis that you can't do with one big SELECT statement. RDBMS purists will insist that this is a database design problem, in which case you are back to my original suggestion. When you use JOINs instead of multiple queries, you allow the database to apply its optimizations. You also are potentially retrieving rows that you don't need (if you were to replace an INNER join with multiple selects), which increases the network traffic between your app server and database server. Even if they're on the same box, this matters. It might depend on what you do with the data after you fetch it from the DB. If you use each of the four results independently, then it would be more logical and clear to have four separate SELECT statements.
On the other hand, if you use all the data together, like to create a unified row in a table or something, then I would go with the single SELECT and JOINs. I've done a bit of PHP/MySQL work, and I find that even for queries on huge tables with tons of JOINs, the database is pretty good at optimizing - if you have smart indexes. So if you are serious about performance, start reading up on query optimization and indexing. Well under Oracle you'd want to take advantage of the query caching, and if you have a lot of small queries you are doing in your sequential processing, it would suck if the last query pushed the first one out of the cache...just in time for you to loop around and run that first query again (with different parameter values obviously) on the next pass. We were building an XML output file using Java stored procedures and definitely found the round trip times for each individual query were eating us alive. We found it was much faster to get all the data in as few queries as possible, then plug those values into the XML DOM as needed. The only downside is that the Java code was a bit less elegant, as the data fetch was now remote from its usage. But we had to generate a large complex XML file in as close to zero time as possible, so we had to optimize for speed. I would say 1 query with the join. This way you need to hit the server only once. And if your tables are joined with indexes, it should be fast. Be careful when dealing with a merge table however. It has been my experience that although a single join can be good in most situations, when merge tables are involved you can run into strange situations.
package sisBib.principal;

/**
 * <p>The Periodico class inherits from the Acervo class and adds
 * periodical-specific information.</p>
 *
 * <p>Library System (SisBib): project developed for the Algorithms II
 * course of the Computer Science program at Faesa, Prof. Rober Marconi.</p>
 *
 * @author Abrantes Araújo Silva Filho (<a href="mailto:abrantesasf@gmail.com">abrantesasf@gmail.com</a>)
 * @author Isaac de Miranda Campos (<a href="mailto:isaac.miranda321@gmail.com">isaac.miranda321@gmail.com</a>)
 * @version 1.0
 * @since 2018-12-01
 */
public class Periodico extends Acervo {

    ///////////////////////////////////////////////////
    // Attributes:
    ///////////////////////////////////////////////////

    /**
     * <p><b>impacto</b> (impact factor of the periodical)</p>
     * <p>A <i>double</i> variable holding the periodical's
     * impact factor.</p>
     */
    private double impacto;

    /**
     * <p><b>issn</b> (International Standard Serial Number)</p>
     * <p>A <i>String</i> variable holding the periodical's
     * ISSN.</p>
     */
    private String issn;

    ///////////////////////////////////////////////////
    // Constructor(s)
    ///////////////////////////////////////////////////

    /**
     * <p><b>Periodico(int codigo, String autores, String titulo, char tipo, double impacto, String issn)</b></p>
     * <p>The Periodico constructor takes 6 required parameters: code,
     * authors, title, type, impact factor, and ISSN.</p>
     * @param codigo (int of up to 8 digits representing the item code)
     * @param autores (String with the item's author or authors)
     * @param titulo (String with the item's title)
     * @param tipo (char with the item's type)
     * @param impacto (double with the periodical's impact factor)
     * @param issn (String with the periodical's ISSN)
     */
    public Periodico(int codigo, String autores, String titulo, char tipo,
                     double impacto, String issn) {
        // Use the Acervo superclass constructor:
        super(codigo, autores, titulo, tipo);
        // Complement with Periodico-specific information:
        setImpacto(impacto);
        setISSN(issn);
    }

    ///////////////////////////////////////////////////
    // Getters:
    ///////////////////////////////////////////////////

    /**
     * <p><b>getImpacto()</b></p>
     * <p>Returns the periodical's impact factor.</p>
     * @return <b>impacto</b> (double with the periodical's impact factor)
     */
    public double getImpacto() {
        return this.impacto;
    }

    /**
     * <p><b>getISSN()</b></p>
     * <p>Returns the periodical's ISSN.</p>
     * @return <b>issn</b> (String with the periodical's ISSN)
     */
    public String getISSN() {
        return this.issn;
    }

    ///////////////////////////////////////////////////
    // Setters:
    ///////////////////////////////////////////////////

    /**
     * <p><b>setImpacto(double impacto)</b></p>
     * <p>Receives a <i>double</i> with the periodical's impact factor
     * and assigns it to the proper field.</p>
     * @param impacto (double with the periodical's impact factor)
     */
    public void setImpacto(double impacto) {
        if (validaImpacto(impacto)) {
            this.impacto = impacto;
        } else {
            System.out.println("ERROR! Impact factor out of bounds.");
        }
    }

    /**
     * <p><b>setISSN(String issn)</b></p>
     * <p>Receives a <i>String</i> representing the periodical's ISSN
     * and assigns it to the proper field.</p>
     * @param issn (String with the periodical's ISSN)
     */
    public void setISSN(String issn) {
        this.issn = issn;
    }

    ///////////////////////////////////////////////////
    // Other methods
    ///////////////////////////////////////////////////

    /**
     * <p><b>validaImpacto(double impacto)</b></p>
     * <p>Receives a <i>double</i> representing the periodical's impact
     * factor and checks whether it is within acceptable bounds (&gt;= 0).</p>
     * @param impacto (double representing the periodical's impact factor)
     * @return <b>True</b> if within bounds<br /><b>False</b> if out of bounds
     */
    private boolean validaImpacto(double impacto) {
        return impacto >= 0;
    }

    /**
     * <p><b>toString()</b></p>
     * <p>Returns a String with the periodical's basic information.</p>
     */
    @Override
    public String toString() {
        String resposta = "";
        resposta += super.toString() + "\n" +
                    "Impacto: " + getImpacto() + "\n" +
                    "ISSN: " + getISSN();
        return resposta;
    }
}
Skeletal muscle repair is driven by the coordinated self-renewal and fusion of myogenic stem and progenitor cells. Single-cell gene expression analyses of myogenesis have been hampered by the poor sampling of rare and transient cell states that are critical for muscle repair, and do not inform the spatial context that is important for myogenic differentiation. Cornell University researchers demonstrate how large-scale integration of single-cell and spatial transcriptomic data can overcome these limitations. They created a single-cell transcriptomic dataset of mouse skeletal muscle by integration, consensus annotation, and analysis of 23 newly collected scRNAseq datasets and 88 publicly available single-cell (scRNAseq) and single-nucleus (snRNAseq) RNA-sequencing datasets. The resulting dataset includes more than 365,000 cells and spans a wide range of ages, injury, and repair conditions. Together, these data enabled identification of the predominant cell types in skeletal muscle, and resolved cell subtypes, including endothelial subtypes distinguished by vessel-type of origin, fibro-adipogenic progenitors defined by functional roles, and many distinct immune populations. The representation of different experimental conditions and the depth of transcriptome coverage enabled robust profiling of sparsely expressed genes. The researchers built a densely sampled transcriptomic model of myogenesis, from stem cell quiescence to myofiber maturation, and identified rare, transitional states of progenitor commitment and fusion that are poorly represented in individual datasets. They performed spatial RNA sequencing of mouse muscle at three time points after injury and used the integrated dataset as a reference to achieve a high-resolution, local deconvolution of cell subtypes. They also used the integrated dataset to explore ligand-receptor co-expression patterns and identify dynamic cell-cell interactions in muscle injury response. 
The researchers also provide a public web tool to enable interactive exploration and visualization of the data. This work supports the utility of large-scale integration of single-cell transcriptomic data as a tool for biological discovery.

Figure: Large-scale integration of 111 single-cell and single-nucleus RNAseq samples reveals cell subtypes in skeletal muscle. (a) Workflow used for preparation, integration, and analysis of the sc/snRNAseq compendium (see "Methods"). (b) Overview of experimental and technical variables across the compendium. The percentages shown are calculated with respect to cell number after quality control. Ages in months (mo). Injury by cardiotoxin (CTX) or notexin (NTX). Time-points in days post-injury (dpi). (c) UMAP representation of the merged datasets after alignment, ambient RNA removal, quality-control filtering, and doublet removal, but before batch correction, colored by the dataset source. (d) UMAP representation of the integrated compendium after batch correction with Harmony. Cells are colored by cell type, identified after Harmony integration. (e) Differential detection of gene biotype sets between single-cell and single-nucleus datasets, including all protein-coding genes, long noncoding RNAs (lncRNAs), transcription factors, cell surface proteins, ribosomal protein subunits, mitochondrial genes, and "core" dissociation-associated stress factors.

Availability – The full integrated dataset with visualization tools is available at scmuscle.bme.cornell.edu.
import numpy as np

def load_calib(fileName="/home/cwu/Downloads/dataset/sequences/00/calib.txt"):
    """Load and compute intrinsic and extrinsic calibration parameters."""
    # We'll build the calibration parameters as a dictionary, then
    # convert it to a namedtuple to prevent it from being modified later
    data = {}

    # Load the calibration file
    #calib_filepath = os.path.join(self.sequence_path, 'calib.txt')
    filedata = {}
    with open(fileName, 'r') as f:
        for line in f.readlines():
            # Skip blank lines, which would otherwise break the unpacking below
            if ':' not in line:
                continue
            key, value = line.split(':', 1)
            # The only non-float values in these files are dates, which
            # we don't care about anyway
            try:
                filedata[key] = np.array([float(x) for x in value.split()])
            except ValueError:
                pass
    #filedata = utils.read_calib_file(calib_filepath)

    # Create 3x4 projection matrices
    P_rect_00 = np.reshape(filedata['P0'], (3, 4))
    P_rect_10 = np.reshape(filedata['P1'], (3, 4))
    P_rect_20 = np.reshape(filedata['P2'], (3, 4))
    P_rect_30 = np.reshape(filedata['P3'], (3, 4))

    # Compute the rectified extrinsics from cam0 to camN
    T1 = np.eye(4)
    T1[0, 3] = P_rect_10[0, 3] / P_rect_10[0, 0]
    T2 = np.eye(4)
    T2[0, 3] = P_rect_20[0, 3] / P_rect_20[0, 0]
    T3 = np.eye(4)
    T3[0, 3] = P_rect_30[0, 3] / P_rect_30[0, 0]

    # Compute the velodyne to rectified camera coordinate transforms
    data['T_cam0_velo'] = np.reshape(filedata['P0'], (3, 4))
    data['T_cam0_velo'] = np.vstack([data['T_cam0_velo'], [0, 0, 0, 1]])
    data['T_cam1_velo'] = T1.dot(data['T_cam0_velo'])
    data['T_cam2_velo'] = T2.dot(data['T_cam0_velo'])
    data['T_cam3_velo'] = T3.dot(data['T_cam0_velo'])

    # Compute the camera intrinsics
    data['K_cam0'] = P_rect_00[0:3, 0:3]
    data['K_cam1'] = P_rect_10[0:3, 0:3]
    data['K_cam2'] = P_rect_20[0:3, 0:3]
    data['K_cam3'] = P_rect_30[0:3, 0:3]

    for k in data.keys():
        print(k, data[k])

    return data

if __name__ == "__main__":
    calData = load_calib()
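The T1/T2/T3 step above divides the fourth column of each projection matrix by the focal length (entry [0][0]) to recover the x-offset of each rectified camera relative to camera 0. With hypothetical numbers (fx = 700, P[0][3] = -350; the real values come from the calib file) the arithmetic looks like this:

```python
# P_rect_10 stores fx * t_x in entry [0][3]; dividing by fx = P[0][0]
# recovers the x-offset that the loader places into T1[0, 3].
P_rect_10 = [[700.0, 0.0, 320.0, -350.0],
             [0.0, 700.0, 240.0,    0.0],
             [0.0,   0.0,   1.0,    0.0]]

t_x = P_rect_10[0][3] / P_rect_10[0][0]
print(t_x)  # -0.5 -- the translation entry placed into T1[0, 3]
```

This is why only the [0, 3] entry of each T matrix differs from the identity: the rectified cameras are displaced along the x-axis only.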
Python Class Inheritance issue

I'm playing with Python class inheritance and ran into a problem where the inherited __init__ is not being executed if called from the sub-class (code below). The result I get from Active Python is:

>>> start
Tom Sneed
Sue Ann
Traceback (most recent call last):
  File "C:\Python26\Lib\site-packages\pythonwin\pywin\framework\scriptutils.py", line 312, in RunScript
    exec codeObject in __main__.__dict__
  File "C:\temp\classtest.py", line 22, in <module>
    print y.get_emp()
  File "C:\temp\classtest.py", line 16, in get_emp
    return self.FirstName + ' ' + 'abc'
AttributeError: Employee instance has no attribute 'FirstName'

Here's the code:

class Person():
    AnotherName = 'Sue Ann'
    def __init__(self):
        self.FirstName = 'Tom'
        self.LastName = 'Sneed'
    def get_name(self):
        return self.FirstName + ' ' + self.LastName

class Employee(Person):
    def __init__(self):
        self.empnum = 'abc123'
    def get_emp(self):
        print self.AnotherName
        return self.FirstName + ' ' + 'abc'

x = Person()
y = Employee()
print 'start'
print x.get_name()
print y.get_emp()

Three things:
- You need to explicitly call the constructor. It isn't called for you automatically like in C++.
- Use a new-style class inherited from object.
- With a new-style class, use the super() method available.

This will look like:

class Person(object):
    AnotherName = 'Sue Ann'
    def __init__(self):
        super(Person, self).__init__()
        self.FirstName = 'Tom'
        self.LastName = 'Sneed'
    def get_name(self):
        return self.FirstName + ' ' + self.LastName

class Employee(Person):
    def __init__(self):
        super(Employee, self).__init__()
        self.empnum = 'abc123'
    def get_emp(self):
        print self.AnotherName
        return self.FirstName + ' ' + 'abc'

Using super is recommended as it will also deal correctly with calling constructors only once in multiple inheritance cases (as long as each class in the inheritance graph also uses super).
It's also one less place you need to change code if/when you change what a class is inherited from (for example, you factor out a base-class and change the derivation and don't need to worry about your classes calling the wrong parent constructors). Also on the MI front, you only need one super call to correctly call all the base-class constructors.

Care must be taken when using super, please read this: http://fuhm.net/super-harmful/

+1 for recommending new-style classes, particularly as the OP's stack trace suggests they're using Python 2.6.

Is there an issue if the superclass does not have an __init__? And if so, does that mean the superclass cannot be a 'black box'?

No issue if the superclass has no __init__. It's pretty black-box-ish.

Good to know about the pitfalls. I've been pretty careful so far though and all my Python code calls super() all the way back up to object :)

You should explicitly call the superclass' __init__ function:

class Employee(Person):
    def __init__(self):
        Person.__init__(self)
        self.empnum = "abc123"

That worked - but why? Why would the inherited class not work the same as if you instantiated it as its own object?

Python will not automatically call the superclass' __init__ function when instantiating a subclass. This responsibility is left to the programmer. I don't know if there's a rationale behind this.

Employee has to explicitly invoke the parent's __init__ (not init):

class Employee(Person):
    def __init__(self):
        Person.__init__(self)
        self.empnum = 'abc123'

Instead of the super(class, instance) pattern, why not just use super(instance), as the class is always instance.__class__? Are there specific cases where it would not be instance.__class__?

wow. I think it erased part of my comment. It should be "instance.__class__" of course.

This is actually an excellent question. Please watch http://pyvideo.org/video/879/the-art-of-subclassing (TL;DW: no, it's not instance.__class__. "self" is not necessarily you.)
Using super().__init__() in a subclass's __init__ method calls the constructor of the parent class, ensuring proper initialization of inherited attributes and avoiding redundancy in the code.

class Employee(Person):
    def __init__(self):
        self.empnum = 'abc123'
        super().__init__()
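Putting the accepted advice together, here is a minimal Python 3 version of the original classes (where the zero-argument form of super() is available), showing that the inherited attribute now exists:

```python
class Person:
    AnotherName = 'Sue Ann'

    def __init__(self):
        self.FirstName = 'Tom'
        self.LastName = 'Sneed'

    def get_name(self):
        return self.FirstName + ' ' + self.LastName


class Employee(Person):
    def __init__(self):
        super().__init__()   # runs Person.__init__, so FirstName is set
        self.empnum = 'abc123'

    def get_emp(self):
        return self.FirstName + ' ' + self.empnum


y = Employee()
print(y.get_emp())   # Tom abc123 -- no AttributeError this time
```

Without the super().__init__() line, get_emp() would raise the same AttributeError shown in the question's traceback.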
In this article, we explain the necessary steps to change the hostname on Ubuntu 20.04 LTS. Before continuing with this tutorial, make sure you are logged in as a user with sudo privileges; all the commands in this tutorial should be run as a non-root user. A hostname is a name assigned to a "host", i.e. a computer on a network. The hostname is basically just your computer's name, used to identify your computer on the network. Please note that the hostname is set at the time of installation, either by the sysadmin or by a cloud service provider such as Linode, Ramnode, and many others. To change the Ubuntu computer name, you need root privileges (via sudo). Change Hostname on Ubuntu Step 1. First, before you start installing any package on your Ubuntu server, we always recommend making sure that all system packages are updated: sudo apt update sudo apt upgrade Step 2. Display Current Hostname. On an Ubuntu Linux 20.04 LTS server or desktop you can simply use the hostnamectl command to manage the hostname. To see the current setting, just type the following command: hostnamectl That should display something similar to the lines below: Static hostname: ubuntuq-2004 Icon name: computer-vm Chassis: vm Machine ID: e280aedec6a247jembeb4f85576bb Boot ID: b794a939b6264a5ea7bet18eae9c130d7 Virtualization: oracle Operating System: Ubuntu 20.04 LTS Kernel: Linux 5.8.0-26-generic Architecture: x86-64 Step 3. Change Hostname. Changing the system hostname is a simple process.
The syntax is as follows: sudo hostnamectl set-hostname host.linuxtips.us sudo hostnamectl set-hostname "Your Pretty HostName" --pretty sudo hostnamectl set-hostname host.linuxtips.us --static sudo hostnamectl set-hostname host.linuxtips.us --transient For example, to change the system static hostname to meilana.linuxtips.us, you would use the following command: sudo hostnamectl set-hostname meilana.linuxtips.us To change your hostname permanently, you'll also need to edit your /etc/hosts file, which is where Ubuntu distributions store the hostname: 127.0.0.1 localhost 127.0.0.1 meilana.linuxtips.us # The following lines are desirable for IPv6 capable hosts ::1 localhost ip6-localhost ip6-loopback ff02::1 ip6-allnodes ff02::2 ip6-allrouters If the cloud-init package is installed you also need to edit the cloud.cfg file. This package is usually installed by default in the images provided by cloud providers such as AWS, and it is used to handle the initialization of cloud instances: sudo nano /etc/cloud/cloud.cfg Search for preserve_hostname and change the value from false to true: # This will cause the set+update hostname module to not operate (if true) preserve_hostname: true Now, for the changes to take effect, reboot your computer with the following command: sudo reboot Once your computer boots, run the following command to verify that the hostname has changed: $ hostnamectl Static hostname: meilana.linuxtips.us Pretty hostname: sesion 15's desktop Icon name: computer-vm Chassis: vm Machine ID: a04e3543f3da460294926b7c41e87a0d Boot ID: aa31b274703440dfb622ef2bd84c52cb Virtualization: oracle Operating System: Ubuntu 20.04 LTS Kernel: Linux 5.8.0-26-generic Architecture: x86-64 That's all you need to do to change the hostname on Ubuntu 20.04 Focal Fossa. I hope you find this quick tip helpful. If you have questions or suggestions, feel free to leave a comment below.
Social media and non-verbal communication often leads to static and friction. Also: water is wet. A tactic I see used quite a bit on Twitter is that when a moron (me) posts something that rattles someone on a pedestal (lots of followers, can't help retweeting anyone praising them etc.), they will often screencap your tweet, and make a condescending comment about how ignorant I am (which is undoubtedly true). It's the same strategy the teacher would use to try to embarrass kids in school - "come to the board Morten, so we can show the WHOLE class how stupid you are" (my tactic was to then soil myself right then and there). On Twitter, what follows then, is a whole bunch of comments pointing out that, yes indeed, Morten is a fool, just look at his stupid comment. Not pleasant at all. Now I'm too old, tired, and I've learned that I can't convince the congregation to change their minds by pointing out how illogical and borderline insane their views are. They've heard pleas like mine a million times, and they've been conditioned to be wary and suspicious of people outside the tribe. I've been blocked by a few people too. When they block me, I do what every other small-minded and petty asshole does. I post a screenshot or comment, boasting that I, Morten, had the power to make an assistant push two buttons to shut me up. It fills me with a kind of hollow pride. If I was a larger person; a better mensch, more decent, and more reflective, I probably wouldn't be trolling random people on Twitter, and thus I wouldn't even be in a situation where I have long-winded arguments with strangers about things I have no influence over, and that won't influence me one bit. Perhaps I wouldn't post passive-aggressive, and veiled snarks that I know (secretly hope) will rile people up. I've worked remote for many years, and will often prefer writing an email to calling people on the phone.
The one thing I learned is that pointing out mistakes or errors, in writing, can upset people much more than intended. And that was all I learned btw. There's just something unsettling about seeing an error or a mistake pointed out in writing, and especially on the Internet where we expect that nothing ever gets deleted (it does - just not the juicy stuff). It's a hell of a lot worse if the disagreement is between two people who have vastly different clout. Most good leaders will refrain from pointing out problems in plenum (that's a cool academic word in Danish, but in English the word might be "public"), and instead have a quiet 1-on-1. And with good reason - it's the opposite approach of the sadistic teacher. It's damn near impossible to resolve a spat over email, and on social media it's probably harder. There are people I think are idiots, and people that are dangerous idiots. But you're not going to convince an idiot of anything. Hell, you don't even know if YOU are the idiot. So I guess that if you run an honest media company, then you have to point out flaws and issues IN PUBLIC, which then makes people angry. We don't need another outlet sponsored by the industry, with the sole purpose of regurgitating press releases full of self-praise and self-admiration. Not sure if there's a point in there somewhere. Have a nice day.
Does one need to sheafify when defining the inverse image of a sheaf with respect to an embedding? This seems to be a rather simple (stupid?:)) question; yet I was not able to find an answer quickly. For a morphism $f:X\to Y$ of schemes (or topological spaces) and an (étale or topological) sheaf $S$ (of abelian groups) on $Y$ one defines the inverse image $f^*(S)$ (or is $f^{-1}$ more standard?) as the sheafification of the inverse image of $S$ considered as a presheaf. Now suppose that $f$ is a (closed) embedding. Is the sheafification necessary here? It seems that the answer is no, since any covering of an open $U\subset X$ could be presented as a 'limit' of coverings of open $V\subset Y$, $V\supset f(U)$ (in the topological context); so the 'presheaf inverse image' of a sheaf is a sheaf also. This seems to be easy, as well as carrying this argument over to the étale context; yet I would be deeply grateful for a definite answer and a reference for it. Also, does the situation change when one passes to simplicial schemes and sheaves? Is this fact 'standard' (if it is true)? I would have thought that the answer would be "not necessary", rather, for an open embedding, no? (Or étale morphism?) It seems that you are not quite right. Certainly, everything is ok for an open embedding. My point is that one can present a closed subset as a 'limit' of open ones. On the other hand, let $f$ be the map from a (finite) collection of points with the discrete topology to a single point; certainly, such an $f$ is étale. Then the pullback of a constant presheaf (which is certainly a sheaf) is a constant presheaf; one certainly has to sheafify here in order to obtain a sheaf. Also, do you have a counterexample to my claim? Sorry, I missed the point. I don't have a counterexample, either. (You probably mean to write $V\subset Y$ above.)
In the topological case, you need some additional hypotheses, otherwise you get a counterexample from $Y=\{g,s,t\}$ with $U\subseteq Y$ open iff $U$ empty or $g\in U$, so $X=\{s,t\}$ is a closed subspace, but $\Gamma(X,f^{-1}\mathbf Z_Y)=\mathbf Z^2\ne\mathbf Z=\varinjlim_{V\supseteq X}\Gamma(V,\mathbf Z_Y)$. Yes, you are right! Actually, I was thinking about manifolds here.:) Perhaps I'm thinking of sheaves a bit differently from you here, but as long as we think of sheaves as étale maps this holds for arbitrary continuous maps $f:X\to Y$ and not just open or closed embeddings. The reason for this is that étale maps are stable under pullback along arbitrary continuous maps. See, e.g., Mac Lane and Moerdijk (Sheaves in Geometry and Logic), Lemma 1 (II.9, page 99). I guess that you know this though and are asking about the case where we are not regarding sheaves as étale maps. In this case you are right that some sheafification is required (since the construction of the inverse image along $f$ involves taking a colimit in $\text{Sh}(X)$). (I had thought that this was incorrect, but I had misunderstood precisely how you were claiming one should compute the inverse image in this case and so have removed my counterexample since it did not quite address your question.)
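Unpacking the counterexample above step by step (a sketch of the computation; nothing beyond what is already claimed in the comment):

```latex
% Opens of Y = \{g,s,t\}: \emptyset and every U with g \in U; X = \{s,t\} is closed.
% Any open V \supseteq X contains s, hence g, hence is forced to be all of Y, so
\varinjlim_{V\supseteq X}\Gamma(V,\mathbf{Z}_Y)=\Gamma(Y,\mathbf{Z}_Y)=\mathbf{Z}
% (Y is irreducible, since g lies in every nonempty open, so the constant
% presheaf is already a sheaf on Y and its global sections are \mathbf{Z}).
% But the subspace topology on X is discrete: \{s\}=\{g,s\}\cap X and \{t\}=\{g,t\}\cap X.
% The constant sheaf on a discrete two-point space has
\Gamma(X,f^{-1}\mathbf{Z}_Y)=\mathbf{Z}\times\mathbf{Z}=\mathbf{Z}^2\neq\mathbf{Z}.
```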
STACK_EXCHANGE
Extract derived data fields from Shapefiles I've uploaded shapefiles with professionally supplied survey data into QGIS and it all plots fine. When I export the attributes table into Excel I see that the XYZ values are all zeros. I know they can't be zero, as all the data points are plotting on my map. When I select each point and view the attributes table, sure enough, the XYZ fields are zero; however, I do see a tab labelled "Derived", which when selected does show all of the XYZ attributes that I require. Does anyone know how to extract these derived data fields from shapefiles so I can merge the values with the previously exported attributes Excel file and have all the survey information I need within a single file? Extra information added below on 13/03/19. The data I have is shapefiles extracted from Civil CAD that contain survey information of drainage pipes and pits. It has X, Y, Z and other comments. Everything appears in the attributes tables except the XYZ information, which can be viewed in the Derived tab but not the attribute table. I would be happy to show you screenshots if I could post them. The attributes table is produced in Excel by the following steps: 1. right click on the shapefile in the layers list 2. select Export - Save Feature As 3. then select Excel file from the list and follow the prompts. The result is a file with a copy of the attributes from the shapefile, but the upstream and downstream E and N coords are zero. The X, Y coords I found when I use Identify and right click, then select a pipe or pit from the displayed map. The Identify results show feature and value columns where the attributes are displayed, and also a Derived tab. Under the Derived tab, when expanded, there are the X and Y coords for the pit, or the start and end of the pipe, for whichever feature is selected. I'm trying to get the coords under the Derived tab extracted and shown in the attributes table as upstream or downstream E and N coords.
So far I've only been able to do this by hand, one at a time, but there must be a better way. The points are positioned by their geometry, not by their fields; you can calculate the X, Y and Z values using the field calculator https://docs.qgis.org/2.8/en/docs/user_manual/working_with_vector/field_calculator.html (X = $X, Y = $Y, and hopefully there's a Z to calculate with $Z, but that's not guaranteed: your shapefile could be 2D and not 3D). Be very careful with Excel and DBF files: if you save them it will break the shapefile. Best to copy the DBF somewhere else before opening it in Excel; then it doesn't matter if you accidentally save it. It could be that the CRS of the layer is not the same as the project's CRS. In the Attribute Table > Field Calculator, try updating your newly created attribute fields X and Y with the following expressions respectively: x(transform($geometry, 'EPSG:current', 'EPSG:target')) y(transform($geometry, 'EPSG:current', 'EPSG:target')) I found the code in this solution (although for another question) to do the trick for me. To get lon/lat coordinates (geographic coordinates in WGS84), use EPSG:4326 instead of 'EPSG:target' and replace 'EPSG:current' with the variable @layer_crs: this automatically identifies the layer's CRS, so you can avoid errors from selecting the wrong CRS. See https://gis.stackexchange.com/a/411284/88814
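Putting the pieces of that answer together, the two Field Calculator expressions might look like this (EPSG:4326 is used as an example target CRS; @layer_crs resolves the layer's own CRS automatically, as described above):

```
x(transform($geometry, @layer_crs, 'EPSG:4326'))
y(transform($geometry, @layer_crs, 'EPSG:4326'))
```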
STACK_EXCHANGE
Auto forward SMS to Telegram messenger group I'm a volunteer firefighter and I receive emergency calls and at the same time an "emergency SMS". I'd like to forward those incoming SMS to a group in Telegram/WhatsApp, so my family and friends know I went to an incident. I know there's the "Share" option in Android's SMS app. (Not using Hangouts for SMS.) But is it possible to forward all SMS from one contact automatically to a WhatsApp/Telegram group? Searching the Google Play Store I found "Autoshare" (com.joaomgcd.autoshare) but I'm not quite sure if this would do the trick, and after watching the video I was even less sure... IFTTT to the rescue! You can set IFTTT to forward an SMS from a specific number to a specific chat. Steps: Create your IFTTT account if you don't already have one. Start the @IFTTT Telegram bot and authorize your IFTTT account. Add the @IFTTT bot to your Telegram group as admin. Choose /connect_group from the bot commands, and follow the instructions. If your group isn't public, you can give it a public (meaningful) name just for the connection to IFTTT and then change it back to private. You'll see the group's public name and group title in IFTTT, so before connecting check that the name is understandable. Go to My Applets and choose New Applet. For the +this, choose Android SMS and then "New SMS received from phone number". Enter the phone number from which you receive the relevant texts. For the +that, choose Telegram and then "Send message". Configure the Telegram message to be sent; you'll have some options there (target chat, message text, web preview). The message text supports basic HTML and has some parameters from the SMS connection (such as sender, text, time etc.). I'm using it to automatically forward my post office messages to my Telegram. Important note: You said you're a firefighter, so I believe the messages are urgent.
You should do some tests and check the IFTTT FAQ or docs about whether it's immediate or has some delay, or what they guarantee. I just tried it and it took around half a minute, as it did in previous times. Although some time has passed and I simply forwarded the SMS manually until now, this seems pretty much what I initially was looking for! Thanks a lot! :) Better late than never :) maybe it will be useful to more people! The urgency issue in OP's case is not "how quickly are the messages forwarded", because the forwarding to friends and family (to let them know he's gone on a call) is very much secondary to letting him himself know about that call. What I'd want to verify is that no bug in the software (or its updates) ever interferes with the prompt display of the original message. This isn't so bad if the message is simply a copy of information already delivered via voice call, but if the voice call says "get in the vehicle now and I'll text you the destination in a sec" then any delay could be serious. A quick further question to this: this is exactly what I do right now to forward SMSes to a Telegram group. However, in order to keep my SMS inbox clean, is it also possible to then delete the SMS from my inbox (ONLY after it has been forwarded to the selected Telegram group)? @Deep not with IFTTT. The Android SMS service doesn't support a message-deleting action. I don't know why, but IFTTT doesn't work properly for me. I am getting tons of SMS and only a few are forwarded. Also, the delay was more than 8 hours. This IFTTT applet is not working at all, or it is no longer working. Today is 2021/10/15. @PhamX.Bach IFTTT changed their plans and applets, and so they became useless to me :/ Check out my app autoforwardsms.com. The Android app is built specifically for this purpose and your scenario was the inspiration behind the app's development. Hi Kerryn! Yes, this seems to be what I am looking for. What I'm not really clear about: Is it possible to forward those SMS to a messenger?
The main problem: Google Play tells me that the app is not available in my country. :/ Pentix: I'm not sure, to be honest, if it will forward to Messenger... a good question actually. But there is now a free 7-day trial version which you are of course welcome to try at no charge. What country are you in, and I'll see what's going on in the Play Store. Cheers, Kerryn autoforwardsms.com @Kerryn If you need to add information in response to another comment, you should [edit] your answer to do so. Posting a new answer just makes things confusing.
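For readers who outgrow IFTTT, the forwarding step itself is a single Telegram Bot API call (sendMessage). A minimal sketch in Python; the bot token and chat ID are placeholders, and the trigger mechanism (Tasker, an SMS-intercepting app, etc.) is assumed to call `forward_sms` with the incoming message:

```python
import urllib.parse
import urllib.request

def build_forward_payload(chat_id: str, sender: str, text: str) -> dict:
    """Build the sendMessage payload for the Telegram Bot API."""
    return {
        "chat_id": chat_id,
        "text": f"SMS from {sender}:\n{text}",
        "disable_web_page_preview": True,
    }

def forward_sms(bot_token: str, chat_id: str, sender: str, text: str) -> None:
    """POST the payload to the Bot API (token and chat_id are hypothetical)."""
    url = f"https://api.telegram.org/bot{bot_token}/sendMessage"
    data = urllib.parse.urlencode(build_forward_payload(chat_id, sender, text)).encode()
    urllib.request.urlopen(url, data=data)  # fire-and-forget; add error handling as needed
```

Note that this sidesteps IFTTT's queueing delays entirely, but it also means you are responsible for retries if the network is down when the SMS arrives.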
STACK_EXCHANGE
8.10 and the GeForce 6100 NVIDIA cave.dnb2m97pp at aliceadsl.fr Thu Nov 6 21:48:44 UTC 2008 On Thursday 06 November 2008 21:04, Karl Larsen wrote: > Bart Silverstrim wrote: > > Karl Larsen wrote: > >> Bart Silverstrim wrote: > >>> Karl Larsen wrote: > >>>> You do not want to believe the facts. The nVidia bash file works > >>>> on ALL Linux except 8.10 and this is due to the new kernel. I have > >>>> proved that. Since 8.10 needs this special kernel then it will keep my > >>>> computer from working. > >>> What's the kernel from the latest updates in Hardy? > >> title Ubuntu 8.04.1, kernel 2.6.24-21-generic > >> root (hd0,5) > >> kernel /boot/vmlinuz-2.6.24-21-generic > >> root=UUID=8713c541-dffa-4fd2-b22b-e600afacbab2 ro quiet splash > >> initrd /boot/initrd.img-2.6.24-21-generic > >> quiet > >> At least this is the last one I got from update's. > > Wonder why Hardy didn't get the latest kernel yet? No reason it > > shouldn't. > > On this Intrepid system: > > 2.6.27-7-generic > We, I hope we never get it :-) > But be interesting if getting the Intrepid kernel on Hardy should ruin > it for me with the nVidia GeForce 6100 video system. In fact I have the > LiveCD for Intrepid so I should be able to install that new kernel on my > Hardy. May when I feel real bored I will try it. How about installing the Hardy kernel that's ok on your hardy install onto your Intrepid install. You know that, that kernel was ok with your Nvidea (bugger. I've spelled that wrong) graphics card. I'm in France, but add the following line to your /etc/apt/sources.list. deb http://nl.archive.ubuntu.com/ubuntu/ hardy main restricted Having saved it, do an apt-get update, then open synaptic on the CLI. Scroll down on synaptic, and install the latest Hardy kernel, which in my case is: Next, and most important, go back into /etc/apt/sources.list, and comment out the Hardy line, by putting a # at the start of it. OK. 
I'm now presuming that the latest Hardy kernel is installed, so have a look in /boot. You should see an entry for vmlinuz-2.6.24-21-generic, and initrd.img-2.6.24-21-generic. So far so good? Now there is no saying if this kernel has been added to your /boot/grub/menu.lst, and if not, you will have to add it manually using the same layout as for your Intrepid kernel. If it is in your menu.lst, all you have to do is reboot and select it from Grub's menu, then you can do the stuff that you normally do to get your graphics card working. Of course you may have needed to install other packages along with the Hardy kernel, so as to install the Nvidia stuff. Kernel headers, for example. I don't know, but I'm sure you do. Just some suggestions.
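For reference, the two files touched in the steps above would look roughly like this (the mirror host and UUID are the ones quoted earlier in the thread, not prescriptive values):

```
# /etc/apt/sources.list -- temporary Hardy line (comment out with # after installing)
deb http://nl.archive.ubuntu.com/ubuntu/ hardy main restricted

# /boot/grub/menu.lst -- manual entry for the Hardy kernel, same layout as the Intrepid one
title   Ubuntu 8.04.1, kernel 2.6.24-21-generic
root    (hd0,5)
kernel  /boot/vmlinuz-2.6.24-21-generic root=UUID=8713c541-dffa-4fd2-b22b-e600afacbab2 ro quiet splash
initrd  /boot/initrd.img-2.6.24-21-generic
```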
OPCFW_CODE
EMC Storage Integrator for Windows Suite EMC Storage Integrator (ESI) for Windows Suite is a set of tools for Microsoft Windows and Microsoft applications administrators. The suite includes: ESI for Windows and ESI PowerShell Toolkit, ESI hypervisor support and system adapters, ESI SharePoint Adapter, ESI Exchange Integration, ESI SQL Server Adapter, ESI AppSync Adapter, ESI RecoverPoint Adapter, ESI Service and ESI Service PowerShell Toolkit, ESI SCOM Management Packs, and EMC Hyper-V VSS Requestor. ESI for Windows and ESI PowerShell Toolkit ESI for Windows has a GUI that is based on Microsoft Management Console (MMC). You can run ESI as a stand-alone tool or as part of an MMC snap-in on a Windows platform. ESI also provides storage provisioning and discovery with the ESI PowerShell Toolkit. Getting started with the ESI PowerShell toolkits provides installation and setup information. ESI for Windows enables you to view, provision, and manage block and file storage for Microsoft Windows, Exchange, SQL Server, and SharePoint sites. ESI supports the EMC Symmetrix VMAX series, EMC VNX series, EMC VNXe series, and EMC CLARiiON CX fourth generation (CX4) series of storage systems. ESI also supports EMC AppSync and Linux hosts. ESI requires that you install the corresponding adapters for specific system and application support. Prerequisites provides specific details about the prerequisites for storage systems and applicable adapters. The ESI PowerShell Toolkit provides ESI storage provisioning and discovery capabilities with corresponding PowerShell cmdlets. ESI hypervisor support In addition to physical environments, ESI also supports storage provisioning and discovery for Windows virtual machines (VMs) running on Microsoft Hyper-V, Citrix XenServer, and VMware vSphere. The Hyper-V Adapter is installed with ESI core installation and the Citrix XenServer and VMware vSphere Adapters are installed by default as part of the ESI installation. 
These adapters require no additional installation or setup. The storage options in ESI vary depending on what is supported on the hypervisor: For Hyper-V virtual machines, you can create virtual hard disks (VHD and VHDX files) and pass-through SCSI disks. You can also create host disks and cluster shared volumes. For VMware vSphere virtual machines, you can create virtual hard disks (VMDK files) and raw device mapping (RDM) disks. You can also create SCSI disks and view datastores. SCSI disks require the use of existing SCSI controllers. For XenServer virtual machines, you can create virtual hard disks (VHD files) and storage repositories. Before adding or removing virtual disks using ESI, the virtual machine running on the XenServer must have xs-tools installed. Otherwise, you must shut down the virtual machine before adding or removing virtual hard disks. Storage system adapters When you install ESI, you can select which adapters to install. ESI provides adapters for most supported storage systems, including VMAX, VNX, VNXe, VMware vSphere, Microsoft Hyper-V, and Citrix XenServer. Most of these adapters require no additional setup. However, the VMAX adapter requires some additional setup as described in Installing and setting up the VMAX Adapter. ESI also has adapters for Linux hosts and for monitoring the health of VPLEX systems in SCOM. The Linux Adapter overview and VPLEX Adapter overview have more details. Application and replication adapters ESI also provides the following application and replication adapters: The ESI SharePoint Adapter enables you to view and manage SharePoint storage, farms, sites, and content databases. The SharePoint Adapter overview has more details. SQL Server Adapter The ESI SQL Server Adapter enables you to view local and remote Microsoft SQL Server instances and databases and to map the databases to EMC storage.
ESI supports the SQL Server 2012 AlwaysOn feature, so you can view the primary SQL Server replica and up to four secondary replicas. You can use SQL Scripts to create and configure SQL Server databases from an ESI host. For SQL Server 2012, creating databases on file shares using SMB 3.0 and VNX storage systems is supported in this release. The SQL Server Adapter overview has more details. Microsoft Exchange Server is a mail server, calendaring software, and contact manager program that runs on Windows Server. ESI Exchange integration enables storage administrators to integrate Exchange with supported EMC storage systems. The ESI Exchange Integration includes the Microsoft Exchange Adapter, the ESI Exchange High Availability (HA) Extension, and the ESI Exchange SCOM Management Packs. You can connect and manage Exchange storage with an ESI host controller. The Exchange Integration overview has more details. AppSync Adapter (Replication) The ESI AppSync Adapter provides simple, self-service application protection with tiered protection options and proven recoverability. This adapter supports multiple SQL Server instances on the same host. The ESI AppSync Adapter overview has more details. RecoverPoint Adapter (Replication) The ESI RecoverPoint Adapter provides local and remote data protection. If a disaster occurs, RecoverPoint can recover lost data from any point in time. You can use this adapter to connect to existing RecoverPoint/SE or RecoverPoint/EX systems and manage and view replication service clusters for replication. The ESI RecoverPoint Adapter overview has more details. ESI Service and ESI Service PowerShell Toolkit The ESI Service is the communications link between ESI and the ESI SCOM Management Packs. The ESI Service can be used with or without the ESI SCOM Management Packs. You can use the ESI Service to view and report on all registered EMC storage systems and storage-system components connected to the ESI host system.
The ESI Service PowerShell Toolkit is also installed as part of the ESI Service. Use this toolkit to set up the ESI Service. ESI SCOM Management Packs The ESI SCOM Management Packs for Microsoft System Center Operations Manager (SCOM) enable you to manage EMC storage systems with SCOM by providing consolidated and simplified dashboard views of storage entities. The management packs support the same VMAX family, VNX, and CX4 series of storage systems that are supported in ESI. The ESI SCOM Management Packs also support EMC VPLEX systems and the Symmetrix DMX-4 storage system. The VPLEX Adapter overview and the ESI Service and ESI SCOM Management Packs overview have more details. EMC Hyper-V VSS Requestor EMC Hyper-V VSS Requestor is a backup utility that processes VSS requests to create point-in-time copies (shadow copies) of Microsoft Hyper-V virtual machines for the EMC VMAX and VNX series of storage systems. The EMC Hyper-V VSS Requestor Release Notes provide installation and setup instructions for this utility.
OPCFW_CODE
yum update fails: Error: Cannot retrieve repository metadata (repomd.xml) for repository … I'm using CentOS 6.3. When I try to update my system with yum I have this message: Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirrors.ircam.fr * centosplus: miroir.univ-paris13.fr * extras: mirrors.ircam.fr * update: centos.quelquesmots.fr http://mirror.centos.org/centos/6/addons/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404" Trying other mirror. Error: Cannot retrieve repository metadata (repomd.xml) for repository: addons. Please verify its path and try again yum clean all Loaded plugins: fastestmirror Cleaning repos: CactiEZ addons base centosplus extras pgdg93 update Cleaning up Everything Cleaning up list of fastest mirrors Loaded plugins: fastestmirror check all yum erase apf Loaded plugins: fastestmirror Setting up Remove Process No Match for argument: apf Determining fastest mirrors * base: centos.mirror.fr.planethoster.net * centosplus: centos.mirror.fr.planethoster.net * extras: mirrors.ircam.fr * update: centos.quelquesmots.fr CactiEZ | 2.9 kB 00:00 CactiEZ/primary_db | 13 kB 00:00 http://mirror.centos.org/centos/6/addons/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404" Trying other mirror. Error: Cannot retrieve repository metadata (repomd.xml) for repository: addons. Please verify its path and try again Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: centos.mirror.fr.planethoster.net * centosplus: centos.mirror.fr.planethoster.net * extras: mirrors.ircam.fr * update: centos.quelquesmots.fr http://mirror.centos.org/centos/6/addons/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404" Trying other mirror. Error: Cannot retrieve repository metadata (repomd.xml) for repository: addons. 
Please verify its path and try again [main] cachedir=/var/cache/yum/$basearch/$releasever keepcache=0 debuglevel=2 logfile=/var/log/yum.log exactarch=1 obsoletes=1 gpgcheck=1 plugins=1 installonly_limit=5 bugtracker_url=http://bugs.centos.org/set_project.php?project_id=16&ref=http://bugs.centos.org/bug_report_page.php?category=yum distroverpkg=centos-release Since yesterday you've asked so many questions about yum bugs, internet connectivity and path issues. Have you considered just reinstalling your system? It seems pretty f$$$ed up. You need to fix the internet connectivity first. Re-installing might be an option. Or try booting a live CD system. I solved the problem by deleting the "yum.repos.d" folder and recreating an example.repo file. Add repo details from here: http://www.linuxquestions.org/questions/linux-newbie-8/deleted-all-of-the-repos-in-yum-repos-d-how-to-restore-them-4175532866/ Your ca-bundles.crt are too old. One work-around until you upgrade to a newer version of CentOS would be to change the epel.repo from using https to http: sudo sed -i 's/https/http/g' /etc/yum.repos.d/epel.repo +1 Spot on. But you shouldn't need to *upgrade* CentOS to grab the latest cert bundle (because of CentOS's long-term support). `yum update ca-certificates` should do the trick (after disabling https for epel, or grabbing the rpm directly and updating using rpm). @kev Well, in theory your comment should work, but sadly for me when I run `yum update ca-certificates` I get **"No Packages marked for Update"**. In fact when I do a `yum list | grep ca-certificates` I get _ca-certificates.noarch 2010.63-3.el6_1.5_ as the latest version on my CentOS 6.4 box. On my CentOS 6.5 box I get _ca-certificates.noarch 2014.1.98-65.1.el6_. So it looks like you need to at least update to CentOS 6.5 to get the latest ca-certificates from CentOS, or manually get the rpm and install it. @Kev Rob nailed it. A lot of enterprise businesses are still on very old distros.
I am working on CentOS 4.5 for a client currently, and have to do a lot of things that normally shouldn't have to be done. @RobD It looks like OP is using http, and when I ran into the same issue I too was using http, not https. So I'm just curious: how would you deduce that SSL CA root certificates being outdated is the root cause? @DwightSpencer sorry, I'm not understanding your question; what is OP? I deduced that my SSL CA root certificates were the issue because on my old CentOS 6.2 box I couldn't update, but on my 6.6 box I could. Simply changing from https to http solved the issue. I more or less just narrowed it down to that. sudo sed -i "s/mirrorlist=https/mirrorlist=http/" /etc/yum.repos.d/epel.repo as explained here https://community.hpcloud.com/article/centos-63-instance-giving-cannot-retrieve-metalink-repository-epel-error Try this (has to be root): yum clean all yum check yum erase apf yum update ca-certificates yum upgrade Worked perfectly, and this is way cleaner than some other answers on this thread. I had to add a `yum update --disableplugin fastestmirror` pass in there because one of those steps (probably `yum clean`) removed info it needed to contact the mirrors. I told it "no" when it offered to do the upgrade, then did a plain `yum update` and it succeeded this time. I believe the first pass made it download a fresh mirror list from the main CentOS site, which let the second pass succeed. Type "http://mirror.centos.org/centos/6" in your browser and see: "addons" does not exist. yum --disablerepo=addons update Try doing the following: cd /etc/yum.repos.d mv dries.repo dries.repo.bak Or look for the file that has http://mirror.centos.org/centos/6/addons/x86_64/repodata/repomd.xml and move it. Then do it again. If you use 6.5: I don't know why, but the 6.5 directory doesn't exist in the official yum repository for CentOS.
All packages will return a 404 status code. If you try this: http://mirror.centos.org/centos/6.5/os/x86_64/Packages/php-pear-1.9.4-4.el6.noarch.rpm you will get a 404, but if you try the 6.6 version: http://mirror.centos.org/centos/6.6/os/x86_64/Packages/php-pear-1.9.4-4.el6.noarch.rpm it works. If you run "yum update" or "yum upgrade" without any other parameters, all packages on your system including yum will be upgraded, so there really is no need to upgrade yum on its own unless you are upgrading Fedora or CentOS versions. In my case, which is really exceptional, the location of the XML file which contains the repo information has changed. I have an Internet connection. When I run yum upgrade, after a lot of 404 errors, I can get the names of the packages I must download, but I cannot download them. And when I browse to the first 404 repo URL, I see it is absent. Going to its parent folder http://mirror.airenetworks.es/CentOS/7.4.1708/readme I get this: This directory (and version of CentOS) is deprecated. For normal users, you should use /7/ and not /7.4.1708/ in your path. Please see this FAQ concerning the CentOS release scheme. If you know what you are doing, and absolutely want to remain at the 7.4.1708 level, go to http://vault.centos.org/ for packages. Please keep in mind that 7.4.1708 no longer gets any updates, nor any security fixes. So, I have to go back to /etc/yum.repos.d to edit the files.
[base] name=CentOS-$releasever - Base mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra #baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/ gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 #released updates [updates] name=CentOS-$releasever - Updates mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra #baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/ gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 I suspect that $releasever is no longer in use, so I can test by changing it to 7: (remember to escape ...) we can get a list of files: ftp://ftp.cesca.cat/centos/7.5.1804/os/x86_64/ http://ftp.rediris.es/mirror/CentOS/7.5.1804/os/x86_64/ http://ftp.cica.es/CentOS/7.5.1804/os/x86_64/ http://centos.mirror.minorisa.net/7.5.1804/os/x86_64/ http://repo.nixval.com/CentOS/7.5.1804/os/x86_64/ http://centos.uvigo.es/7.5.1804/os/x86_64/ http://ftp.uma.es/mirror/CentOS/7.5.1804/os/x86_64/ http://ftp.cixug.es/CentOS/7.5.1804/os/x86_64/ http://mirror.airenetworks.es/CentOS/7.5.1804/os/x86_64/ http://mirror.gadix.com/centos/7.5.1804/os/x86_64/ So, we can set the variable like this: - Open the ... - In the ... yum install xxx. - I have an Internet connection ...
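Since the mirror's readme above points end-of-life point releases at vault.centos.org, one workaround (a sketch; the release number shown is an example, and EOL trees get no updates or security fixes) is a repo stanza pinned to the vault instead of the live mirrors:

```
# /etc/yum.repos.d/CentOS-Vault.repo (sketch for an EOL point release)
[C7.4.1708-base]
name=CentOS-7.4.1708 - Base
baseurl=http://vault.centos.org/7.4.1708/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=1
```

Disable or comment out the corresponding [base] and [updates] stanzas in the regular repo files so yum does not keep hitting the deprecated mirror paths.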
OPCFW_CODE
What's wrong in this Perl code? I'm trying to run a set of commands in the system command prompt using Perl. Here is the code #!/usr/local/bin/perl -w use strict; print_prompt(); sub print_prompt { print "What's your name?"; system("G:\"); system("cd Documents and Settings/Administrator/eworkspace/Sample"); print `ant`; } But this is throwing me the following error Bareword found where operator expected at execute.pl line 11, near "system("cd" (Might be a runaway multi-line "" string starting on line 10) String found where operator expected at execute.pl line 11, at end of line (Missing semicolon on previous line?) syntax error at execute.pl line 11, near "system("cd Documents " Can't find string terminator '"' anywhere before EOF at execute.pl line 11. How do I resolve this? What is possibly wrong in this code? Do I need to indicate the white spaces? Come on. Just look at the error message: There probably is a “runaway multiline ""-string starting at line 10”. Look at the string on line 10. Ponder why it could span multiple lines. Even the syntax highlighting of your post is telling you! You should look up escape characters. These two lines: system("G:\"); system("cd Documents and Settings/Administrator/eworkspace/Sample"); are broken in a couple of ways. Firstly, the top one is broken in the way that other people have described before me. The \ escapes the " so that it doesn't close the quoted string and the syntax of the rest of your file becomes broken. But secondly, both of these lines are broken in a deeper way. They don't do what you think. Actually they both, effectively, do nothing. The system command invokes a new shell environment in which to run the command. The new environment inherits values from the parent environment (the one that is running your code). These values include the current directory. You then change the current directory in the new child environment.
But when the system command finishes (which happens immediately), your new environment is destroyed. Your program continues to run in the original environment with the original current directory. You should probably look at Perl's built-in chdir function. The problem is here: system("G:\"); This isn't a sensible command. The backslash is escaping the ", so the string actually is "G:\"); system(" or qq{G:");\nsystem(} with an alternate delimiter. After a string must come some form of operator, but cd isn't one. The solution: never use backslashes as path separators, they only cause problems. And remove the weird G:\ command; what is it even supposed to do? To include a literal backslash in a string, you have to escape it: \\.
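The "child environment" point generalizes beyond Perl. The same pitfall, sketched here in Python (`subprocess.run(..., shell=True)` plays the role of Perl's `system`: the `cd` happens in a throwaway child shell, and `os.chdir` is the analogue of Perl's built-in `chdir`):

```python
import os
import subprocess
import tempfile

before = os.getcwd()
target = tempfile.gettempdir()  # stand-in for the directory you want to enter

# The cd runs in a child shell that exits immediately; the parent is unaffected.
subprocess.run(f'cd "{target}"', shell=True, check=True)
assert os.getcwd() == before  # still where we started

# To actually change directory, change it in *this* process:
os.chdir(target)
```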
STACK_EXCHANGE
Have you ever tried printing the contents of a directory or a file list you have stored in a directory? It seems impossible, as there is no inbuilt feature for doing that... The next question is why you would want to print a directory listing? Well, you might have lots of MP3s stored there and want to upload the list to the internet so that your friends can ask for what they need. Or you might want a hard copy of the files stored in the directory, and the most commonly used reason is that you want to create a CD label with a list of the files stored on it... OK, so now there are 2-3 different methods to do the same; we will discuss all of them here. Method 1 – Printing from the DOS command prompt. To print using the DOS command prompt, please follow these steps: Start a command prompt (Run -> CMD), change to the directory for which you want to print the file listing, and then type the following command: dir > print.txt This will create a directory listing and store the output in a text file called print.txt. Once you have created the file you can open it in any text editor and print the contents, or alternatively you can print directly from the command prompt by using this command – When your print is complete just remove the file. Method 2 – Add a menu option in Explorer for printing a directory listing. If you frequently print listings of files you can add an option to Explorer's context menu for printing the file list stored in a folder. To do so, follow these steps: 1. Create a batch file called Printdir.bat. To do so, open Notepad or another text editor and type (or cut and paste) the following text: dir %1 /-p /o:gn > "%temp%\Listing" start /w notepad /p "%temp%\Listing" 2. Now, save this file as "Printdir.bat" (without the quotation marks). Move the file to the Windows directory. To open the Windows directory simply type %windir% in the Run prompt. 3. Now go to Control Panel and open Folder Options. 4. Select the File Types tab, and after that select File Folder. 5. Click on the Advanced button and then click the New button. 6.
In the Action box, type "Print File List" (or whatever name you want to give it). 7. In the "Application used to perform action" box, type "Printdir.bat" (without the quotation marks). 8. Click OK to close all open dialog boxes. 9. Now open the Registry Editor and navigate to HKEY_CLASSES_ROOT\Directory\shell. 10. Right click on "default" and select Modify. In the Value data box, type "none" (without the quotation marks). 11. Click on the OK button and close the Registry Editor. These steps will add a "Print File List" entry to the right-click context menu of Explorer. Whenever you right click on any folder and select Print File List, a file list will be created in the temp directory and printed to your default printer. Also, this method does not work on Windows Vista; there are some changes if you are using Windows Vista. Please follow these steps in case you are a Vista user: 1. Create the printdir.bat file and move it to the Windows directory as explained above (steps 1 & 2). 2. Now you have to edit the registry instead of adding it in Folder Options. To do so, open the Registry Editor and navigate to HKEY_CLASSES_ROOT\Directory\shell. 3. On the menu, click New, and then click Key. Type Print File List, and then press ENTER. 4. Select Print File List, click New on the Edit menu, and then click Key; enter "command" for the name of the new subkey, and then press ENTER. 5. Double-click the default entry that is listed in the "command" subkey. In the Value data box, type Printdir.bat. 6. Click OK and then exit the Registry Editor. And you are done; it will add a context menu entry in Windows Vista. We hope that you liked this tip; if yes, leave a comment or bookmark us. You can also ask any question using the comment form.
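As a cross-platform alternative to the `dir > print.txt` redirect above, a small Python script produces the same kind of listing file (the script name, output filename, and sample invocation are illustrative):

```python
import os
import sys

def write_listing(directory: str, out_file: str) -> int:
    """Write a sorted listing of `directory` to `out_file`; return the entry count."""
    entries = sorted(os.listdir(directory))
    with open(out_file, "w", encoding="utf-8") as fh:
        for name in entries:
            full = os.path.join(directory, name)
            # Mirror what `dir` shows: mark folders, give files a size.
            kind = "<DIR>" if os.path.isdir(full) else f"{os.path.getsize(full)} bytes"
            fh.write(f"{name}\t{kind}\n")
    return len(entries)

if __name__ == "__main__":
    # e.g.  python listdir.py "C:\Music"
    target = sys.argv[1] if len(sys.argv) > 1 else "."
    count = write_listing(target, "print.txt")
    print(f"Wrote {count} entries to print.txt")
```

Once print.txt exists, you can open and print it from any text editor, exactly as with the DOS method.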
OPCFW_CODE