Protein-coding sequences can arise either from duplication and divergence of existing sequences, or de novo from noncoding DNA. Unfortunately, recently evolved de novo genes can be hard to distinguish from false positives, making their study difficult. Here, we study a more tractable version of the process of conversion of noncoding sequence into coding: the co-option of short segments of noncoding sequence into the C-termini of existing proteins via the loss of a stop codon. Because we study recent additions to potentially old genes, we are able to apply a variety of stringent quality filters to our annotations of what is a true protein-coding gene, discarding the putative proteins of unknown function that are typical of recent fully de novo genes. We identify 54 examples of C-terminal extensions in Saccharomyces and 28 in Drosophila, all of them recent enough to still be polymorphic. We find one putative gene fusion that turns out, on close inspection, to be the product of replicated assembly errors, further highlighting the issue of false positives in the study of rare events. Four of the Saccharomyces C-terminal extensions (to ADH1, ARP8, TPM2, and PIS1) that survived our quality filters are predicted to lead to significant modification of a protein domain structure.

The existence of complex (multiple-step) genetic adaptations that are irreducible (i.e., all partial combinations are less fit than the original genotype) is one of the longest standing problems in evolutionary biology. In standard genetics parlance, these adaptations require the crossing of a wide adaptive valley of deleterious intermediate stages. Here, we demonstrate, using a simple model, that evolution can cross wide valleys to produce irreducibly complex adaptations by making use of previously cryptic mutations. When revealed by an evolutionary capacitor, previously cryptic mutants have higher initial frequencies than do new mutations, bringing them closer to a valley-crossing saddle in allele frequency space. Moreover, simple combinatorics implies an enormous number of candidate combinations exist within available cryptic genetic variation. We model the dynamics of crossing of a wide adaptive valley after a capacitance event using both numerical simulations and analytical approximations. Although individual valley crossing events become less likely as valleys widen, by taking the combinatorics of genotype space into account, we see that revealing cryptic variation can cause the frequent evolution of complex adaptations.

Population genetics is often taught in introductory biology classes, starting with the Hardy-Weinberg principle (HWP) and genetic drift. Here I argue that teaching these two topics first aligns neither with current expert knowledge, nor with good pedagogy. Student difficulties with mathematics in general, and probability in particular, make population genetics difficult to teach and learn. I recommend an alternative, historically inspired ordering of population genetics topics, based on progressively increasing mathematical difficulty. This progression can facilitate just-in-time math instruction. This alternative ordering includes, but does not privilege, the HWP and genetic drift. Stochastic events whose consequences are felt within a single generation, and the deterministic accumulation of the effects of selection across multiple generations, are both taught before tackling the stochastic accumulation of the effects of accidents of sampling.
PMID: 15911577; PMCID: PMC1451192. Abstract: Evolutionary capacitors phenotypically reveal a stock of cryptic genetic variation in a reversible fashion. The sudden and reversible revelation of a range of variation is fundamentally different from the gradual introduction of variation by mutation. Here I study the invasion dynamics of modifiers of revelation. A modifier with the optimal rate of revelation m_opt has a higher probability of invading any other population than of being counterinvaded. m_opt varies with the population size N and the rate θ at which environmental change makes revelation adaptive. For small populations below a minimum cutoff N_min, all revelation is selected against. N_min is typically quite small and increases only weakly, as θ^(-1/2). For large populations with N > 1/θ, m_opt is ~1/N. Selection for the optimum is highly effective and increases in effectiveness with larger N ≫ 1/θ. For intermediate values of N, m_opt is typically a little less than θ and is only weakly favored over less frequent revelation. The model is analogous to a two-locus model for the evolution of a mutator allele. It is a fully stochastic model and so is able to show that selection for revelation can be strong enough to overcome random drift.
OPCFW_CODE
The crypto community has been wrestling with the realities of collusion and bribery in decentralized governance. Evidence has arisen of EOS block producers swapping votes, and widespread collusion has cropped up in other blockchain systems such as Lisk. Staked voting systems offer an ostensible solution to this issue, where the weight of a vote is proportional to the amount of stake locked up with the vote. This system removes incentives to bribe, but only because it becomes cheaper to directly influence elections with money. Whales, crypto heavy-hitters who own lots of funds, can quite literally tip elections in their favor by using their wealth. A recent analysis of votes on Aragon demonstrates a notable case where a single whale tipped multiple votes against what appeared to be broader community consensus. For blockchain systems to scale to societal use cases, we need to build infrastructure that is resistant to bribery schemes and explicit plutocracy. In particular, by building a mechanism for robust decentralized identities, with one account provably matched to one human, we can recover many of the properties of existing voting systems in modern democracies. In this letter I make a simple proposal for an identity system on top of Ethereum.

How do we get started with designing such an identity system? Vitalik (the creator of Ethereum) has suggested that networks of social trust could serve as identity. This is roughly how governmental agencies verify identity, for example. The California DMV (Department of Motor Vehicles) requires bank or credit card statements to verify your identity as a resident of the state. The government believes that attestations by established companies can verify your identity. (It's an ironic reflection of America today that "corporate citizens" are needed to attest for human citizens.) Returning to Ethereum, we don't yet have long-established institutions on Ethereum, like banks or credit card companies, that can verify your identity. (Although this is starting to slowly change with decentralized finance projects, now that there is loan history for some accounts on MakerDAO.) Nevertheless, we can use Ethereum's existing infrastructure to bootstrap an identity network.

Let's start by reviewing how standard staked voting systems on Ethereum work today. Say I hold an Ethereum address with funds, 10 ETH to be precise. In a staked voting system, my vote is proportional to the amount of funds I put at stake on the vote. So if I staked all 10 ETH, my vote would be worth 10 times that of someone who had only 1 ETH to stake. This creates the rule of the wealthy we discussed above, where one wealthy party can easily tip elections in their favor. There's a second danger in tying identity to accounts with funds. If a network requires votes from accounts with funds, it becomes easier to identify who's behind those accounts, since analysts can gauge the proclivities of the true owner from their voting record. Chain analysis tools might be able to identify the voter by matching against real-world data or other on-chain records. Remember, in crypto, revealing that you hold large amounts of funds is a serious security hazard. There have been attacks on people to steal the private keys controlling their funds. Ideally a voting system shouldn't be tied to proof of wealth in this way. We instead posit the baseline principle that one human should have only one vote.
The challenge now becomes how we can bootstrap a system with one-human/one-vote on-chain and render it robust to bribery attacks. To bootstrap the system, let's use a simple system of trusted attestation rather than purchase of stake. For me to gain an identity, I need to be attested to by 5 trustworthy individuals. Once I have an identity, I gain the ability to attest for others. To keep the system stable, it is important to aim for slow and steady growth, so let's say that an individual may only attest for 1 other individual per week. Each identity is given one and only one vote in the system. If the core network is grown out carefully, identities could remain sound and not be rapidly infiltrated by bots.

The major danger in this type of system is Sybil attacks. Can I make 50 puppet accounts and place them under my control? If I get 5 friends, we could form a trust cartel that prints new identities and tips votes in our favor. We need a mechanism to prevent this. For confirmation, let's have the system require that a new identity with 5 attestations is voted on by the set of all existing identities. The twist here is that if the vote fails, the 5 attesters lose their attestation powers for a set period of time, say a month. Importantly, attesters don't lose their franchise: all individuals in the system have earned the right to vote. However, by improperly attesting, they have shown that their judgment is in question, so they are barred from attesting to the identity of newcomers. Honest mistakes will happen, so the punishment here is kept intentionally mild.

This system offers some safeguards, but voters are lazy and may not have an incentive to thoroughly scrutinize newcomers. It's all too easy to see this system becoming a rubber-stamp mechanism. To control for this possibility, we could add a bet mechanism, where voters who voted wrongly in the election are penalized. This punishment would be mild (loss of attestation privileges for a day, perhaps) but might prevent rubber-stamping from proliferating. In addition, to lower the chance of puppet accounts proliferating, any identity holder is allowed to accuse another of being a bot. A vote is held to adjudicate this accusation. If the vote passes, the challenged account loses its identity and franchise, and the attesters for this account once again lose their attestation privileges for a month. The accuser has shown energy and trust, so they gain the ability to attest to 2 individuals for the coming month. This reward does not stack; no individual gains the ability to attest to 3 or 4 or more others, and the privilege is time-boxed. This privilege is primarily a mark of trust from the system that serves to provide social signalling value, not any formal authority.

There are a few issues here. For one, a known issue with all voting systems is apathy and low voter turnout. For this reason, the system enforces that all individuals must vote within a period of time denoted one epoch (say 6 months). Individuals who fail to vote have their identities revoked and must re-enter the system by gaining attestations once again. Identity brings with it responsibilities and obligations: citizens (identity holders) must exercise their rights to retain their franchise. This requirement brings our voting system more in line with democracies that require compulsory voting, such as Australia. This scheme is somewhat similar to existing proof-of-stake systems, but the critical difference is that there is no token.
There is no "identity coin" that can be held as funds. This design is deliberate, since there should be no easy mechanism for the wealthy to subvert the system. By design, the only rewards in the system are increases and decreases in trust. These rewards are strictly time-boxed and limited, and should only serve as an auditable social signal for informing future voters. In addition, these addresses can be chosen so they hold no funds, which lowers the incentive to violently steal the private keys that control a particular address. It's worth noting that our current system doesn't solve the same problem as recent proposals for anti-collusion infrastructure. Such systems have mechanisms that make bribery challenging by allowing users to secretly invalidate their known keys and swap to new keys. This means that a bribed individual can always cheat the briber by invalidating their key before they vote. At present, though, that scheme depends on the presence of a trusted aggregator who gathers up all key operations and produces a SNARK attesting to their work at the end. It appears that the single trusted operator could be swapped for an m-of-n multiparty scheme for some additional robustness. A collusion-resistance scheme like this could be layered on top of the robust decentralized identity scheme proposed in this note. Composing these primitives could lead to a more robust voting mechanism and would be an interesting topic for future research. The smart contracts for the design proposed here will take some care to get right, but in principle should be feasible to build. If the network is bootstrapped carefully so that the chain of trust isn't violated, this identity system could serve as a resource for the entire community.
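To make the bookkeeping concrete, here is a minimal off-chain sketch in Python of the admission and penalty rules described above. All the names (Registry, Member, admit) are hypothetical illustrations, not any deployed contract; a production version would live in an Ethereum smart contract.

    from dataclasses import dataclass

    WEEK = 7 * 24 * 3600
    MONTH = 30 * 24 * 3600

    @dataclass
    class Member:
        address: str
        attest_locked_until: float = 0.0  # set after backing a failed admission
        last_attested_at: float = -WEEK   # rate limit: one attestation per week

    class Registry:
        """Toy off-chain model of the admission and penalty rules above."""

        def __init__(self, genesis_addresses):
            self.members = {a: Member(a) for a in genesis_addresses}

        def can_attest(self, addr, now):
            m = self.members.get(addr)
            return (m is not None
                    and now >= m.attest_locked_until
                    and now - m.last_attested_at >= WEEK)

        def admit(self, candidate, attesters, votes_for, votes_against, now):
            """Five attestations plus a confirmation vote by all identities."""
            assert len(attesters) == 5, "exactly five attesters required"
            assert all(self.can_attest(a, now) for a in attesters)
            for a in attesters:
                self.members[a].last_attested_at = now
            if votes_for > votes_against:
                self.members[candidate] = Member(candidate)
                return True
            # Failed vote: attesters keep their franchise but lose the
            # power to attest for a month.
            for a in attesters:
                self.members[a].attest_locked_until = now + MONTH
            return False

Note the design choice the sketch encodes: a failed admission vote suspends the attesters' power to attest, but never touches their franchise.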
OPCFW_CODE
Wireless networks are among the most popular and widely used networks in today's networking world. Securing a wireless network is one of the greatest challenges the networking field has faced, past and present. Wireless networks are more vulnerable than other networks because they can be accessed from anywhere inside the wireless range once a user obtains the authentication information, or hacks it. In 1997 a wireless security encryption system known as Wired Equivalent Privacy (WEP) was introduced to provide confidentiality and integrity of data. Later, in 2003, the Wi-Fi Alliance introduced an advanced certification program, Wi-Fi Protected Access (WPA), which implements most of the IEEE 802.11i standard; this conquered the severe problems faced by WEP. In 2004 the Wi-Fi Alliance developed a more advanced and sophisticated version of the certification program, WPA2, which provides much more secure data encryption and authentication than WPA. WPA and WPA2 have proved to be successful protocols for many years.

A brief history of WEP
WEP was the first encryption protocol introduced in the IEEE 802.11 standard (1997). It relies on the RC4 encryption algorithm. It faced many problems and proved vulnerable to many attacks: the RC4 weakness (it is insecure at any key size), CRC bit-flipping attacks, FMS attacks, and KoreK attacks. Finally, in 2004, WEP was deprecated and a new encryption protocol was introduced that overcame all the attacks it had faced.

Wi-Fi Protected Access (WPA)
In 2003 the Wi-Fi Alliance developed a certification program that follows the security protocol and implements the bulk of the IEEE 802.11i standard. This was developed in order to overcome the security issues faced by WEP. WPA uses an encryption system based on the Temporal Key Integrity Protocol (TKIP) with a Message Integrity Check (MIC), and it uses the Extensible Authentication Protocol (EAP) authentication mechanism. It also supports pre-shared key technology for authentication. It is based on the IEEE 802.11i standard, and with its interoperable service it greatly increases the data protection and access control levels in Wi-Fi systems. Unlike WEP, it changes the effective key very frequently, making WPA more secure. WPA is widely used because, since September 2003, all new 802.11b and 802.11g hardware tested for Wi-Fi certification must implement it. WPA was designed by well-known cryptographers, who argued that it conquers many of the known attacks faced by WEP. In 2004 the Wi-Fi Alliance developed a second-generation implementation of WPA, WPA2, which is much more advanced and sophisticated. It uses a new encryption technology, the Advanced Encryption Standard (AES), which is a more advanced encryption system than WPA's. Like WPA, WPA2 uses the Extensible Authentication Protocol for authentication, which is well secured. This proved to be more secure than any other security program. In 2006, all 802.11b and 802.11g hardware with Wi-Fi certification was required to have WPA2 implemented.

TYPES OF MODES
Enterprise mode is designed for enterprise security and operates in a managed fashion. It uses the IEEE 802.1x authentication framework, which uses EAP with an authentication server. This mode is thus well suited to securing a large system, providing mutual authentication between the authentication server and the client through the access point.
In this mode each user is assigned a unique key to access the network, providing individual authentication. For WPA this mode uses TKIP encryption, in which the cipher assigns an encryption key for every data packet communicated in each session; WPA2 uses AES encryption. Personal mode is designed for small business and home networks where no authentication server is used; unlike enterprise mode, which uses IEEE 802.1x, it operates unmanaged and uses a pre-shared key (PSK) for authentication. Because the PSK is shared among users, its strength should be high. Personal mode uses TKIP encryption for WPA and AES for WPA2, like enterprise mode.

AUTHENTICATION FOR WPA AND WPA2
The authentication process is handled by the IEEE 802.1x framework, or EAP. Mutual authentication is initiated in WPA enterprise and WPA2 when a user communicates with an access point (AP). The user gets access to the network only when authenticated through the access point. The authentication server receives the credentials provided by the user. Mutual authentication protects the user from connecting to rogue APs by assuring both the authorized user and the client that the communication is entitled between them. The client enters the WLAN only when the authentication server accepts the user's credentials; if it does not accept them, the client is blocked from entering the WLAN. A Pairwise Master Key (PMK) is generated when the user authenticates. A four-way handshake then takes place between the client and the AP, and TKIP or AES is installed and established for WPA and WPA2 respectively, completing the authentication process between the client and the access point. Various authentication types are used in WPA and WPA2; Table 2 shows the authentication types used with RADIUS servers and PSK. WPA and WPA2 use the same authentication mechanism, so both encryption types can be used simultaneously in the same network. The description of the authentication types used by both is shown in Table 2. WPA and WPA2 authentication is better secured than any other encryption system.

WPA Encryption Using TKIP
WPA overcomes WEP's encryption problems by using a strong encryption system provided by the Temporal Key Integrity Protocol (TKIP). It replaces the small static 40-bit encryption key, which was entered manually on client devices and access points, with a per-packet 128-bit key. Unlike WEP, WPA generates keys dynamically, which thwarts intruders who rely on predicting the key. It operates at the MAC layer. Once the user's credentials are authenticated, the authentication server has 802.1x generate a unique master (pairwise) key for that session. TKIP maintains the key hierarchy and key management system, distributing the key to the access point and the client. During a session, every data packet communicated is assigned a unique key generated by TKIP. Doing this yields around 280 trillion possible keys for a given data packet, which is practically impossible to trace back. TKIP also uses the Message Integrity Check (MIC), a mathematical function that both the transmitter and the receiver compute and compare. If the MICs do not match, the packet is dropped on the assumption that it has been tampered with.
Thus it protects data packets from being captured, altered, and resent by an attacker.

WPA2 Encryption Using AES
AES is a block cipher, a type of symmetric cipher in which the same key is used for encryption and decryption. AES encrypts bits in blocks of plaintext, computing each block separately instead of applying a single key over a plaintext input data stream. WPA2 uses 128-bit AES. AES carries out four stages that make up one round, and the rounds are iterated several times; for WPA2's 128-bit keys, ten rounds are performed. The Counter Mode/CBC-MAC Protocol (CCMP) is the AES mode of operation used in WPA2. For a block cipher that uses the same key for both encryption and decryption, CCMP is a new mode of operation. CCMP combines two modes: Counter Mode (CTR) and the Cipher Block Chaining Message Authentication Code (CBC-MAC) mode. Data encryption in CCMP is done in CTR mode, and data integrity is provided by CBC-MAC. As a result, during encryption CCMP generates an authentication component using CBC-MAC. This differs from WPA's encryption, where a separate algorithm is required for the integrity check; here the integrity check is done by CCMP through CBC-MAC by default. On top of this, AES uses a 48-bit initialization vector (IV), further strengthening its encryption system. AES is said to be an extremely powerful encryption system, as it would take more than a trillion operations to break its key; it is regarded as a highly secure cryptographic algorithm. The WPA2 encryption system using AES is more powerful than the WPA encryption system using TKIP.

WPA and WPA2 overcome all known vulnerabilities faced by WEP, greatly improving access control and data protection. They are very strong standards-based protections with an interoperable solution for wireless networks, and they provide the enormous benefits of a secure Wi-Fi network. They are designed to work with all kinds of adapters; from September 2006, all IEEE 802.11b and 802.11g devices with Wi-Fi certification must have WPA or WPA2 implemented, so they are fairly widely used. Thus the WPA and WPA2 encryption types prove to be the best available.
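As an aside, CCMP's pairing of CTR-mode encryption with a CBC-MAC tag is the AES-CCM construction, and its encrypt-and-authenticate behaviour can be sketched with Python's cryptography library. This is a generic AES-CCM demo rather than the 802.11i packet format; the 13-byte nonce and 8-byte tag merely mirror CCMP's parameters:

    import os
    from cryptography.exceptions import InvalidTag
    from cryptography.hazmat.primitives.ciphers.aead import AESCCM

    key = AESCCM.generate_key(bit_length=128)   # WPA2 uses 128-bit AES
    aead = AESCCM(key, tag_length=8)            # CCMP uses an 8-byte MIC

    nonce = os.urandom(13)                      # CCMP nonces are 13 bytes
    header = b"frame header"                    # authenticated, not encrypted
    payload = b"secret frame body"

    ct = aead.encrypt(nonce, payload, header)   # ciphertext plus 8-byte tag
    assert aead.decrypt(nonce, ct, header) == payload

    # Tampering with the ciphertext or header breaks the CBC-MAC check:
    try:
        aead.decrypt(nonce, ct, b"forged header")
    except InvalidTag:
        print("integrity check failed; the packet would be discarded")

The single pass that yields both ciphertext and tag is exactly why WPA2 needs no separate integrity algorithm the way TKIP's MIC does.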
OPCFW_CODE
I'm curious: given two similar hardware configurations for an index cluster, is there a point at which more index nodes will make up for a slower disk subsystem? For example, if I had to decide between designing a cluster with 7200 RPM drives or with 15k RPM drives, would there be a point where fewer cluster nodes with 15k drives would equal a cluster with more nodes at 7200? Issues of MTBF aside, it gets more interesting when the servers in question are using SSDs. On the low end there are 3 Gbps SATA SSDs at a reasonable price, and then 6 Gbps SAS drives at about two to four times the price. When you start talking about 10 drives per server, the costs add up quickly. Of course, there is always the concern that the RAID controller can't even keep up with the slower SATA SSDs, or whether the SATA architecture is as well suited to this type of I/O as SAS. So, is it worth considering fewer nodes with more reliable, faster, more expensive drives vs. more nodes with less expensive drives? In the end, the dollar cost would be the same, although total storage would be less with fewer machines.

In our experience with storage, we started with 10-16x 15K RPM SAS drives in our indexers, which, depending on your indexing volume, worked pretty well. However, we moved our clustered indexers' hot DBs over to storage on our SSD SAN arrays, where we saw a pretty significant increase in performance (2x-5x). It seems to come down to how Splunk searches across the buckets: it generates random I/O against the disks while looking through the indexes. With SSD storage you start to see big improvements, thanks to the sub-millisecond latency and high random I/O that SSDs can deliver, since data is returned much more quickly, even with a high write rate to the indexes and a lot of searches going on in the system. One other option for you would be to put a couple of SSDs into the system to handle your hot buckets, then some larger 10K RPM drives to handle your cold buckets. Just make sure you have enough SSD-based storage to handle the majority of your searches, so that you don't dip into the cold buckets on the slower disks too often (this is how we are configured).

Yes, to some extent more nodes can make up for slower hardware. For many or most types of queries, Splunk will scale close to linearly with the number of nodes, especially for longer queries. But there is always going to be some overhead, and there is going to be some increase in latency. The answer as to whether it's okay depends entirely on the particulars: exactly how many nodes, how much slower the disks are, what the cost difference is, what you're querying, what your tolerance and threshold for slowness is, whether total time or latency is more important, etc. But certainly it can be considered. The default recommendations are based on "typical" usage with recent commodity hardware at "typical" relative market costs. But different people are in different situations on all these factors, so those are merely rules of thumb.
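To make that concrete, here is a rough back-of-envelope model in Python. All the numbers are illustrative assumptions, not benchmarks of any drive or of Splunk itself:

    # Toy model: search time ~ scanned bytes / (nodes * per-node scan rate),
    # plus a fixed per-search overhead that grows slightly with node count.

    def search_seconds(scanned_gb, nodes, node_scan_mb_s, overhead_s=2.0):
        scan = scanned_gb * 1024.0 / (nodes * node_scan_mb_s)
        coordination = overhead_s * (1 + 0.05 * nodes)
        return scan + coordination

    # 8 nodes on 15k SAS (assume ~500 MB/s effective per node) vs.
    # 12 nodes on 7200 RPM SATA (assume ~350 MB/s effective per node)
    print(search_seconds(500, 8, 500))    # ~130.8 s
    print(search_seconds(500, 12, 350))   # ~125.1 s

Under these made-up numbers the two designs land within a few percent of each other, which is the real point: the crossover depends entirely on the scan rates, node counts, and overheads you actually measure.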
OPCFW_CODE
How do I find and replace my current TB password with my new secure mail key? My email app is Thunderbird, so, complying with ATT's instructions, I created a secure mail key so that when ATT changes its security protocol I can continue to receive my emails through Thunderbird. I have the new secure email key, but understand that I need to locate and replace my former Thunderbird password with the secure email key. How do I do this?

All Replies (12)

Options > Security > Passwords > click the Saved passwords button. If you right-click an entry in the list, you can choose to edit the password. Personally, I just remove them using the Remove button and let Thunderbird ask for them when it needs them. That way I don't accidentally change the wrong password and confuse myself.

Matt, many thanks for your prompt reply. I did as you instructed and found only two entries, as follows: mailbox://inbound.att.net (mailbox://inbound.att.net) smtp://outbound.att.net (smtp://outbound.att.net) Are these actually passwords for which the secure email key could be substituted, or are they instead incoming and outbound instructions for the server that need to remain? I actually don't ever recall creating a password years ago when I installed Thunderbird, which could be why there are only those instruction-like entries.

The names are how Thunderbird identifies the passwords. Harking back to the Mozilla suite and Netscape, the accounts are still identified as "site" and have website-looking addresses. All that the entry contains is the username and the password, as well as first- and last-used dates. Account settings are stored elsewhere, under Account Settings. Passwords are stored separately from other settings, as there has always been an option to set a password that encrypts the stored passwords, to prevent others from seeing them if they sit at your computer. That is in addition to the password store being encrypted on disk to prevent casual access by malware.

Thanks again, Matt. Due to your explanation, I returned to the dialog and realized that when I previously wrote you, I hadn't clicked "Show passwords", but instead gone only as far as "Saved passwords". I've now gone to "Show passwords" and am able to see the passwords, which are the same for each of the two lines of text I earlier copied and pasted into my message to you. Also, I've noted that by double-clicking on the passwords, they highlight, which I presume means I could delete them and substitute in their place the new secure mail key I obtained from ATT. Would that be the appropriate thing to do: delete those passwords in each of those two lines of text and replace them with the new secure email key?

Matt, I'm emailing via my webmail, not my usual Thunderbird app, as it's become inoperable. I did as suggested and substituted the existing passwords with the new secure mail key obtained via ATT. When I then tried to use Tbird, it kept asking for the new password, which I entered several times, to no avail. I went back to where Tbird stores the passwords, and absolutely everything was gone: not only the new passwords, but also the previous information which had been listed under "site, user name, last changed". What do I do now?
Matt, after sending you the message from my webmail preceding this one, I went back to trying Tbird, and when it asked for a password, I entered my previous one and checked the box for it to be stored. It worked for both sending and receiving. I then went to "Saved logins", and the box now contains the former "site, user name, last changed, password" information, so my Tbird is operative again, but using the password I was attempting to replace with the new secure email key. I notice that my webmail password for Yahoo Mail, which is the lousy way ATT delivers email, is the same as my Tbird password. So is it possible that both the Tbird password and the Yahoo Mail password need to be the same, and that if I change the Yahoo PW to the new secure mail key, as well as for Tbird, that will sync matters and the secure mail key will function as the Tbird password?

Slightly off topic, but is your account IMAP or POP in Thunderbird?

Matt, I just checked the mail server setting, and it's a POP mail server.

If it was IMAP there is a way to fool the system into using Yahoo OAuth, but for POP that does not work. ATT's instructions say: "Go to your preferred email app and replace the existing password with your secure mail key. (For an IMAP account, delete the existing password for both the IMAP and SMTP servers and replace them with your secure mail key.)" You have done that and it did not work? I can only guess a transcription error on your part. I am no good with those gobbledygook strings and do not know many folks that are.

Many thanks for your efforts, Matt; I'm sure you're as frustrated as I am. You're correct, I did as ATT's instructions specified, only for Tbird to be unable to connect with ATT's mail server. As to the possibility of a transcription error, I was very careful and even re-entered it several times. Also, when I copied the secure mail key from the ATT site after generating it, I was careful to do so correctly, as I too am well aware of how easy it is to err. I'll now call the help number ATT provides in conjunction with their notice that the change needs to be made, in hopes that help can be procured, although my optimism is muted, as their immediate response is always that they don't support Thunderbird. What I shall ask, however, is whether the passwords used to access ATT's webmail and Thunderbird need to be the same, even though I know there's unlikely to be any correlation.

Well Matt, hours spent with ATT only resulted in more confusion, so I'm back to where I was, using my former passwords with Tbird and not knowing what to do with the secure mail key. I guess I just have to await my fate regarding what will occur when ATT switches to its new security protocol.
OPCFW_CODE
Is this NDA terminated after the date shown? I have signed an NDA between my company and another company. I realized that their approach does not fit with my vision, and I told them nicely that I am not willing to partner with or sell the company. I have multiple questions regarding this:

1: I was the only one to sign the NDA; their company did not sign it. The person who talked to me was not the same person whose name was at the bottom to be signed by. Does the information they gave me still hold if it was stated by an employee of that company?

2: I looked over the NDA to see if there was any kind of termination clause in it. I saw this at the end: "The obligations under this Agreement will continue in effect from the date hereof through 20th May, 2017. This Agreement (a) constitutes the entire agreement of the parties with respect to the subject matter hereof and supersedes all prior or contemporaneous oral or written agreements, negotiations or representations concerning such subject matter (none of which prior or contemporaneous matters shall be binding on the parties); (b) shall be governed by the laws in the jurisdiction of the injured Party without regard to its conflict or law's provisions; (c) may only be amended by a writing executed by both parties and dated after the date hereof; and (d) shall inure to the benefit of the parties hereto and their respective successors and assigns, and the obligations under which shall be binding upon the parties and their respective affiliates and associates, but shall not be assigned or delegated to any other person." Does this mean that after the 20th of May the NDA is terminated and they cannot claim an idea is theirs? I am a bit confused about what that statement means.

3: They continue to contact me, saying that some ideas that existed were theirs. I was under the impression that an NDA was to protect their company's information, like how they do a specific thing. Does an NDA also cover an idea (a new feature) they want my company to implement (nothing regarding their company)? Given that they have not given any facts on how to do it, just a general statement of "I want you guys to build this and I will market it." Thanks for the help!

Signatures don't matter, except as a way of indicating that the parties agree to the terms. Since they set the terms, one can assume they agree: your signature is important, since that's what signals acceptance on your part. If you add or delete terms, that's a counter-offer, and they would then have to sign and return it to you. It does not matter who you talked to; what matters is what the NDA says. After the 20th of May, the restrictions set out in the NDA are terminated. We cannot tell what those restrictions are, so we don't know what things you or they are required to do now, but as of the 21st of May you and they can do anything you want; for example, they can no longer say "You can't use that idea". It is possible that the NDA prohibits you from using general ideas as opposed to just specifics of implementation; it depends on the wording.
STACK_EXCHANGE
It is possible that when the user browses or makes use of the SITE https://cloudlabs.us/, they receive first-party and third-party cookies, which will be stored on their computer. In our case, we use these cookies to collect information about your online habits in order to improve your browsing experience. We also use them to guarantee an excellent experience that is smooth and personalized. Please note that the cookies used on https://cloudlabs.us/ are only associated with an anonymous USER and their computer. They do not provide references which reveal the USER's first and last name. They cannot read data from their hard drive, nor can they include viruses in their texts. Likewise, https://cloudlabs.us cannot read the cookies that are embedded on the USER's hard drive from other servers. We try to keep this list of cookies updated with the cookies that we use at all times. However, it is possible that there may be modifications which take a while to be registered. In any case, the USER may see the cookies that are installed on their browser at any given moment, see the duration of each cookie and delete it should they consider it necessary.

| Cookie | Duration | Purpose | Provider |
| --- | --- | --- | --- |
| PHPSESSID | Duration of the session | Saves the session variables on the web server. This cookie is essential for the functioning of the website. | |
| Sessionsimulador – sesion | Duration of the session | Used to store login information on the CloudLabs Store platform on the user's device. | CloudLabs |
| __cfduid | 1 year | Identifies trustworthy web traffic. | CloudFlare |
| player | 1 year | Needed to show Vimeo videos on the website. | Vimeo |
| vuid | 2 years | Stores information about how Vimeo videos are used. | Vimeo |
| NID | 1 year | Used to remember preferences such as preferred language, number of search results, and whether the Google SafeSearch filter is activated or deactivated. | Google |
| Lang | | Used to remember the language preference. | Cloudlabs |
| _gid | 24 hours | Used to give a different identification code to each page and to indicate the date. | Google Analytics |
| _ga | 2 years | Used to distinguish users by storing an identification code. | Google Analytics |
| _gat | 1 minute | Used to limit the percentage of requests. | Google Analytics |
| _gat_gtag_UA_ | Until the end of the session | Used to measure how users interact with our website. | Google Analytics |
| __utma | 2 years | Generally created on the first visit and used to identify each visitor to the website. If the cookie is deleted, it is reset on the next visit. | Google Analytics |
| __utmb | 30 minutes | Used to determine new sessions or visits on the website. If a user returns, Google Analytics updates the cookie. | Google Analytics |
| __utmt | 10 minutes | Used to limit the percentage of requests. | Google Analytics |
| __utmz | 6 months | Saves information which explains how the user arrived at the website; used to identify traffic coming from advertising campaigns. Updated each time a user visits a page. | Google Analytics |
| (name not listed) | Duration of the session | Used to measure, analyze and improve the use of the website. | Hotjar |
| _hjid | 1 year | Established when the client visits the page for the first time; used to conserve the Hotjar user ID, which guarantees that the user's activity is attributed to the same ID on future visits. | Hotjar |
| _fbp | 1 day | Used by Facebook to offer a series of advertising products, such as real-time offers from third-party advertisers. | Facebook |
| ANID | 2 years | Contains a unique, randomly generated value which enables differentiation between browsers and devices. Used to measure the performance of advertisements and provide product recommendations based on statistical data. | Google Ads |
| 1P_JAR | 15 days | Used to determine how the end user uses the website and to identify any advertisement the user has seen before visiting it. | |
| IDE | 1 year and 7 | Used to improve advertising. | |
| _gcl_au | 3 months | Used by Google AdSense to experiment with advertising effectiveness across websites that use its services. | Google AdSense |
| fr | 90 days | Connects the website to Facebook and detects the number of "likes" or whether the user has logged into Facebook. | Facebook |
| __stid | 1 year | Used to track the number of clicks on any option from ShareThis. | ShareThis |

To inform the USER about this policy, we have created a notification which opens as soon as the user starts browsing. If you do not want this website to install cookies on your device, you have the possibility of not accepting the policy when you enter the website for the first time by clicking on the "Disagree" button. The USER can freely decide whether the cookies used on https://cloudlabs.us are embedded on their hard drive or not. In this sense, the user can, at any moment, set their browser to accept or reject all cookies by default, or to receive a notification on the screen about each cookie and decide at that moment whether it is embedded on their hard drive or not. Even when the USER sets their browser to reject all cookies, or expressly the cookies from https://cloudlabs.us/, they will be able to browse the SITE, with the only inconvenience of not being able to enjoy those features which require cookies to be installed. The USER can, at any moment, delete any of the cookies which are on their device. The cookie settings are particular to each browser and computer used to access the SITE; therefore, we have included the instructions for the main browsers.
OPCFW_CODE
WiFi and Bluetooth question. Hi all. I'm not sure if this question should go into this forum, but here goes. I have a PC I planned to hook up to a TV. The PC would use 802.11g WiFi to access my home network and the Internet. I started looking for a mini keyboard/mouse to use with this and realized that there might be an issue using either Bluetooth or 2.4 GHz band devices (that aren't Bluetooth). The PC would have a WiFi NIC in it and something else, either a Bluetooth adapter or another 2.4 GHz device, connected and used at the same time. How do people get around this issue other than utilizing 802.11n?

A Bluetooth keyboard will not interfere with wi-fi. I've used a Logitech diNovo Edge keyboard for years and never had any issues with it. FWIW I have several distinct wi-fi networks in my house and they don't interfere with each other.

I am currently using a wifi-connected laptop with a 2.4 GHz non-Bluetooth mouse, though the mouse connects through a small USB adapter. I actually originally bought this mouse to work with my media PC/TV because it has a much greater range than Bluetooth. It works fine, but I now use a 2.4 GHz non-Bluetooth keyboard with a trackball for the media centre.

Awesome, thanks for the information. I won't worry about it now! Actually, so now the question is whether to use Bluetooth or regular 2.4 GHz RF. Which is better? The keyboard/mouse unit won't ever be used more than 15' away from the PC. Any thoughts?

There are three versions or classes of transmit power for Bluetooth:
Class 1 - 100 mW - 100 meters
Class 2 - 2.5 mW - 10 meters
Class 3 - 1.0 mW - 1 meter
I have a class 1 BT dongle that I use to send music to a headset while I work outdoors. AFAIK the Logitech diNovo Edge BT keyboard is a class 2; I've used it in another room from where the system is located. IMO most devices are BT class 2, with a range of approximately 33 feet. Keep in mind that to use a given class, the devices on each end should be of the same class or better. In other words, a class 2 or 3 device won't communicate with a class 1 device at a distance of 100 meters. It takes two to tango.

Thanks for the info. I'm a little confused about the Bluetooth stuff. Take this item for example: It looks pretty cool, but what confuses me is that the system requirements include a Bluetooth-enabled PC (if you want to use it with a PC). However, it shows that it comes with a Bluetooth dongle. I don't get it. I thought the Bluetooth dongle is what you'd plug into your PC, but now I'm wondering if the Bluetooth dongle plugs into the keyboard/mouse device to provide it with Bluetooth, and thus you'd need a Bluetooth-enabled PC. Am I right? Does that make sense?

The dongle plugs into the computer. Many Bluetooth devices provide a storage dock for the dongle so that it doesn't get misplaced when not paired. I think it is safe to ignore that "Bluetooth enabled PC" stuff, unless you're running Windows 98.
OPCFW_CODE
Cleared SCWCD 1.4 with 92% today, and thus joined the elite SCWCDs. The exam was easier than I expected. I had 80 questions (of which 11 were not considered for scoring) to answer in 3 hrs, and that was ample time. Most of the questions were lengthy, with code samples, but were easy to solve. The largest share of questions on a single topic belonged to web deployment and configuration (XMLs, XMLs and more and more XMLs...). I was tested on almost all the major elements of web.xml, like servlet/servlet-mapping, listener, security-constraint, login-config, jsp-config, context-param, ejb, env-entry... in fact, everything! From the security section, contrary to what HFSJ said, there were more questions on authentication than on authorization. Data integrity and confidentiality accounted for a single question. There were 2 questions on dynamic attributes, 7 on design patterns, and the rest saw a fair share of servlets, sessions and JSP.

I spent about a month on preparation, and my best friend for the last few weeks was HFSJ. A million thanks to Kathy, Bert and Bryan for explaining things in such a lovely way! Not even in my wildest dreams did it occur to me that I would one day fall in love with a technical book! I read this book twice. I had to go through the specs whenever I had to look beyond HFSJ for explanations; reading the entire specifications thoroughly would have been a nightmare. Again, it was the HFSJ authors who helped. They made it easier by mentioning the exact location of the topics corresponding to each question in the "Coffee Cram Mock Answers" section. That way I was able to cover all the important topics in the specs, and I should admit these references were of tremendous help. Another note I referred to was Marc Peabody's design pattern notes. Needless to say, I scored 100% in this section (all thanks to Marc!). Other useful last-minute references were Frederic Esnault's SCWCD revision notes, and Ashok Kumar Babu's notes on sample web.xml and TLD files and important API methods (though just method signatures, they came in really handy). And last but not least, this great site, Javaranch, has been a one-stop solution for all my questions and clarifications. Thanks to all those wonderful people here who are helping out distressed and confused minds!

As for mock tests, I didn't go for any commercial ones, just some of the links mentioned on Javaranch, including javaranch and examulator. The other mock tests I took were the 2 HFSJ mock tests, the old and the new one, and I scored in the range of 70-75% in both. Well, the predictions didn't hold, since I scored way more than my mock scores. And for all the aspiring SCWCDs out there, I wish you good luck with the exam!
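P.S. For anyone revising the deployment descriptor, here is a minimal web.xml sketch (Servlet 2.4, the version SCWCD 1.4 covers) that touches most of the elements listed above. All class names and values are made up for illustration:

    <web-app xmlns="http://java.sun.com/xml/ns/j2ee" version="2.4">
      <context-param>
        <param-name>adminEmail</param-name>
        <param-value>admin@example.com</param-value>
      </context-param>
      <listener>
        <listener-class>com.example.StartupListener</listener-class>
      </listener>
      <servlet>
        <servlet-name>Beer</servlet-name>
        <servlet-class>com.example.BeerServlet</servlet-class>
      </servlet>
      <servlet-mapping>
        <servlet-name>Beer</servlet-name>
        <url-pattern>/Beer/*</url-pattern>
      </servlet-mapping>
      <security-constraint>
        <web-resource-collection>
          <web-resource-name>Admin pages</web-resource-name>
          <url-pattern>/admin/*</url-pattern>
        </web-resource-collection>
        <auth-constraint>
          <role-name>admin</role-name>
        </auth-constraint>
      </security-constraint>
      <login-config>
        <auth-method>BASIC</auth-method>
      </login-config>
      <security-role>
        <role-name>admin</role-name>
      </security-role>
      <env-entry>
        <env-entry-name>maxItems</env-entry-name>
        <env-entry-type>java.lang.Integer</env-entry-type>
        <env-entry-value>10</env-entry-value>
      </env-entry>
    </web-app>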
OPCFW_CODE
User Exit for Reason of Rejection VA01

My requirement is to make sure that a main item and its free goods can only have the reason for rejection cleared when they are selected together; if not, the main item should be rejected using the rejection reason of the free goods. The issue is that SAP's default program always overrides any changes I make. I found the user exit in MV45AFZZ under FORM USEREXIT_MOVE_FIELD_TO_VBAK, but it's not working. Would you please help?

My code:

    FORM USEREXIT_MOVE_FIELD_TO_VBAK.
    ENHANCEMENT 1 ZFREEBIES_REJ_CHECK_ON_CHANGE. "active version
      DATA: lv_uepos TYPE vbap-uepos,
            lv_abgru TYPE vbap-abgru,
            lw_xvbap TYPE vbapvb.

      " Loop over the free-goods items (item category TANN).
      LOOP AT xvbap WHERE pstyv = 'TANN'.
        " Read the corresponding main (higher-level) item.
        READ TABLE xvbap INTO lw_xvbap WITH KEY posnr = xvbap-uepos.
        IF sy-subrc EQ 0.
          xvbap-updkz = 'U'.
          CLEAR xvbap-grpkz.
          lv_uepos = xvbap-uepos.
          " Main item's rejection reason differs from the free good's.
          IF lw_xvbap-abgru NE xvbap-abgru AND xvbap-abgru NE ''.
            CASE xvbap-vbeln.
              WHEN ''.        " document not yet saved
                MODIFY xvbap TRANSPORTING abgru grpkz WHERE posnr = xvbap-posnr.
                MODIFY xvbap TRANSPORTING abgru grpkz WHERE posnr = lv_uepos.
              WHEN OTHERS.    " existing document: also set the update indicator
                MODIFY xvbap TRANSPORTING abgru updkz grpkz WHERE posnr = xvbap-posnr.
                MODIFY xvbap TRANSPORTING abgru updkz grpkz WHERE posnr = lv_uepos.
            ENDCASE.
          ENDIF.
        ENDIF.
      ENDLOOP.
    ENDENHANCEMENT.
    ENDFORM.

Figure: initial value: main item = Rejected, free goods = Rejected.
User change: main item = Cleared, free goods = Rejected.
SAP result: main item = Cleared, free goods = Cleared.
Expected result: main item = Rejected, free goods = Rejected.

It seems illogical to use USEREXIT_MOVE_FIELD_TO_VBAK if you want to update VBAP. Why don't you use USEREXIT_MOVE_FIELD_TO_VBAP?

Thanks for the reply, Sandra. It does seem illogical, but I already tried moving the code to USEREXIT_MOVE_FIELD_TO_VBAP; I put a breakpoint there and it was never hit when I changed the reason for rejection. USEREXIT_MOVE_FIELD_TO_VBAP is only hit when a new line is inserted into VBAP.

At what moment do you want to fire your check? During order creation? During a change? When the user specifies the reason for rejection?
STACK_EXCHANGE
If you come across a Joomla! bug, you can report it to Joomla!'s developers using the Joomla! Issue Tracker. Reporting bugs helps to make Joomla! an even better application than it already is.

Creating a GitHub account
If you don't already have one, you will need to create a GitHub account before you can report a bug in Joomla! Issue Tracker. Creating a GitHub account is easy and doesn't cost anything if you select the Free plan.

Create the account
To create your account, go to the GitHub sign up page and follow the instructions on the screen. To get a free account, select the Free plan. To create a different type of account or learn more about GitHub accounts, go to Signing up for a new GitHub account.

Verify your email address
To finish creating your GitHub account, you must verify the email address you provided when you signed up. When you receive the verification email, click the Verify email address link. When the GitHub login screen appears, log in to your account to make sure everything is working as it should.

Logging in to Joomla! Issue Tracker
Once you've created and verified your GitHub account, you can log in to Joomla! Issue Tracker. To start, open Joomla! Issue Tracker in your browser. On the Issue Tracker menu, click the Login with GitHub button. On the next screen, sign in using your GitHub Username or email address and your Password. If this is your first time logging in, you should see the Authorize Joomla! Issue Tracker screen. To continue, click the Authorize Joomla button. You should now be logged in to Issue Tracker and see a New Item button and an Account dropdown on the menu.

Reporting the Joomla! bug
To report the Joomla! bug, you will create a new Issue Tracker item. Before you do, it's a good idea to search for the bug to make sure it hasn't already been reported.

Search for the bug first
To search for the bug, make sure the Open button on the toolbar is selected, then click the Search Tools dropdown. Unless you know your way around Issue Tracker, you can ignore most of the search filters. Your best bet is to choose some keywords that describe the bug and enter them in the Filter list by ID, summary, or description search box. To start the search, click the Search icon or press ENTER. If after a few tries you still don't see a matching bug, go ahead and create a new Tracker item for it.

Create the new tracker item
To create the new item, click the New Item button on the Issue Tracker menu. You should now see the New Item screen. To display instructions right in the New Item form, set the View Mode toggle in the upper right-hand corner of the form to Help. Enter a Title and Description for the bug following the instructions in the blue boxes below. To upload and attach any notes or screen images to the item, click the Add files button at the bottom of the form and follow the on-screen instructions. To finish entering details of the bug, select a Priority and one or more Categories in the box to the right of the form. If you know your version of Joomla!, enter it in the Build field. Again, follow the instructions in the blue boxes. When you're finished entering all the details, click the Submit button.

Getting more help
To get more help, take a look at the Filing Bugs and Issues page in the Joomla! documentation. To see what a detailed, well-written bug report looks like, scroll down the page to see the examples provided. For further questions, or if you need help, please open a support ticket from your HostPapa Dashboard. Follow this link to learn how.
OPCFW_CODE
Will the SRAS curve definitely shift if the LRAS curve shifts? From what I know, a shift in LRAS is generally caused by a change in the maximum productive capacity of an economy, which affects the full-employment output level. Such a change in maximum productive capacity would also cause a shift in SRAS. So does this mean that whenever the LRAS curve shifts, the SRAS curve will also shift? If not, are there any examples against this?

Short answer: Yes, the SRAS curve will shift after the LRAS shifts, to return the short-run equilibrium (SRAS/AD) back in line with the long-run equilibrium (LRAS/AD). The reason the SRAS curve doesn't shift immediately with LRAS is that there are so-called "frictions" or "nominal rigidities", such as contracts and information gaps, that prevent firms from adjusting supply plans instantly.

Long answer: There are two simple "stories" we can tell to explain the dynamics of the elementary AS/AD model: the "inflation gap" story and the "output gap" story. Both of these stories are compatible with each other, so they're usually taught as occurring at the same time.

Inflation gap story: SRAS is modeled as shifting upward when expected price levels increase. We can consider the long-run (expected) equilibrium to occur at the intersection of LRAS and AD, and the short-run (actual) equilibrium to occur at the intersection of SRAS and AD. If you start in a long-run equilibrium where all three curves intersect, but then impose an LRAS shock on the model and shift the curve, the expected price level and the actual short-run price level will differ. This is what we call an inflation gap. As firm owners come to the realization that the supply shock is here to stay, they adjust their production processes to match, and the SRAS curve shifts to bring the short-run equilibrium in line with the long-run equilibrium.

Output gap story: SRAS is also modeled as shifting whenever the prices of factors of production change. Thus the output gap generated by the supply shock becomes relevant, with actual short-run output out of line with the long-run sustainable level marked by the LRAS curve. For example, after a positive LRAS shock, the short-run equilibrium will have less output than the long-run sustainable level. The result is an abundance of resources, and the prices of factors of production will fall, causing firms to increase production and shift the SRAS to the right until actual output matches the long-run sustainable level. An opposite story can be told to explain why SRAS shifts left after a negative LRAS shock.

The SRAS also shifts. SRAS is normally used in models where the supply side does not adjust immediately to the new conditions in the market, i.e., models with nominal rigidities. Examples are sticky wages and sticky prices. Other instances where adjustments in prices or the nominal money supply are not immediate, and in which money has a real effect, are the Lucas "misperceptions" or "islands" model and cash-in-advance models. However, and to the point of answering your question, all these models agree that in the long term, when the agents in the economy have had time to adjust their choice variables and/or information symmetry is restored, the AS is vertical again (LRAS). So if there has been sufficient time for the long-run AS to adjust then, by definition of the short run (i.e., shorter than the long run), there has been enough time for the SRAS to adjust as well.
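For readers who want the algebra, one standard way to formalize both stories (the textbook expectations-augmented supply curve; other formalizations exist) is:

    \[
    Y = \bar{Y} + \alpha\,(P - P^{e}), \qquad \alpha > 0
    \]

Here \(\bar{Y}\) is long-run sustainable output, so LRAS is the vertical line \(Y = \bar{Y}\), and SRAS is drawn holding the expected price level \(P^{e}\) (and factor prices) fixed. An LRAS shock changes \(\bar{Y}\); at the old \(P^{e}\), the intersection with AD gives \(Y \neq \bar{Y}\) (the output gap) and \(P \neq P^{e}\) (the inflation gap). As expectations and factor prices adjust, \(P^{e}\) moves toward the actual \(P\), shifting SRAS until \(Y = \bar{Y}\) again.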
STACK_EXCHANGE
"You're only as solid as what you build on." It is essential to walk through the fundamentals of multi-dimensionality and related concepts before getting our hands on QuarkCube.

The concept of multi-dimensionality
Multi-dimensionality can simply be defined as the ability to view, store and process data based on many dimensions. That sounds scholarly. But is it a new and vague concept to us? Hmmm... not really. We actually apply it in our daily lives. Let's take the example of a closet. The one in the picture is small, so say we have a closet which is 25 times the size of the one in this image. It's your friend's, and he comes up to you asking, "Hey pal, can you tell me the number of full-sleeved Van Huesen white linen shirts with black buttons in my closet?" You realise that it's a big task ahead. Why is it a big task? Because though the closet looks neat, it is far from organized, and it is going to take you a while to search for the shirts satisfying all his conditions and then count them. (Well, counting wouldn't take you that long, but searching certainly will.) It would have been an easy task if your friend had grouped all his clothes according to their characteristics and placed them appropriately. How can he possibly organize it? There is not just one way to do it; it basically differs from person to person. Let's list down a few ways to organize it. He may group the clothes based on: color of garment, type of material, type of sleeves, type of stripes, type of neck, type of clothing, type of button, color of button. It would have been easier for you to answer his question if:
- The closet had shirts, pants, socks and ties kept separately (type of clothing)
- Within each type of clothing, the items were split based on brand (brand)
- Within the Van Huesen shirts, white shirts and other coloured shirts were kept separately (colour of garment)
- Among those white shirts, he had placed full-sleeved and half-sleeved shirts separately (type of sleeves)
- Among the full-sleeved, linen was kept separately from cotton (type of material)
- Among those white full-sleeved linen shirts, he had placed the shirts according to button colour (colour of button)
It would have taken you less effort to quickly fetch those "white full-sleeved Van Huesen linen shirts with black buttons" and count them, or answer anything else your friend asks for. The chronology of grouping may vary, but the end result will be the same. This explains the advantage of storing and processing data based on different dimensions. Multi-dimensional databases store data based on the dimensions specified by the user. This is a pretty simple example wherein the data refers to clothes and the dimensions refer to the different features/aspects on which we grouped the clothes. In the analogy, clothes are the data, but it's not the kind of data that a computer directly deals with, so let's look at another example. Imagine we have data on the marks of students of grade 12, their region and their subjects. This data has few records, so searching will be quick. But imagine if we had the records for all the students in India: it would take time to get a whole glimpse of the data, i.e., to summarize it. Multi-dimensional databases group data which are homogeneous (i.e., have the same characteristics), aggregate them, and store the aggregated value. Below is the representation where subject and region are the two dimensions in our data.
What is stored within those cells are the aggregates of all the records having the corresponding row and column characteristics. When the data is already aggregated and stored readily, it will be easy to answer any question like “What is the total marks of the students in the west in the subject ‘Physics’?” That’s one long sentence, but by referencing based on row and column, we find its value, i.e., 112. It is to be noted that QuarkCube allows many other functions, such as sum, standard deviation, variance, count, etc., in order to summarize the data. This looks simple because it has only 2 dimensions (region, subject). Now let’s consider another dimension, ‘Gender’ of the students, and try to group the data based on 3 dimensions. The resulting data is represented in the form of a cube because the data is grouped based on more than 2 dimensions. The third dimension is gender. The picture displays sliced versions of the cube (male and female layers). These layers put together form the whole data block. The image below shows us the whole cube. Each block in a cube holds exactly one value (though it looks 3D). For instance, 95 is the total marks obtained by the male students (in the data) who are in the northern region, in the subject physics. The power of multi-dimensionality is felt when the size of the data is huge.

Let’s elaborate on certain phraseologies related to multi-dimensionality.

Dimensions are the elementary components of a multi-dimensional model. They are the perspectives from which we view the data. In other words, the aspects on which we group/cluster similar data are called dimensions. Data scientists can relate dimensions to categorical variables, and those who design experiments can relate dimensions to the factors of the experiment.

Hierarchies are the different levels in a dimension. For example, let’s say we have data on the city, state and country of a customer; we frame a dimension “location of the customer” and define a hierarchy such that we can consolidate data of customers at the different levels in the hierarchy (country-wise, state-wise and city-wise). A dimension can have many hierarchies. Each of the different perspectives on the data in a dimension is taken as an individual hierarchy. An example is all we need to understand this. Let’s say you own a bakery. The same items that you sell may be grouped based on their category (bread, cookies, tarts, pies and cakes) or based on their production (in-house or procured). The grouping can be done in two ways; hence the number of hierarchies for the items sold (the dimension) is two.

Attributes are those components which explain more about a particular dimension. They are like adjectives. For example, if our data has columns ‘customer_id’ and ‘gender’, customer_id becomes the dimension and gender becomes the extra detail about the dimension, i.e., Customer_id; hence Gender is an attribute of Customer_id. Attributes which stay constant with respect to time are the ‘real time attributes’. In the bakery example, we may call either of the hierarchies an attribute as well, because the category-of-item and production variables basically explain more about the dimension (i.e., products sold). In the picture, whether or not a vegetable grows above the ground is the attribute which describes the vegetables (dimension). An attribute which is anticipated to vary with respect to the time dimension or another dimension is termed a ‘varying attribute’.
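Going back to the marks cube above, here is a minimal Python sketch of how such a cube can be built and queried. The records are made-up assumptions, chosen so that the two cells match the 95 and 112 mentioned in the text.

from collections import defaultdict

# raw records: (region, subject, gender, marks) -- illustrative data only
records = [
    ("North", "Physics", "Male", 47), ("North", "Physics", "Male", 48),
    ("West",  "Physics", "Female", 55), ("West", "Physics", "Male", 57),
    ("North", "Maths",   "Female", 60),
]

# build the cube: one aggregated value (here a sum) per cell
cube = defaultdict(int)
for region, subject, gender, marks in records:
    cube[(region, subject, gender)] += marks

# "Total marks of male students in the north in physics?" -- one cell lookup
print(cube[("North", "Physics", "Male")])   # 95

# rolling up over gender recovers the 2-D region x subject cell
print(sum(v for (r, s, g), v in cube.items()
          if r == "West" and s == "Physics"))   # 112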
For instance, say you own a tender coconut shop with 3 varieties of tender coconut. In specific months you purchase certain varieties of tender coconut from vendor A and vendor B, and you track your sales. We can summarise the data based on the dimensions ‘type of tender coconut’, ‘month’ and also ‘vendor’. Since the vendor changes with respect to time and the variety of tender coconut, we add it as a varying attribute to help analyse the individual performance of the vendors for the product varieties.

Measures are the numeric components of a model. The values which are aggregated/summarised based on the dimensions are termed ‘measures’. Basically, a measure is the metric that one wants to track or is most interested in. A model must have a measure. In the tender coconut example, we track sales based on ‘type of tender coconut’, ‘month’ and ‘vendor’, hence sales is the measure. (Of course, as an owner, one would want to know about the performance of the shop, for which the most important number to track is sales.) Analysing sales will help identify the scope for business improvement. A data scientist can relate the measure to the numerical variable which he is trying to predict in a regression problem.

Representation of data

As we know, a cube stores the aggregates of the data, grouped on multiple dimensions. Let’s try to visualise it. For instance, in the bakery example let’s consider our dimensions to be: Market (Asia, America), Items (cakes, tarts, cookies, pies, bread), and Presence of egg (0 - no, 1 - yes). Storing the data based on different dimensions lets us fetch the data quickly, and since it is an aggregated value, it gives us the ‘just in time’ information that we need to make big decisions. We will be able to get answers for any combination quickly. It will also help us compare performance between different products, different quarters, different markets and different variations of products. For example, what is the total actual sales for eggless pies? The columns in orange are the ones which hold values for pies; the sum of the corresponding totals answers our question. Using the other hierarchy of the items sold (grouping based on in-house/procured), we can quickly fetch data from the cube for a question like “What is the total sales in Asia for eggless procured items?” Cubes store the information tracked on the different dimensions and at different hierarchy levels. Multi-dimensionality is a powerful feature, and its effectiveness is felt when the size of the data (number of records, number of features) is huge. The inherent ability to summarise and store data makes multi-dimensional cubes the go-to tool for effective business intelligence. In the data-driven world, multi-dimensional analysis remains indispensable.
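As a companion to the bakery cube above, here is a minimal Python sketch of slicing it for a question like “total sales for eggless pies”. The sales figures are made-up assumptions, purely for illustration.

from collections import defaultdict

# raw records: (market, item, has_egg, sales) -- illustrative data only
sales = [
    ("Asia",    "pies",    0, 120), ("Asia",    "pies",    1, 80),
    ("America", "pies",    0, 150), ("America", "cookies", 0, 90),
]

cube = defaultdict(int)
for market, item, has_egg, amount in sales:
    cube[(market, item, has_egg)] += amount

# slice: fix two dimensions (item = pies, has_egg = 0), sum over the rest
eggless_pies = sum(v for (m, i, e), v in cube.items()
                   if i == "pies" and e == 0)
print(eggless_pies)   # 270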
OPCFW_CODE
[Warning: This post is not, strictly, a research post. It is a response to events in the astronomical community in the recent past.] Word on the street (I can't find out for sure, because it is not open) is that some argument broke out in the Facebook(tm) astronomy group about loose discussion on the internets about #aliens, and things like the Boyajian star or the 'Oumuamua asteroid. Since I am partially responsible for this loose talk, here is my position:

First, I want to separate informal discussion (like on twitter or blogs) from formal discussion in scientific papers (like what might be submitted to arXiv) from press releases. These are three different things, and I think we need to treat them differently. Second, I am going to assert that it is reasonable and normal for astronomers to discuss in scientific papers (sometimes) the possibility that there is alien life or alien technology with visible impact on observations. Third, I am going to presume that the non-expert public deserves our complete respect and cooperation. If you disagree with any of these things, my argument might not appeal to you.

On the second assumption (aliens are worthy of discussion), you can ask yourself: Was it a reasonable use of telescope time to look at 'Oumuamua in the radio, to search for technological radio transmissions? If you think that this was a reasonable thing to do with our resources, then you agree with the second assumption. Similarly if you think SETI is worth doing. If you don't think these uses of telescopes are reasonable—and it is understandable and justifiable not to—then you might think all talk of aliens is illegitimate. Fine. But I think that most of us think that it is legitimate to study SETI and related matters. I certainly do.

Now if we accept the second assumption, and if we accept the third assumption (and I really don't have any time for you if you don't accept the third assumption), then I think it is legitimate (and perhaps even ethically required) that we have our discussions about aliens out in the open, visible to the public! The argument (that we shouldn't be talking about such things) appears to be: "Some people (and in particular some news outlets) go ape-shit when there is talk of aliens, so we all need to stop talking about aliens!" But now let's move this into another context: Imagine someone in a role in which they serve the public and are partially responsible for X saying: "Some people (and in particular some news outlets) go ape-shit when there is talk of X, so we all need to stop talking about X!" Obviously that would be a completely unethical position for any public servant to take. And it wouldn't just be fodder for conspiracy theorists, it would actually be evidence of a conspiracy. Imagine we, as a community, decided to only discuss alien technology in private, and never in public. Would that help or hurt with the wild speculation and ape-shit reactions? In the long run, I think it would hurt us, and hurt the public, and be unethical to boot. Informal discussions of all matters of importance to astronomers are legitimately held in the open. We are public servants, ultimately.

Now, I have two caveats to this. The first is that it is possible for papers and press releases and news articles to be irresponsible in their discussion of aliens. For example, the reportage claiming (example here)—and it may originate in the paper itself—that the reddening observed in the Boyajian star rules out alien megastructures was debatably irresponsible in two ways.
For one, it implied that the megastructure hypothesis was a leading hypothesis, which it was not, and for two, it implied that the megastructure hypothesis was specific enough to be ruled out by reddening, which it wasn't. Indeed, the chatter on Twitter(tm) led to questions about whether aliens could ever be ruled out by observations, and that is an interesting question, which relates to the second assumption (aliens are worthy of discussion) given above. Either way, the paper and the resulting press implied that the observational result constrained aliens, which it did not; the posterior probability of aliens (extremely low to begin with) is almost completely unchanged by the observations in that paper. To imply otherwise is to imply that alien technology is a mature scientific hypothesis, which it isn't.

Note, in the above paragraph, that I hold papers and press releases to a higher standard than loose, informal discussion! That is my first assumption, above. You might disagree with it, but note that it would be essentially completely chilling to all informal, open discussion of science if we required refereed-publication-quality backing for anything we say, anywhere. It would effectively re-create the conspiracy that I reject above. I don't mean to be too critical here; the Boyajian-star paper was overall extremely responsible and careful and sensible. As are many other papers about planet results, even ones that end up getting illustrated with an artist's impression of a rocky planet with ocean shores and/or raging surf. If I have a complaint about exoplanet science as a community (and I count myself a member of this community; I am not casting blame elsewhere), it is about the paper-to-press interface, where artist's conceptions and small signals are amplified into luscious and misleading story-time by perfectly sensible reporters. We (as a community, and as a set of funded projects) are complicit in this.

The second caveat to what I have written above is that I (for one) and many others talk on Twitter(tm) with tongue in cheek and with sarcasm, irony, and exaggeration. It takes knowledge of the medium, of scientists, and of the individuals involved to decode it properly. When I tweeted that it was "likely" that 'Oumuamua was an alien spaceship, I was obviously (to me) exaggerating, for the purposes of having a fun and interesting discussion. And indeed, the asteroid does look different in color, shape, and spin rate (and maybe therefore composition and tensile strength) from other asteroids in our own Solar System. But it might have been irresponsible to use exaggeration and humor when it comes to aliens, because aliens do set off some people, especially those who might not know the conventions of scientists and twitter. I take that criticism, and I'll try to be more careful.

One last point: The underlying idea of those who say we should keep alien discussion behind closed doors (or cut it off completely) is at least partly that the public can't handle it. I find that attitude disturbing and wrong: In my experience, ordinary people are very wise readers of the news, with good sense and responsibility, and they are just as good at reading arguments on Twitter(tm). The fact that there are some exceptions—or that the Daily Mail is an irresponsible news outlet—does not change the truth of my third assumption (people deserve our respect). We should just ignore and deprecate irresponsible news, and continue to have our discussions out in the open!
In the long run, astronomy will benefit from openness, honesty, and carefully circumscribed reporting of goals and results. We won't benefit from hiding our legitimate scientific discussions from the public for fear that they will be misinterpreted.
OPCFW_CODE
startTime and endTime of scheduledOverrides are overwritten to UTC

Checks
[X] I've already read https://github.com/actions-runner-controller/actions-runner-controller/blob/master/TROUBLESHOOTING.md and I'm sure my issue is not covered in the troubleshooting guide.
[X] I'm not using a custom entrypoint in my runner image

Controller Version: 0.26.0
Helm Chart Version: 0.21.0
CertManager Version: No response
Deployment Method: Helm

cert-manager installation
Using the official manifest:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://github.com/jetstack/cert-manager/releases/download/v1.9.1/cert-manager.yaml

Checks
[X] This isn't a question or user support case (for Q&A and community support, go to Discussions; it might also be a good idea to contact any of the contributors and maintainers if your business is critical and you therefore need priority support)
[X] I've read the release notes before submitting this issue and I'm sure it's not due to any recently-introduced backward-incompatible changes
[X] My actions-runner-controller version (v0.x.y) does support the feature
[X] I've already upgraded ARC (including the CRDs, see charts/actions-runner-controller/docs/UPGRADING.md for details) to the latest and it didn't fix the issue
[X] I've migrated to the workflow job webhook event (if you are using webhook driven scaling)

Resource Definitions

apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: monorepo--medium
spec:
  minReplicas: 2
  maxReplicas: 400
  scaleTargetRef:
    name: monorepo--medium
  scaleUpTriggers:
  - githubEvent:
      workflowJob: {}
    duration: 3m0s
  scheduledOverrides:
  # set minReplicas to 0 at night every day
  - startTime: "2022-01-01T22:00:00+09:00"
    endTime: "2022-01-02T09:00:00+09:00"
    minReplicas: 0
    recurrenceRule:
      frequency: Daily
  # set minReplicas to 0 on every Saturday / Sunday
  # 2022-01-01: Saturday
  # 2022-01-02: Sunday
  - startTime: "2022-01-01T00:00:00+09:00"
    endTime: "2022-01-03T00:00:00+09:00"
    minReplicas: 0
    recurrenceRule:
      frequency: Weekly

To Reproduce
1. Argo CD syncs the cluster to the manifest of `HorizontalRunnerAutoscaler`
2. ARC webhook server receives a request from GitHub
3. We can see the diff between the resource and the manifest of `HorizontalRunnerAutoscaler`

Describe the bug
We use Argo CD to deploy the runner resources. It seems ARC updates the startTime and endTime fields of the HorizontalRunnerAutoscaler resource to UTC, which causes the following diff on Argo CD. Here is the metadata.managedFields of the HorizontalRunnerAutoscaler resource; it seems the ARC webhook server updated it.

- apiVersion: actions.summerwind.dev/v1alpha1
  fieldsType: FieldsV1
  fieldsV1:
    'f:spec':
      'f:capacityReservations': {}
      'f:scaleUpTriggers': {}
      'f:scheduledOverrides': {}
  manager: github-webhook-server
  operation: Update
  time: '2022-10-13T04:32:34Z'

I also think the current behavior is not intuitive for non-UTC users.

Describe the expected behavior
It would be nice if ARC kept the startTime and endTime fields of the HorizontalRunnerAutoscaler resource in the original timezone. It seems the ARC webhook server updates the capacityReservations field using the Update API, which causes the whole resource to be rewritten, including the UTC-normalized fields.

Whole Controller Logs
Here is the log of the actions-runner-controller-github-webhook-server Pod.
2022-10-13T04:32:34Z INFO controllers.webhookbasedautoscaler scaled monorepo--medium by 1
2022-10-13T04:32:34Z DEBUG controllers.webhookbasedautoscaler Updating hra monorepo--medium for capacityReservations update {"before": 0, "expired": -4, "added": 4, "completed": 0, "after": 4}
2022-10-13T04:32:33Z DEBUG controllers.webhookbasedautoscaler Found 2 HRAs by key {"key": "***/***"}
2022-10-13T04:32:33Z INFO controllers.webhookbasedautoscaler job scale up target is repository-wide runners {"event": "workflow_job", "hookID": "***", "delivery": "***", "workflowJob.status": "queued", "workflowJob.labels": ["self-hosted", "medium"], "repository.name": "***", "repository.owner.login": "***", "repository.owner.type": "Organization", "enterprise.slug": "", "action": "queued", "repository": "***"}

Whole Runner Pod Logs
-

Additional Context
No response

Is it possible to patch here? https://github.com/actions-runner-controller/actions-runner-controller/blob/c91e76f1695deec251356f8f31c369e58dec486a/controllers/horizontal_runner_autoscaler_batch_scale.go#L201
GITHUB_ARCHIVE
For personal projects, the cloud is probably too expensive. I say this as a person who previously hosted all of their personal projects on the Azure cloud platform. I had multiple app engines inside a single Azure App Service, which worked pretty well. The main advantage of running anything in the cloud is that you don't have to worry about server maintenance, and, if used correctly, it can be cheaper for projects over a certain size. I had a few reasons for getting off the cloud:

Restricted tech stack
Generally, a cloud service will restrict you to running some specific technologies. For example, a specific version of Node.js. Even if your cloud service supports the technology you want to use, finding out whether it supports the version you want to use is not easy. Right now, if you want to know which versions of Node.js Azure supports, your best place to get that answer is StackOverflow, even though this information should be prominently displayed in the documentation.

The cloud offering is fast moving and hard to keep up with
The offering from cloud service providers changes a lot. I actually had to look back over my previous blog post detailing how I hosted my projects in the cloud to remind myself how it all worked. The problem is more acute on AWS, where the naming is so whacky that someone maintains a page where the names of AWS products are translated into useful plain English. Elastic Beanstalk, anyone? There is also a danger of simply choosing the wrong cloud product, like the guy that used Amazon Glacier (real name) storage and ended up racking up a $150 bill for a 60 GB download.

You need to learn the platform and not the technology
The argument often put forward by the cloud guys is that you get to focus on building your app rather than deploying it or worrying about the infrastructure. This is absolutely true for more trivial apps. However, we all know that it is rare for a useful application to be this flat. If you want to connect your application to another service to store data or even just files, you'll need to learn what that service might be on that platform, and then you'll need to learn that platform's API in order to program against it. Generally, once you've got that service wired in, you'll get some benefits, but the upfront effort is a cost that should be taken into account.

Run projects for less on physical servers
Newsflash - physical servers can be very cheap, especially used ones. Helpfully, someone on reddit put together a megalist of cheap server providers. Below is a list of the main providers that are worth checking out in Europe:
- Kimsufi - based in France. Cheap used servers. Limited availability. No gigabit connections
- Online.net - based in France, cheap line of "dediboxes"
- OneProvider - based in France, but have servers located globally

As of writing, you can get a gigabit-connected, 4-core, 4 GB RAM server with OneProvider for €8 a month. Whilst comparing the cost of this hardware on a cheap provider to a cloud provider would be silly (you're only supposed to deploy the hardware you need in the cloud), as soon as you run two web apps on an €8 box, you're saving money. I've got one server with Kimsufi, and one with Online.net. I've had one very small outage on Kimsufi in the 2 years that I've had a box there: the box was unavailable for around 10 minutes before Kimsufi's auto-monitoring rebooted it. I've had a single lengthier outage in the 18 months I've used Online.net, which led to a server being unavailable for about 5 hours.
This was a violation of their SLA and meant that I got some money back. I detailed this outage in a previous post - "Lessons Learned from a server outage".

Run whatever you want on a dedicated server
When you aren't running inside a neatly packaged cloud product, you have full ownership of the server and can run whatever you want. The downside is that there is more responsibility, but in my opinion the pros outweigh the cons, such as not having to wait for some service to become available. (Example - GCS doesn't support HTTP to HTTPS redirection; I'm guessing you need to buy another service for that. On your own box, this is a few simple lines of config in nginx, as sketched below.) Being able to run whatever you want also opens the door to playing around with more technology without having to worry about an increase in cost. Running my own dedicated boxes has been fun, and I've learned a lot by doing so.
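As a concrete illustration of the "few simple lines of config in nginx" mentioned above, here is a minimal sketch of an HTTP-to-HTTPS redirect; the server name is a placeholder assumption.

server {
    listen 80;
    server_name example.com;               # placeholder domain
    return 301 https://$host$request_uri;  # permanent redirect to HTTPS
}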
OPCFW_CODE
The beginning of this week saw heated debates, which have already been labelled a Wikipedia edit war, over attempts to remove the terms ‘permissionless’ and ‘bitcoin’ from the blockchain entry on Wikipedia. The revision history on the website shows two particular terms standing out - ‘bitcoin’ and ‘permissionless’.

Collaborative knowledge creation
Wikipedia is often the first site for getting quick information on various topics. As interest in the term ‘blockchain’ has been quite intense lately, it is of the highest importance to provide a clear explanation of the concept to all interested readers out there. Wikipedia provides information on what the blockchain actually is. The community nature of the platform means that all of the information stored there is contributed by volunteers, who have the rights to add and edit articles. When editors repeatedly disagree about the content of a page, they can override each other’s contributions, sometimes making dubious and repeated additions and edits.

Permissionless vs. permissioned
The disagreement between editors this time concerned the use of the terms ‘permissioned’ and ‘permissionless’ and their application to the concept of the blockchain. The incident caught the attention of Jon Matonis, one of the most prominent members of the Bitcoin and blockchain community. Matonis expressed his concern on Twitter: “There’s a Wikipedia edit war now going on to remove terms ‘permissionless’ and ‘bitcoin’ from the blockchain entry.” Permissionlessness, or public accessibility, stands at the core of the blockchain technology used to power the Bitcoin network, and most digital currency enthusiasts seem to be aware of this fact; therefore they see little to no sense in removing this kind of information from its Wikipedia entry.

Wikipedia editing policy
According to Wikipedia’s editing policy, any edit warring can lead to certain sanctions. The platform uses a bright-line rule called the three-revert rule (3RR). It states that an editor cannot perform more than three reverts on a single page within a 24-hour period, whether it involves adding the same or different material. An edit or a series of edits that undoes other editors’ actions, wholly or partially, is considered a revert, and an edit warrior is often blocked as a result. If an edit war develops, participants are invited to discuss the issue on the talk page and try to come to a consensus, which, however, did not happen in this particular case. Sanctions were imposed against the user behind the incident, registered as Satoshilong. He was blocked from Wikipedia shortly after he started edit warring. Reportedly, earlier this year he was trying to advertise his own products on the blockchain page on Wikipedia.

One drop creates a rainstorm
Several debates were sparked over the issue of whether permissioned designs can be classified as ‘blockchain designs’. Bitcoin community member Dooglus says: “Sorry to say it, but I tend to agree with the current version of the page. Blockchains can be permissioned or permissionless.” Coinosphere expresses his doubts about whether those permissioned blockchain designs are commercially useful: “Permissioned blockchains are all but worthless.... But they're blockchains. As long as each block has a hash of the block before it, that's literally a chain of blocks, no matter what the permission settings are.” Various Bitcoin community members considered the incident an attempt to discredit the term blockchain in its relation to digital currency.
The whole dispute then took a slightly different direction, recalling the endless argument over what came first - the chicken or the egg. John Whelan says on Twitter: “So what? Before cars there were wagons and the first automobile was not the only one.” Jon Matonis replies: “Blockchains didn't exist until advent of Bitcoin. Shared ledgers (lacking immutable consensus) go much farther back w/Oracle.”

‘This is disturbing’, say Bitcoin enthusiasts
With blockchain technology being a rather new and still developing phenomenon, a lot of questions about its nature, logic and applicability are yet to be answered, and they will certainly cause even more active debates in the future. Nevertheless, this is a very worrying incident for all blockchain technology enthusiasts, demonstrating that such things can happen when people misinterpret concepts, or push their own hidden agenda while adding information or making edits to existing content. Obviously, nothing can prevent anyone from attempting to sabotage the development of the blockchain concept.
OPCFW_CODE
Rear derailleur maintenance / adjustment

I have a 9-year-old road bike with a Shimano 105 groupset (approx 20,000 miles). I clean the derailleur as best I can but have never taken it apart or replaced anything in it. Recently, when I use the small chainring, the top jockey wheel seems to rest on the chain and cassette. It makes some noise but shifting is pretty much unaffected. Are you able to give me an idea of what the issue might be before I start disassembling things or ordering new parts? What is the expected lifespan of a derailleur?

Check that the top pivot isn't seized. Try removing the derailleur from the frame. If that bolt is difficult/impossible to turn without the derailleur also turning, there lies your problem.

@JoeK if you fancy adding this as an answer, this would be the correct one.

It isn't uncommon for the mounting bolt, which acts as the upper sprung pivot on the most common Shimano derailleur design, to become stiff and seize up. What is most interesting about this failure is that shifting performance can remain very good, so the rider doesn't usually notice there's a problem until the pivot is very firmly stuck. The bolt is dismountable for maintenance (frequently never done), though it's a fiddly job, and by the time seizure occurs other pivot points could be worn enough to warrant complete derailleur replacement. The "horrible hack" way to fix the problem is to (with the derailleur removed and facing upwards) spin the bolt continuously while dribbling penetrating oil liberally all over the upper knuckle/mounting bolt interface. A cordless drill with a 5 mm bit attached may help you achieve this. I have encountered this problem on XTR M900, Deore LX M560, 600 (6400 series), Acera (current 8sp style), Tiagra 9sp, and older 105 designs, so it's quite a common failure mode, though maybe not in dry climates.

This was the problem. Here's a video showing disassembly and reassembly of the rear derailleur, which includes this pivot. In the video he uses a vice to help with reassembly - I did it with just a pair of pliers, but it was not at all easy. I thought shifting was fine before, but it's much improved now. https://www.youtube.com/watch?v=L5KWFhzIPoc

As mentioned in the other answer, the first thing that could be tried is adjusting the B-screw. This adjustment varies the gap between the jockey wheel and the cassette. Clockwise rotation of the B-screw moves the jockey wheel away from the cassette, widening the gap between them. Conversely, loosening the B-screw counterclockwise moves the jockey wheel closer to the cassette. The proper position for setting the B-screw is with the chain on the small chainwheel and the large rear cog. Measuring from tooth tip to tooth tip, there should be a 5-6 mm gap between the jockey wheel and the largest cassette cog. "An Allen key of 5 mm should just squeeze through the gap" is how I determine my adjustment. Even though your shifting seems unaffected, the jockey wheel should not interfere with or touch the cassette in any way.

I suspect that your B-tension screw is out of adjustment. Tighten the B-tension screw to move the rear derailleur away from the cassette in the small-chainring-front / big-sprocket-rear combination, i.e. the lowest gear combination that you have. The expected lifespan of a rear derailleur depends heavily on whether you get a stick in the chain destroying the rear derailleur, or whether the bicycle falls and damages the rear derailleur.
Riding on stick-free roads will obviously give a far better lifetime, and by avoiding careless parking you can also increase rear derailleur lifetime. At some point during the lifetime of a rear derailleur, you might want to change to new idler wheels. Such idler wheels are available at a fraction of the price of a new rear derailleur.

Thanks - funnily enough I did have a fall a few weeks ago, coming down on the derailleur side, but although there are a few scratches, nothing seems bent, and I'm sure I was getting this issue before the crash - I rarely use the small chainring, so it has taken me a while to investigate.
STACK_EXCHANGE
This document summarises the following topics:
- Passing the course and computing the grade
- General grading criteria for the assignment and examination answers
- Writing an essay

Passing the course and computing the grade
The course consists of 1 examination (30 points maximum) and 3 assignments (total of 45 points maximum). The total points are computed with the following formula: total points = examination points + assignment points. The maximum number of total points is 30 + 45 = 75. To pass the course, you need to both
- get at least 15 points from the examination and
- get at least 75 / 2 = 37.5 total points.

The grade is defined on a scale from 1 to 5 by the total points, with the following grade limits (lowest total points that would give a specific grade): 1: 37.5, 2: 45, 3: 52.5, 4: 60, and 5: 67.5. A short code sketch of this rule is given at the end of this section. The examination is based on the content discussed and referenced in the lectures. In the examination, you may be asked, e.g., to define concepts, solve problems (such as in the exercises of the assignments), and write an essay (see below). You can answer in Finnish, Swedish, or English. The questions will be in English only. The results of the examination will be valid until summer 2018. No extra material is allowed in the examination. We follow the Aalto University Examination Guidelines.

General grading criteria for the assignment and examination answers
The general grading criteria used in the course and summarised below are essentially the same as the criteria used in the Finnish high school matriculation examination (older version, in Finnish) for the humanities and natural sciences ("reaalikoe"), where applicable.

Signs of a strong answer:
- The answer is structured and the factual content is correct and relevant.
- There is a sufficient amount of essential information; the length of an answer or the number of details are not merits in themselves.
- Causes and consequences are discussed appropriately from different viewpoints.
- All claims made are substantiated.
- The answer indicates readiness to independently process and apply the related knowledge and skills.
- Any given source material is used appropriately.
- The student relates the knowledge presented in an answer to the larger context.
- A clear distinction is made between facts, substantiated claims, and opinions.
- The editing is to the point, clear, logical and exact; appropriate notations and conventions (in mathematical derivations, pseudocode etc.) are observed.

Signs of a weak answer:
- The answer contains factual errors.
- The ideas are presented unclearly or inaccurately.
- The presented knowledge indicates that the student has misunderstood the problem, or the presented facts are otherwise irrelevant.
- The answer is based only on opinions.

In addition to having correct factual content, a good answer should be, among other things, understandable to the examiner. It is not the duty of the examiner to try to guess what the student means, may or may not have done, or how the student has reached the conclusions. The grade will decrease if the answer is ambiguous and/or can be interpreted as wrong or incomplete by a reasonable person. (Example: if the task is to produce a visualisation while at the same time maximising the data-ink ratio, it is a good idea to explain in the answer how the data-ink ratio was maximised - instead of just producing a figure where the data-ink ratio may or may not have been maximised. Otherwise, the examiner may assume that the student did not maximise the data-ink ratio.)
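As promised above, here is a minimal Python sketch of the pass/grade rule; the thresholds are exactly those stated in the text.

def grade(exam_points, assignment_points):
    """Return the course grade (1-5), or None if the course is failed."""
    total = exam_points + assignment_points           # maximum 30 + 45 = 75
    if exam_points < 15 or total < 37.5:              # both pass conditions
        return None
    for g, limit in [(5, 67.5), (4, 60), (3, 52.5), (2, 45), (1, 37.5)]:
        if total >= limit:
            return g

print(grade(20, 30))   # total 50 -> grade 2
print(grade(14, 40))   # None: fewer than 15 examination points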
Writing an essay
The examination of this course may include writing an essay, and the assignments may include essay-like answers (e.g., Exercise 2 of Assignment 2). The purpose of this section is to describe an academic essay, what we expect of you, and what the grading criteria are. The general guidelines for grading (see above) also apply to essays. Here you can find some additional instructions specific to the essays.

A good essay is structured and logical. It is more important to emphasise the essential than to present lots of unrelated details. The essay should demonstrate readiness to independently process and apply the related knowledge and skills. It is important to understand that the essay not only measures the knowledge of relevant facts, but also the understanding of the topic as a whole. One of the course objectives is that you are able to present what you have learned in an understandable way; the essay is also a good measure of that. Quoting from the Indiana University essay exam instructions: "Your instructor is not looking for a collection of unrelated pieces of information. Rather, he or she wants to see that you understand the whole picture, i.e., how the generalizations or concepts create the framework for the specific facts, and how the examples or details fill in the gaps. So, when you're studying, try to think about how the information fits together. - - Again, while you're taking the exam, remember that it's not simply what you say or how much you say, but HOW you say it that's important. You want to show your instructor that you have mastered the material."

The problem assignment could be, for example: "Essay: History of data graphics." An essay should be written in full sentences and organised in paragraphs. A numbered list or a bullet point list is not an essay. Figures are acceptable if they help in understanding your argument, and if the figures are related to the text and referenced in the text. The essay must be understandable also without the figures. The essay should be written in fluent and correct Finnish, Swedish, or English. The handwriting should be legible (the essay will be rejected if the examiner and 1-2 randomly selected reviewers can't understand your handwriting). A typical length of an essay in this course would be 1-2 pages of hand-written text (depending on handwriting etc.). While the length of an essay is not a grading criterion in itself, an unusually short text often indicates insufficient content. An unusually long text sometimes (not always) indicates that the essay contains material that is irrelevant to the topic, and/or that the student has been unable to express the material succinctly in the context of the topic of the essay.

The essay should include sufficient and correct information that is relevant to the topic. You should limit the scope of your essay appropriately: you should not list everything vaguely related to the topic, nor should you mention all insignificant details. Failure to omit unrelated or irrelevant material will decrease your grade. It is not, however, enough just to list all relevant facts. You should write your essay in a way that shows that you understand the relationships between the concepts. That is, in addition to remembering unrelated bits from the lecture material, you should have an understanding and intuition of the topic as a whole (which is also needed, for example, to further process and apply the related skills and knowledge).
Failure to present related material in the context of the topic of the essay, or failure to show the connection of the included material to the topic and/or other parts of your essay, will decrease your grade. For example, in an essay on Tufte's theory of data graphics it would not be enough just to list and explain concepts like data-ink; you should also express how the data-ink principle relates to the objectives of Tufte's theory, practical graphical designs and so on (you should similarly show the connection of these concepts to the other parts of your essay and/or the topic of the essay). You should express your ideas in a coherent, understandable and easily readable sequence of paragraphs. The editing should be appropriate for an academic essay: use clear and logical sentences, get to the point, and avoid gratuitous abstractions or a think-as-you-go flow of ideas. (This is an academic essay, not an exercise in creative literature.) Do not just assert that something is true; substantiate your claims with facts, tests, logical argumentation etc. Make a clear distinction between facts, substantiated claims, and opinions. Your essay should have a clear structure (for example: introduction to the topic, arguments, conclusion). It may be a good idea to write down keywords and plan the "skeleton" of the essay by sketching the concepts and the structure of the essay on scratch paper before starting to write, for example by using a mind or concept map (do not, however, submit a mind or concept map as an essay answer!). You can think of the "intended audience" of your essay as a fellow student (who has not yet taken this course, but who has the necessary prerequisite knowledge to take it) who has asked you to write a short text on the topic of the essay. The essay should be understandable and useful to her. (Would your friend understand the topic and idea of the essay just by reading your text, or would she have to ask you for clarification? Would your friend at some point ask how a specific part of your essay is related to the topic or the other parts of the essay? Would your friend at some point ask "what do you mean by this" or "what is the point of writing this here"? Would your friend have to guess what you really meant? If the essay is, for example, about designing glyphs, would your friend be able to design a glyph just based on your essay?) An essay is closely related to a summary of a scientific article or a textbook chapter: one should be able to understand the main ideas of the article or the chapter just by reading the summary.
OPCFW_CODE
Adding title / subtitle to the popup.

I am trying to add a title and subtitle but it doesn't work:

.popup(isBarPresented: $isBarPresented, isPopupOpen: $isPopupOpen) { }
.popupImage(Image(systemName: "person"))
.popupTitle("1356")
.popupCloseButtonStyle(.chevron)
.popupInteractionStyle(.drag)

The image and title should be added inside the content view. So, for example, if you have a music app, and the popup presents a music player, the title—the song name—should be set as a title of the popup in the player view. You can check out the example app in this repo.

"The image and title should be added inside the content view" - I think that is the answer that I wasn't seeing. Big thanks for the response.

Please let me know how it goes. Thanks

For sure.

I tried, but the items aren't showing up. Here is my simple attempt: https://github.com/iTollMouS/WakaTime-Tracker/blob/main/WakaTime Tracker/ContentView.swift. I am not sure how to make it happen.

Did you look at the demo project?

I did, and I even copied the same exact structural code: a TabView { } with the popup presented on it and .popupTitle(...) / .popupSubtitle(...) applied, and it doesn't show.
Looking at your code, I don't think you have.

Now I see where this is going ✌️😬😋

Did you get it to work?

Damn right I did. Now I just learned how to customize it. Customizing is way better and it makes more sense than the default one (like the Music app).

Enjoy

Will do.
GITHUB_ARCHIVE
Microsoft is forging ahead with its plan to try to unify the Windows app development space with its "Project Reunion" effort. This week, Microsoft took the wraps off the next milestone on this path: the second preview of WinUI 3. Officials also provided a few updated dates for the Reunion roadmap.

What is WinUI? Microsoft execs provided a handy slide (which I saw via @gcaughey) dedicated to answering that. WinUI is the native UI platform for Windows 10, which can be used to build .NET and C++ apps for Windows 10 devices. WinUI is part of the Windows and Xbox OS shells, and "many apps, plus platforms like Xamarin.Forms and React Native for Windows," officials said. WinUI 3 is under development and is one of the key components of Project Reunion. Microsoft released Preview 2 of WinUI 3 on July 15. Preview 2 includes a bunch of fixes, along with updated documentation and walkthroughs.

Microsoft has faced a big Windows developer problem since launching the Universal Windows Platform. The company didn't convince the majority of developers to build new UWP apps and/or update their existing Win32 apps with UWP elements. Microsoft ended up with two siloed native app platforms, with uneven support between them. The Windows development team has been trying to devise a way to fix this and enable developers to simply build "Windows apps" that work on the one-billion-plus Windows 10 devices out there.

As of yesterday, Microsoft officials said they are planning to make WinUI 3 Preview 3, which will include new features and capabilities and not just fixes, available in September, concurrent with Ignite. A WinUI 3 November release (name to be determined) will be the first "go-live" preview of WinUI 3, meaning it can be used in production apps. Microsoft also moved the date when it plans to open-source the WinUI code to November. Update: Earlier this year, WinUI 3 was expected to hit general availability before the end of 2020. Based on the roadmap, it looks like this is now going to happen in 2021.

In a Q&A session following a call about WinUI 3, Microsoft officials said that the Windows Presentation Foundation (WPF) team now sits in the same organization as WinUI. ARM64 support for WPF is planned for 21H1. Officials also said that they are working on plans to improve the XAML engine in the new unified dev platform, adding that this is one of the main reasons they're separating WinUI from the core OS. (Thanks again to @gcaughey for tweeting the Qs and As.)

Microsoft's long-term vision for Project Reunion is to create a unified app platform (rather than UWP, the Universal Windows Platform) which will be applicable to both Win32 and UWP apps. The team wants to offer developers a platform on which they can build state-separated, cloud-powered modern apps that will work across all the existing one-billion-plus Windows PCs. The first announced components of Reunion are WinUI 3, the Edge/Chromium-backed WebView 2, React Native for Windows, the composition and input layers, and the C++/WinRT, Rust/WinRT and C#/WinRT libraries, along with MSIX-Core for app distribution.
OPCFW_CODE
package co;

public class Twitter {

    public static final int MAX_VERT = 11;

    public static void main(String[] args) {
        Twitter t = new Twitter();
        t.mouseAndCheese();
    }

    /**
     * Given an MxN board and a list of words, write an algorithm that will find
     * all the words on the board that are present in the dictionary.
     */

    /**
     * Mouse and cheese trap question: There is a grid, and a mouse is in one
     * cell. It has to reach the cheese. There are mousetraps and walls, meaning
     * the mouse cannot enter those cells. Find the shortest path for the mouse
     * to reach the cheese.
     *
     * Solution: The grid is represented by a graph whose vertices are the
     * accessible cells. The root of the search is the cell where the mouse
     * starts. BFS finds the shortest path (assuming all edges have the same
     * weight) in terms of number of steps. BFS uses a queue.
     */
    public void mouseAndCheese() {
        Vertex a = new Vertex("A");
        Vertex b = new Vertex("B");
        Vertex c = new Vertex("C");
        Vertex d = new Vertex("D");
        Vertex e = new Vertex("E");
        Vertex f = new Vertex("F");
        Vertex g = new Vertex("G");
        Vertex h = new Vertex("H");
        Vertex i = new Vertex("I");
        Vertex j = new Vertex("J");
        Vertex k = new Vertex("K");

        Graph graph = new Graph();
        graph.addVertex(a);
        graph.addVertex(b);
        graph.addVertex(c);
        graph.addVertex(d);
        graph.addVertex(e);
        graph.addVertex(f);
        graph.addVertex(g);
        graph.addVertex(h);
        graph.addVertex(i);
        // J and K are isolated cells (no edges); they are added so that the
        // vertex array matches MAX_VERT = 11.
        graph.addVertex(j);
        graph.addVertex(k);

        // addEdge() marks both directions in the adjacency matrix, so each
        // undirected edge only needs to be added once.
        graph.addEdge(0, 1);
        graph.addEdge(1, 2);
        graph.addEdge(2, 3);
        graph.addEdge(3, 4);
        graph.addEdge(3, 5);
        graph.addEdge(5, 6);
        graph.addEdge(6, 7);
        graph.addEdge(7, 8);

        graph.print();
        graph.findCheese("H");
    }

    /**
     * Very rudimentary queue. Keeps two pointers, head and tail, and adjusts
     * them based on the operation.
     */
    private class Queue {
        private int[] arr = new int[2 * MAX_VERT];
        int tail = 0, head = 0;
        public void enqueue(int v) { arr[tail++] = v; }
        public int dequeue() { return arr[head++]; }
        public boolean isEmpty() { return tail == head; }
    }

    /**
     * Simple array-backed stack. Not used by the BFS below; kept as a helper.
     */
    private class Stack {
        private int[] arr = new int[MAX_VERT];
        private int top = -1;
        public boolean isEmpty() { return top == -1; }
        public void push(int v) { arr[++top] = v; }
        public int pop() { return arr[top--]; }
        public int peek() { return arr[top]; }

        // Return a string representation of the stack, bottom up.
        public String toBottomUpString() {
            StringBuffer sb = new StringBuffer();
            for (int i = 0; i <= top; i++) {
                sb.append(arr[i]);
            }
            return sb.toString();
        }
    }

    private class Vertex {
        public String label;
        public boolean isVisited = false;

        public Vertex(String l) {
            label = l;
            isVisited = false;
        }
    }

    private class Graph {
        int[][] adjMatrix = new int[MAX_VERT][MAX_VERT];
        public Vertex[] vertices = new Vertex[MAX_VERT];
        int size = -1;

        public void addVertex(Vertex v) { vertices[++size] = v; }

        // Undirected edge: mark both directions in the adjacency matrix.
        public void addEdge(int start, int end) {
            adjMatrix[start][end] = 1;
            adjMatrix[end][start] = 1;
        }

        // Print the adjacency matrix (it is symmetric, so the row/column
        // traversal order does not matter).
        public void print() {
            for (int i = 0; i < MAX_VERT; i++) {
                for (int j = 0; j < MAX_VERT; j++) {
                    System.out.print(adjMatrix[j][i] + " ");
                }
                System.out.println();
            }
        }

        /** Stub: path reconstruction is not implemented. */
        public void printPath(Queue queue) {
            System.out.println();
        }

        public void printVertex(int index) {
            System.out.print(vertices[index].label + " ");
        }

        /**
         * BFS with a queue finds the cheese in the fewest steps. Note that
         * the printed labels are the BFS visit order, not the reconstructed
         * shortest path; reconstructing the path would require recording
         * each vertex's predecessor.
         */
        public void findCheese(String cheese) {
            Queue queue = new Queue();
            queue.enqueue(0);
            vertices[0].isVisited = true;
            printVertex(0);
            // Lucky: the first cell contains the cheese.
            if (vertices[0].label.equals(cheese)) {
                return;
            }
            int v2;
            while (queue.isEmpty() == false) {
                int v1 = queue.dequeue();
                while ((v2 = getUnvisitedVertex(v1)) >= 0) {
                    queue.enqueue(v2);
                    vertices[v2].isVisited = true;
                    printVertex(v2);
                    // Check whether this is the cheese cell.
                    if (vertices[v2].label.equals(cheese)) {
                        System.out.println();
                        System.out.println("Found cheese in cell " + vertices[v2].label);
                        return;
                    }
                }
            }
            // No cheese found :(
            System.out.println();
            System.out.println("Could not find the cheese");
        }

        /**
         * Find an unvisited neighbour of a vertex.
         * @param v index of the input vertex in the adjacency matrix.
         * @return index of the next unvisited neighbour, or -1 if there
         *         are no more unvisited neighbours.
         */
        public int getUnvisitedVertex(int v) {
            for (int i = 0; i < MAX_VERT; i++) {
                if (adjMatrix[v][i] == 1 && vertices[i].isVisited == false) {
                    return i;
                }
            }
            return -1;
        }
    }
}
STACK_EDU
University of Tehran 1 - Microprocessor System Design: Interrupt - Omid Fatemi (email@example.com)

University of Tehran 2 - Outline
- Interrupts
- Processor steps
- Interrupt service routine
- Input device with interrupt
- Polling vs. interrupt

University of Tehran 3 - Interrupt
The microprocessor does not check whether data is available. The peripheral interrupts the processor when data is available.

University of Tehran 4 - Polling vs. Interrupt
"While studying, I'll check the bucket every 5 minutes to see if it is already full, so that I can transfer the content of the bucket to the drum." [Diagram: input device, memory, µP instruction - POLLING]

University of Tehran 5 - Polling vs. Interrupt
"I'll just study. When the speaker starts playing music, it means that the bucket is full. I can then transfer the content of the bucket to the drum." [Diagram: input device, memory, µP instruction - INTERRUPT, with an interrupt request line]

University of Tehran 6 - Interrupt
Some terms to remember:
- Interrupt service routine
- Interrupt vectors
- Interrupt vector number
- Interrupt vector table

University of Tehran 7 - Interrupt Service Routine (ISR)
The routine that is executed when a certain interrupt request is granted. It is very similar to a procedure in assembly language, except that it ends in IRET instead of RET.

University of Tehran 8 - Interrupt Vector
The address of an ISR. Composed of four bytes:
- 2 bytes for IP
- 2 bytes for CS

University of Tehran 9 - Interrupt Vector Number
A number that differentiates interrupt requests. Take note that there can be more than one device that can request an interrupt; in order for the processor to know which device requested an interrupt, the device gives an interrupt vector number. In the 8088, there are at most 256 interrupt vector numbers (00 to FF).

University of Tehran 10 - Interrupt Vector Table
Reserved memory space where the interrupt vectors are stored. Can be viewed as an array of interrupt vectors:
- Each element of the array is four bytes in size, composed of CS and IP.
- There is a total of 256 elements in the array.

University of Tehran 11 - How the Processor Works
External action: The power is turned on or the reset button is pressed.
1. The processor is reset: DS, ES, SS, and IP are initialized to 0000; CS is initialized to FFFF; the Interrupt Flag (IF) is cleared to 0.
2. The processor fetches an instruction.
3. The processor increments the program counter (CS:IP) by 1.
4. The processor decodes and executes the instruction (if it is already complete).
5. Go back to step 2.

University of Tehran 12 - How the Processor Works
External action: A peripheral requests an interrupt by pulling the interrupt line (INTR) HIGH. (Note: this action will only have an effect if IF is set to 1, and the interrupt line must be held HIGH until an acknowledgment is given.)
6. The processor completes the execution of the current instruction before it acknowledges the interrupt.
7. The processor acknowledges the interrupt by giving a LOW pulse on its INTA' line. At this point, the peripheral may stop pulling the interrupt line HIGH.
8. Upon receiving the LOW pulse (on the INTA' line), the peripheral which issued the interrupt request should provide an interrupt vector number on the data bus. The processor stores this interrupt vector number in a temporary register.

University of Tehran 13 - How the Processor Works
9. The processor pushes the contents of the status register (a 16-bit register containing all the flag bits) onto the stack.
10. The processor clears the Interrupt Flag (IF) and the Trap Flag (TF). 11. The content of the CS register is pushed onto the stack. 12. The content of the IP register is pushed onto the stack. 13. The processor multiplies the interrupt vector number by 4. This value is the memory location (four bytes of memory) where the interrupt vector is stored. The first two bytes are copied to the IP register, and the next two bytes are copied to the CS register. 14. Go back to step 2.

University of Tehran 14 - Let's incorporate interrupts into the hardware and software. The program makes a "running LED" effect (initially moving from down to up). Every time the lowest button is pressed, it changes the direction of the movement. When the highest button is pressed, the program terminates.

University of Tehran 15 - 8088 and an output device. [Circuit diagram]

University of Tehran 16 - 8088 and an input device. [Circuit diagram]

University of Tehran 17 - 8088 and an interrupt-driven input device. [Circuit diagram: 8088 in minimum mode; address lines A0-A19 decoded with IOR'; a 74LS245 transceiver driving D0-D7; the INTR request line and the INTA' acknowledge line.]

University of Tehran 18 - 8088 and an interrupt-driven input device. [Circuit diagram: as above, with a D flip-flop (D and set tied to 5V) latching the interrupt request onto INTR until INTA' clears it.]

University of Tehran 19 - 8088 and an interrupt-driven input device. [Circuit diagram]

University of Tehran 20 - 8088 and an interrupt-driven input device. [Circuit diagram]

University of Tehran 21 - 8088 and an interrupt-driven input device. [Circuit diagram; the request uses INT 3.]

University of Tehran 22 - Example: a polling program. The program makes a "running LED" effect (initially moving from down to up). Every time the lowest button is pressed, it changes the direction of the movement. When the highest button is pressed, the program terminates.

University of Tehran 23 - The circuit. [Circuit diagram: 8088 in minimum mode; a 74LS245 (enabled by IOR' and the address decode) reads the buttons, and a 74LS373 latch (clocked by IOW' and the address decode) drives the LEDs.]

University of Tehran 24 - Trace what the program does (slide callouts: the CX loop is the delay; AH = 0 means left, AH = FF means right; the code from L4 on checks the buttons):

    mov dx, F000
    mov ah, 00
    mov al, 01
L1: out dx, al
    mov cx, FFFF
L2: dec cx
    jnz L2
    cmp ah, 00
    jne L3
    rol al, 1
    cmp al, 01
    jne L1
    jmp L4
L3: ror al, 1
    cmp al, 80
    jne L1
L4: mov bl, al
    in al, dx
    cmp al, FF
    je L6
    test al, 01
    jnz L5
    xor ah, FF
    jmp L6
L5: test al, 80
    jz L7
L6: mov al, bl
    jmp L1
L7:

University of Tehran 25 - What's the problem with polling in the sample program?
The running-LED loop takes time: the user might remove his/her finger from the switch before the "in al, dx" instruction is executed, and then the microprocessor will never know that the user pressed the button.

University of Tehran 26 - Problem with polling:

    mov dx, F000
    mov ah, 00
    mov al, 01
L1: out dx, al
    mov cx, FFFF
L2: dec cx
    jnz L2
    cmp ah, 00
    jne L3
    rol al, 1
    cmp al, 01
    jne L1
    jmp L4
L3: ror al, 1
    cmp al, 80
    jne L1
L4: mov bl, al
    in al, dx
    cmp al, FF
    je L6
    test al, 01
    jnz L5
    xor ah, FF
    jmp L6
L5: test al, 80
    jz L7
L6: mov al, bl
    jmp L1
L7:

University of Tehran 27 - Program (main) with interrupt (slide callouts: the ISR starts at 52800; AH = 88 means "end"):

    mov ax, 0000
    mov ds, ax
    mov bx, 000C
    mov ax, 2800
    mov [bx], ax
    mov ax, 5000
    mov [bx+02], ax
    sti
    mov dx, F000
    mov ah, 00
    mov al, 01
L1: cmp ah, 88
    je L4
    out dx, al
    mov cx, FFFF
L2: dec cx
    jnz L2
    cmp ah, 00
    jne L3
    rol al, 1
    jmp L1
L3: ror al, 1
    jmp L1
L4:

University of Tehran 28 - Program (ISR):

    ; assume that the ISR starts at location 52800
    mov bl, al
    in al, dx
    test al, 01
    jnz S1
    xor ah, FF
    jmp S2
S1: test al, 80
    jnz S2
    mov ah, 88
S2: mov al, bl
    iret
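As a reading aid, here is the vector-installation prologue of the main program again, annotated; the comments are mine, not from the slides:

    ; The vector for INT n occupies 4 bytes at physical address n*4:
    ; a 2-byte IP followed by a 2-byte CS.
    mov ax, 0000
    mov ds, ax        ; the vector table lives in segment 0000
    mov bx, 000C      ; vector 3: 3 * 4 = 000Ch
    mov ax, 2800
    mov [bx], ax      ; IP of the ISR = 2800h
    mov ax, 5000
    mov [bx+02], ax   ; CS of the ISR = 5000h
                      ; physical address = 5000h * 10h + 2800h = 52800h
    sti               ; set IF so the processor will recognize INTR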
If you have a TUIO-only system, please refer to this guide. If your system supports a different protocol, you may be able to run without the translator software. These multiplayer, multitouch games for your multitouch table or interpersonal computer are available for free download from the Chrome Web Store. These games are free for private home use; any distribution or commercial use requires a license. A resolution of 1920x1080 is recommended for these games. The games work best on tables around 46" in size. A GPU-accelerated 2D canvas is also required (you must have a medium- to high-end graphics card). To start playing these games on your multitouch system, do the following: right-click the three-pack icon and make sure "open full screen" is checked. Launch the three-pack and touch the flashing TUIO indicator. If the indicator goes away, you do not have a TUIO system and you are ready to play!

*NEW* 4/12/2013 - TUIO Concentration Flash is a flashcard/memory game where you are shown a set of cards and must memorize their positions. You are then quizzed on each card's location. Plays from 1-6 players on native or TUIO multitouch tables.

You can download three games in one from the Chrome Web Store using this link to the DougX.net Three Pack. The games accommodate as many as six players around your multitouch table. This package features a main window - the launcher for the 3-pack - from which you can launch the three games below. Touch and hold in any corner to configure the settings for the 3-pack. Most settings in this menu are only used on TUIO systems. Your mouse should also function as an input device unless you disable it in the configuration screen. (watch video)

Texas Hold'em (2-6 players) Players at each station can use their hand to cover their two cards, and the bottoms will "flip up", letting them check the strength of their hole cards. The betting mechanics are standard, no-limit tournament rules. You can adjust the blind levels and other settings by doing a touch+hold in any corner. Computer opponents are available on the initial screen; just touch your "buy in" area again to make the station AI. Don't do any real gambling with this, because I don't guarantee my hand-evaluation algorithm to be 100% correct! (watch video)

Missile Defender (1-4 players) Defend your cities against incoming alien missiles. Your left, right, and center cities store the ammo for your countermeasures -- so defend them at all costs. Each time you touch your play area you launch a countermeasure at the incoming missile. Your missiles make larger explosions when you fire closer to the ground, but you get fewer points for hits at low altitudes. Hold a corner for more info. Your ammo is limited in each wave, so don't squander it. As long as at least one of your cities survives, they will all be rebuilt between waves. Bonuses are awarded for undestroyed cities and leftover ammo. (watch video)

Card Golf (2-6 players) Players try to minimize their score over nine rounds. Each round consists of arranging eight face-down cards, flipping three, and then taking turns trying to get matches in your columns. If you don't have a match in a column, the cards score face value, but there are some cards worth zero. On your turn, swap out any of your cards with the center card; the card you swap out becomes the center card for the next person. You can request a random center card once per turn. Once anyone goes all-face-up, the remaining players get one more turn. Detailed instructions are available by touching and holding in a corner.
Play against up to five other humans or computer players. (watch video)
Online translation tools come in handy when we don't know a certain language. Imagine a situation where you are in a foreign country and the locals understand only their native language, but you have no choice but to ask for directions to the nearest eatery. What would you do? Or imagine this: you're browsing the net and come across an interesting site, but its contents are written in a language unknown to you. How would you read it? That's where online translators come in. And, talking about such translators, what's better than nicetranslator? Today, it is among the fastest and most convenient online translators available out there. In this piece, we will learn more about the site/app.

What does the app actually do? The app is an online translator that utilizes the famous Google API. It is one of the fastest, most efficient translation tools available today. Unlike many other similar tools, you don't need to select your source language. The app, just like Google Translate, auto-detects the source language as you type. However, you are free to pick the language yourself if you wish. In addition, you can translate to and from 52 supported languages. These include English, Arabic, Chinese, and many more. This helps you understand any language you might be having trouble with.

Features of the app Now, let us check out the features of this amazing app at a glance:
- Powered by the famous Google API, the app lets you translate texts to and from 52 different languages.
- No need to pick the source language yourself. The app detects the language you are typing in automatically. However, you can manually choose the source language with ease.
- No need to register. You can visit the site and start using the tool then and there.
- No need to use a keyboard to type accented characters. You can do so right from the site itself.

Benefits and drawbacks of the tool Let us now look at some of the benefits and drawbacks of using the site:
Benefits:
- The translator tool is very fast and reliable.
- It offers a free tutorial to make things easier.
- Translating your texts using the app/site costs no money. It offers its amazing services absolutely free of cost.
Drawbacks:
- While being a great translator, it lacks a few features of the original Google Translate.
- It doesn't have some secondary function buttons on its window for quick use when needed.

Now that you know about this awesome online translator tool, what are you waiting for? While some extra things could be added to the tool to further improve it, it's pretty remarkable even now. So, start using the app as soon as possible and take advantage of all its cool features.
Off the top of my head, when I want to "group by" month, I truncate the "date" in question to the first of the month and then "group by" that. I don't think that fits your problem, though. Suppose you have had a machine since CREATE_DT = '10/15/2012'; then what you want is to count it in Oct 2012, Nov 2012, Dec 2012, Jan 2013, and Feb 2013? And if a sister machine, started the same day, broke on 01/03/13, then you want the same counts except nothing for February? Is that right? I wonder if there is an Oracle "analytic" function that does this type of computation.

bostonmacosx wrote: Nope. Clear as mud.

Hello there. So I hope I can explain this sufficiently. To get the versions out of the way: Oracle 11g, APEX 4.1.1.

I'm going to simplify my data so that it is clear what I'm looking to do. I want to have a line chart grouped by date - let's say monthly. This is easy to do if you are dealing with one specific date and some value you can build the series against with case statements; I've done that a million times. The columns of data I'm dealing with are as follows: MACHINE_TYPE (type of machine), CREATE_DT (creation date), and RETIRE_DT (retirement date, NULL while the machine is still active).

So let's say I want to see a line chart where each line (data point) is a MACHINE_TYPE and each bin is a month. That month should count any machine with a CREATE_DT before the end of the month and a RETIRE_DT that is either greater than the end of the month or NULL (i.e., hasn't been retired yet). In the query for a chart, which looks something like

    SELECT LINK, LABEL, CASE()"", CASE()"", CASE()"" from TABLE GROUP BY ROLLUP(VALUE)

I guess I'm not seeing how to put these values together so that it walks month by month, figures out the values, and puts them in the correct "bin" of time along the X axis of the chart. I hope I'm being semi-clear, as it is hard to explain this scenario.

If time is plotted against the X axis, what measure is plotted on the Y?
=============
Instead of inadequate attempts to explain this here with fragments of code that we can't do anything with, because we don't possess the objects and data they're based on, show us something. Create the objects and some sample data in a workspace on apex.oracle.com and post guest developer credentials. Sketch the required chart or mock it up in a spreadsheet and upload it as an image or PDF so we can see what you're aiming for.

On another note, I built this table from the data, using the function below:

              01-JAN-12 01-FEB-12 01-MAR-12 01-APR-12 01-MAY-12 01-JUN-12 01-JUL-12 01-AUG-12 01-SEP-12 01-OCT-12 01-NOV-12 01-DEC-12 01-JAN-13
    ENVOS            59        59        59        59        59        59        59        59        59        59        59        60        60
    Alias            12        26        26        26        26        26        26        26        26        26        26        26        26
    Blade             9         9         9         9         9         9         9         9         9         9         9         9         9
    DataMvr

    create or replace FUNCTION ACTIVE_SYSTEMS RETURN VARCHAR2 is
      var1 VARCHAR2(4000) := '';
      start_date DATE := to_date('05-JAN-2012','DD-MON-YYYY');
      end_date DATE := to_date('08-JAN-2013','DD-MON-YYYY');
      new_start_date DATE;
    BEGIN
      new_start_date := trunc(start_date,'MONTH');
      var1 := q'!SELECT !';
      while (new_start_date < end_date)
      LOOP
        var1 := var1 || q'! count(case when create_dt<'!'||to_char(new_start_date,'DD-MON-YY')
                ||q'!' and (retire_dt IS NULL or retire_dt>'!'||to_char(new_start_date,'DD-MON-YY')
                ||q'!') then 1 end) "!'||to_char(new_start_date,'DD-MON-YY')||q'!",!';
        new_start_date := add_months(new_start_date,1);
      END LOOP;
      var1 := var1 || q'! CMS_NODE_OS.OS_TYPE||' '||node_env as envos
        from CMS.CMS_NODE
        LEFT join CMS.CMS_NODE_OS on CMS.CMS_NODE.NODE_NAME=CMS.CMS_NODE_OS.NODE_NAME
        where retire_dt is NULL
        group by rollup(CMS_NODE_OS.OS_TYPE||' '||node_env)!';
      RETURN var1;
    END;

Here is the link, which I hope makes it more clear (the UN and PW are demo/demo).

One way to achieve your goal is to create a timeline and left join your result, like in my example...

    SELECT NULL LINK,
           TO_CHAR(MONTHS_,'YYYY-MM') DATIME,
           SUM(DECODE(OS_TYPE,'AIX','1',0)) "AIX",
           SUM(DECODE(OS_TYPE,'Linux','1',0)) "Linux",
           SUM(DECODE(OS_TYPE,'Windows','1',0)) "Windows"
    FROM (
          SELECT ADD_MONTHS(TRUNC(SYSDATE,'MM'), -ROWNUM + 1) MONTHS_
          FROM DUAL
          CONNECT BY LEVEL <= (SELECT CEIL(MONTHS_BETWEEN(SYSDATE, TO_DATE('2005-01-01','YYYY-MM-DD'))) FROM DUAL)
         ) THE_TIMELINE
    LEFT JOIN MACHINES
           ON (MONTHS_ BETWEEN CREATE_DT AND LAST_DAY(NVL(RETIRE_DT, MONTHS_)))
    GROUP BY (TO_CHAR(MONTHS_,'YYYY-MM'))
    ORDER BY DATIME

This appears to work, although I don't understand what is going on. I'm trying to educate myself on LEVEL and the creation of a timeline, although there appears not to be much out there about this. We'll see what I can do; I hate using solutions without understanding them, but thanks again, Kenny
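Since the timeline subquery is what causes the confusion, here is the idiom in isolation - a minimal sketch, standalone against DUAL only. CONNECT BY LEVEL <= n against DUAL emits n rows with LEVEL running 1..n; shifting each row back LEVEL-1 months yields one row per month:

    -- Emits the 12 most recent month-start dates, newest first.
    SELECT ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -(LEVEL - 1)) AS month_start
    FROM dual
    CONNECT BY LEVEL <= 12;

The answer above does the same thing, except it computes the row count from MONTHS_BETWEEN so the timeline reaches back to a fixed start date; each generated month is then left-joined to the machines whose lifetime covers it.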
If you have pet rats, you may be wondering whether you can take them outside (as you might with a pet rabbit) or if they could live outside on a permanent basis. Well, they can't. It's almost impossible for a domesticated rat to thrive in the outside world. Here's why. Pet rats shouldn't live outside because they're not designed for it: they don't understand predators, they might get lost, they might get poisoned, and they won't get on with wild rats at all. There are substantial differences between domestic rats and wild rats that make it difficult or impossible for them to survive outside.

Can A Pet Rat Live Outside? No, it's not a good idea for pet rats to live outside of a home. They aren't bred for the purposes of living in the wild, and their lack of experience with the real world is going to cause them some serious problems. Firstly, there's the risk of escape. Your rat may love you, but being taken outside and exposed to the elements for the first time is going to freak it out. Even if your rats are incredibly well-trained and are happy to come to you when you call, they may well run away. We've never met a human being who could run faster than a scared rat can. If your rat flees, you're going to find that you never get it back again.

Then, there's the problem of predators. Your rats live in a world where there are no predators (at least, we hope they do - we're not keen on the idea of keeping pet rats in order to feed them to snakes), and if you introduce them to the outside world, they will expect the same kind of rules. That means the first time a passing cat or bird comes along, not only will the predator have the size and lethality advantages it would normally have, it will also have the complete advantage of surprise, because your rat doesn't know it's supposed to run away.

You also have the risk of your rats poisoning themselves. When they're in your home, you control their diet, which means anything put in front of them is safe to eat. They don't know what they can and can't eat in the wider world. That means they are as likely to tuck into a slug pellet or a poisonous toadstool as they are into a tasty bit of insect. So, it's a bad idea to take your rats outside at any time. We know that there are rat owners out there that still do this, but it's still a bad idea. Your rats' health is at risk, and that's not acceptable to us as pet owners.

Can Pet Rats Survive Outside? Theoretically, your rats can survive outside. After all, they are capable of withstanding most temperatures (same as people) and they certainly don't mind sun, wind, rain, etc. However, in reality, if you left your rats to fend for themselves outside, they'd not survive for very long at all. They don't have the instruction manual for the real world. We've already noted that they'd encounter predators without knowing how to handle them, and that they might eat something that kills them, but they'd also be in real trouble if they encountered wild rats. They would be prone to catching diseases from these rats, and it's likely that they'd be attacked by them (wild rats are likely to have backup from their own nest); when rats fight, it can get very brutal indeed. We'd be amazed if the average domesticated rat could survive for more than a few hours outside, and no rat would be likely to live out its full life if it were released into the wild.
The Differences Between Wild Rats & Domesticated Rats There are four substantive differences between wild rats and domestic rats, and these differences help to explain why domesticated rats aren't suited to outdoor living and thus, also, why they make such superb pets (they've been tailored to the role over years of breeding). Rats were domesticated during the 20th century and haven't got the long history of being pets that, say, dogs have. However, rats also have short lifespans and breed very quickly - there are a lot more generations of rat born in a century than there are of dog. So, now there is a marked variation between wild and domestic rats, and the main differences are in socialization, size, coloring, and adaptation to captivity. So, let's take a look at each of those in turn.

Socialization: Wild Rats Don't Like People One thing that we are absolutely certain of is that you have never been walking down the street when a wild rat has run up to you and decided to investigate who you are. That's because wild rats are very much not social creatures. They are neophobic (frightened of new things). If a wild rat comes across a human being, its first instinct is to flee. If they can escape, they will run for the hills without looking back over their shoulder at you. The only time wild rats tend to come into contact with humans is if they are trying to access the same food supply. Wild rats don't even like other rats that much. They will avoid each other, for the most part, unless they are breeding together. If you corner a wild rat, it will fight. Wild rats are not afraid of attacking when they feel threatened and are actually capable of being quite vicious. Pet rats are the complete opposite. They enjoy the company of other rats and, in fact, can find themselves becoming distressed without company. They also like people, and while it is possible for a pet rat to bite a person if it feels attacked, this is a rare event in most rat owners' lives.

Size: Wild Rats Are Smaller And Lighter Wild rats don't tend to live for a long enough period of time to get to their full size. Their lives tend to be quite brutal: not only do they have to worry about predators, they often can't find access to an adequate food supply, and if they can, they may need to fight other rats for it. So, they tend to only make it to a length of around 9-10". This is in stark contrast to the well-fed and loved domesticated rat, which has no such worries - they will reach 11-12" in most cases. A wild rat can make itself seem larger by pushing out its fur when threatened, but it almost never actually is larger. Your domestic rat is almost always fatter than a wild rat too. This is partly due to the easy availability of food and partly because there's much less physical activity in a domestic rat's life compared to a wild rat's.

Coloring: Wild Rats Tend To All Look The Same Wild rats tend to be either brown or black, though a brown rat may have lighter fur on the underbelly. Domestic rats come in a wider variety of fur colors. The white rat with pink eyes is a classic domesticated rat; they were bred to this color during the 19th century and then for their sociability in the 20th century.

Adaptation: Domestic Rats Like To Live In Captivity The biggest and possibly saddest difference between domesticated and wild rats is their reaction to being kept in captivity. Wild rats go berserk as soon as they are caged. They are terrified by the lack of any hiding places and cannot stand the exposure to bright light.
This experience is so stressful to the wild rat that it’s not uncommon for them to die of heart failure. The first litters from wild rats that survive and can mate in captivity (many cannot) tend to be smaller and less rugged than normal. It takes about 20 generations of breeding to transform wild rats into happy domesticated rats. Once rats are domesticated, they tend to enjoy being in captivity and do not find the experience at all stressful. So, as you can see – wild rats don’t do well indoors because they’re not built for it and domesticated rats have similar problems when outdoors because they are no longer built for it. This is why pet rats shouldn’t live outside: pet rats aren’t wild rats. They don’t understand the outside world and going outside is risky and trying to live outside would be fatal for them. They don’t know what to eat, where to go, how to handle predators or even how to handle wild rats.
On 11/26/11 7:06 PM, Dave Chinner wrote:
> On Thu, Nov 24, 2011 at 05:50:42PM -0200, Carlos Maiolino wrote:
>> On Thu, Nov 24, 2011 at 05:20:51PM -0200, Carlos Maiolino wrote:
>>> xfsprogs (mainly mkfs) is using the logical sector size of a volume to set the sector size of
>>> the filesystem, which means that, even on devices using Advanced Format, it can get a 512-byte
>>> sector size if that is what is set as the logical sector size.
>>> This patch changes the ioctl to get the physical sector size, independent of the
>>> logical size.
>> Just as information, this patch proposal does not change the behaviour of mkfs in case the
>> user is using libblkid, in which case mkfs will take advantage of libblkid to retrieve disk
>> topology and information.
>> I'm not sure if libblkid is the best way to retrieve the device's sector size here, since
>> it does not provide a way to retrieve the physical sector size, only the logical size, but
>> I could be very wrong.
> If libblkid exports the PBS (physical block size) as exposed in
> /sys/block/<dev>/queue/physical_block_size, then we should be able
> to get it.
> However, the issue in my mind is not whether it is supported, but
> what is the effect of making this change? The filesystem relies on
> the fact that the minimum guaranteed unit of atomic IO is a sector,
> not the PBS of the underlying disk. What guarantee do we have that
> a PBS-sized IO doesn't get torn into sector-sized IOs by the
> drive and hence only partially completed on failure? Indeed, if
> the filesystem is sector unaligned, it is -guaranteed- to have PBS
> sized IOs torn apart by the hardware....
> i.e. do we have any guarantee at all that a PBS sized IO will either
> wholly complete or wholly fail when PBS != sector size? And if not,
> why is this a change we should make given it appears to me to
> violate a fundamental assumption of the filesystem design?

I had the expectation that physical block size WAS the fundamental/atomic IO size for the disk, and anything smaller required read/modify/write. So I made this suggestion (and I think hch concurred) so that we weren't doing log IOs which required RMW & translation. i.e. for a 4k physical / 512 logical disk, wouldn't we want to choose 4k?

Ok, if we have mismanaged the alignment and aligned to logical, not physical, then I guess there would be an issue... but at that point we've already messed up (though not catastrophically I guess)...
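For readers following along, the logical/physical distinction the thread turns on can be queried directly from a block device. A minimal sketch (my illustration, not part of the patch) using the standard Linux ioctls BLKSSZGET (logical sector size) and BLKPBSZGET (physical block size, available since 2.6.32):

    /* sectsize.c: print logical vs. physical sector size of a block device. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <block-device>\n", argv[0]);
            return 1;
        }
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        int lss = 0;           /* logical sector size, bytes */
        unsigned int pbs = 0;  /* physical block size, bytes */
        if (ioctl(fd, BLKSSZGET, &lss) || ioctl(fd, BLKPBSZGET, &pbs)) {
            perror("ioctl");
            close(fd);
            return 1;
        }
        /* On an Advanced Format disk this typically prints 512 vs. 4096. */
        printf("logical %d, physical %u\n", lss, pbs);
        close(fd);
        return 0;
    }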
Ultra MegaMax Dominator (aka UMMD) is my third 3D printer design/build. I used much of what I learned from its predecessors, and tried a few experiments, resulting in a very high quality machine that produces very high quality prints. Time will tell if it meets my reliability goals.

Take a look at my upgraded Stirling engine with its new gas burner and flywheel! If you take a look at my previous post you'll see how I built a 3D-printed holder for my Stirling engine kit. Since I needed a constant heat source, I added a small gas burner salvaged from an old BBQ lighter and attached it to the engine.

While building my zombie containment unit, I decided I wanted some LED displays or bar graphs to complement the containment status video running on the smaller secondary video monitor. Some other containment units used LED air pressure gauges from eBay. I wanted to achieve a similar look, but I also wanted my gauge to be software-controllable so I could change the number of segments lit in response to events in the playback of the two videos. I decided it was time to build my own LED bar graphs.

Yu Jiang Tham designed and built his own bartender robot named Bar Mixvah, which is available on GitHub: "I built a robot that mixes drinks named Bar Mixvah. It utilizes an Arduino microcontroller switching a series of pumps via transistors on the physical layer, and the MEAN stack (MongoDB, Express.js, Angular.js, Node.js) and jQuery for the frontend and backend. In this post, I'll teach you how I made it. You can follow along and build one just like it!"

Saulius Lukse writes, "Almost any sensor yields more interesting results if mounted on a moving platform. It's time to mount TOF LIDAR on two precision rotary stages arranged for pan and tilt operation. The rig provides real-time position data along with distance to an obstacle. Using simple math we can calculate position in a Cartesian coordinate system. Data is collected point by point to reconstruct a 3D object model. After 3D reconstruction and colorizing in MeshLab I got an amazing result of my room."

If you've ever watched the original Star Trek, Captain Kirk and crew spend a lot of time mapping new parts of the galaxy. In fact, at least one episode centered on them taking images of some new part of space. It might not be new, but if you have a drone, you probably have accumulated a lot of frames of aerial imagery from around your house (or wherever you fly). WebODM allows you to create georeferenced maps, point clouds, and textured 3D models from your drone footage. The software is really an integration and workflow manager for OpenDroneMap, which does most of the heavy lifting. Getting started with WebODM or OpenDroneMap is simple since they provide preconfigured Docker images. You don't have to worry about assembling a bunch of dependencies to make everything work. There are other mapping applications in use, too. You can see a comparison of five popular choices in the video below. WebODM isn't complete yet, but the developers intend to include mission planning and integration with mobile apps.
Sander Doherty posted an update 2 years, 3 months ago

Most iPods, besides their music features, offer high-quality video capabilities; it's one of the reasons they are one of the most versatile gadgets on the market. We all enjoy watching movies to relax and unwind, so why not convert your own DVDs to the iPod format and watch them whenever you want, wherever you are?

Transferring your DVD movies to play on your iPod, iPhone, Touch, or Nano can be simple, or in some cases, time-consuming and difficult. There are many iPod converters available, but only a few are worth purchasing: some are too expensive, some are hard to use, and many aren't reliable (they mess up sound and/or picture, or they can't convert new movies). A good iPod converter must meet the following conditions:

- User-friendly interface: the program must be easy to use.
- Fast encoding: the program must be able to encode fast so that the entire process is finished quickly.
- Versatility: the converter should be able to output video for all iPod video formats: Classic, Nano, Touch, and iPhone.
- Quality: the converted movie must offer excellent sound and picture quality.
- Error processing: the program must be able to fix any problems that appear during the converting process.
- Features: the program must have popular options implemented, such as picture quality, subtitles, different languages, etc. It's also helpful if the program has a tutorial/help section where the user can learn how to take advantage of extra features.

Another important thing regarding conversion from a DVD to an iPod is the ability to zoom from widescreen to full screen. A good converter should have this feature implemented in order to allow for a clean transition. The program should allow inexperienced as well as advanced users a satisfactory experience. For inexperienced users, a good converter should have an option for automatically adjusting settings to achieve the best results. For advanced users, there should be the ability to manually adjust a wide variety of parameters. A good iPod converter should also allow converting from both NTSC and PAL DVD movies.

Excellent iPod converters can be found for around $40.00, and they'll do everything, and often more, than some of the more expensive programs. You can try to go the inexpensive route, using two or three free programs in combination (Videora, DVD Shrink, Handbrake, etc.), but I found that the free programs involved a large number of steps, produced mixed results, and wouldn't convert many new DVD movies (those produced over the past four years). Don't get me wrong, I'm a thrifty guy, and if there's a cheap way to do things, I'm usually on board, but in this case, the hassle just wasn't worth it. The advantages offered by a good converting program far outweighed the trial-and-error hassles I found using outdated free programs.

Converting your DVD movies to play on an iPod allows one to watch their favorite movies and TV programs while on the bus or subway, on airplanes or long drives, or whenever you have a few free moments. DVD conversion software is getting more and more popular, as it facilitates the perfect transition from the "big screen" to the "little screen".
<?php

namespace Krak\Mw;

use RuntimeException, SplMinHeap, Psr\Container\ContainerInterface;

/** compose a set of middleware into a handler

    ```
    $handler = mw\compose([
        function($a, $b, $next) {
            return $a . $b . "e";
        },
        function($a, $b, $next) {
            return $next($a . 'b', $b . 'd');
        }
    ]);
    $res = $handler('a', 'c');
    assert($res === 'abcde');
    ```

    @param array $mws The set of middleware to compose
    @param Context|null $ctx The context for the middleware
    @param string $link_class The class to use for linking the middleware
    @return \Closure the composed set of middleware as a handler
*/
function compose(array $mws, Context $ctx = null, $link_class = Link::class) {
    if (!count($mws)) {
        throw new \InvalidArgumentException("Cannot compose an empty set of middleware.");
    }

    $last = new $link_class($mws[0], $ctx ?: new Context\StdContext());
    $head = $last->chains($mws);
    return function(...$params) use ($head) {
        return $head(...$params);
    };
}

/** creates a composer function */
function composer(Context $ctx = null, $link_class = Link::class) {
    $ctx = $ctx ?: new Context\StdContext();
    return function(array $mws) use ($ctx, $link_class) {
        return compose($mws, $ctx, $link_class);
    };
}

/** forces a guard in the composed middleware */
function guardedComposer($composer, $msg) {
    return function(array $mws) use ($composer, $msg) {
        array_unshift($mws, guard($msg));
        return $composer($mws);
    };
}

/** Group a set of middleware into one. This internally just calls the compose
    function, so the way the middleware are composed works exactly the same.

    ```
    $append = function($c) {
        return function($s, $next) use ($c) {
            return $next($s . $c);
        };
    };
    $handler = mw\compose([
        function($v) { return $v; },
        $append('d'),
        mw\group([
            $append('c'),
            $append('b'),
        ]),
        $append('a'),
    ]);
    $res = $handler('');
    assert($res === 'abcd');
    ```

    @param array $mws The set of middleware to group.
    @return \Closure a middleware composed of other middleware
*/
function group(array $mws) {
    return function(...$params) use ($mws) {
        list($params, $link) = splitArgs($params);
        $next = $link->chains($mws);
        return $next(...$params);
    };
}

/** Lazily create the middleware once it needs to be executed. This will cache
    the created middleware so that subsequent calls to this middleware will use
    the same generated middleware.

    ```
    $create_mw = function($a) {
        return function() use ($a) { return $a; };
    };
    $val = 0;
    $handler = mw\compose([
        mw\lazy(function() use ($create_mw, &$val) {
            $val += 1;
            return $create_mw($val);
        })
    ]);
    assert($handler() == 1 && $handler() == 1);
    ```

    @param callable $mw_gen Creates the middleware
    @return \Closure the middleware
*/
function lazy(callable $mw_gen) {
    return function(...$params) use ($mw_gen) {
        static $mw;
        if (!$mw) {
            $mw = $mw_gen();
        }
        list($params, $link) = splitArgs($params);
        $link = $link->chain($mw);
        return $link(...$params);
    };
}

/** Creates a middleware that will conditionally execute or skip the middleware
    passed in, depending on the result of the $predicate.

    ```
    $mw = function() { return 2; };
    $handler = mw\compose([
        function() { return 1; },
        mw\filter($mw, function($v) { return $v == 4; })
    ]);
    assert($handler(5) == 1 && $handler(4) == 2);
    ```
*/
function filter(callable $mw, callable $predicate) {
    return function(...$all_params) use ($mw, $predicate) {
        list($params, $link) = splitArgs($all_params);
        if ($predicate(...$params)) {
            $link = $link->chain($mw);
        }
        return $link(...$params);
    };
}

/** returns the parameters themselves. If it's a multi-param middleware, it'll
    return the params as a tuple, else it'll return the single value */
function identity() {
    return function(...$params) {
        return count($params) > 2 ? array_slice($params, 0, -1) : $params[0];
    };
}

/** always return the value passed in */
function stub($ret) {
    return function() use ($ret) {
        return $ret;
    };
}

/** Will throw an error if this middleware is ever reached. Designed to be a
    failsafe and provide a helpful error message when a composed stack of
    middleware fails to return a result. */
function guard($msg) {
    return function() use ($msg) {
        throw new Exception\NoResultException($msg);
    };
}

/** @deprecated Use Krak\Invoke instead

    invokes a middleware, checking if the mw is a service defined in a PSR container */
function containerAwareInvoke(ContainerInterface $c, $invoke = 'call_user_func') {
    return function($func, ...$params) use ($c, $invoke) {
        if (is_string($func) && $c->has($func)) {
            $func = $c->get($func);
        }
        return $invoke($func, ...$params);
    };
}

/** @deprecated Use Krak\Invoke instead */
function methodInvoke($method, $allow_callable = true, $invoke = 'call_user_func') {
    return function($func, ...$params) use ($method, $invoke, $allow_callable) {
        if (is_object($func) && method_exists($func, $method)) {
            return $invoke([$func, $method], ...$params);
        } else if ($allow_callable && is_callable($func)) {
            return $invoke($func, ...$params);
        }

        $msg = "Middleware cannot be invoked because it does not contain the '$method' method";
        if ($allow_callable) {
            $msg .= ' and is not a callable.';
        }
        throw new \LogicException($msg);
    };
}

/** utility method for splitting the parameters into the params and the next */
function splitArgs(array $args) {
    return [array_slice($args, 0, -1), end($args)];
}

function stack(array $entries = []) {
    return new Stack($entries);
}
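A small usage sketch combining the helpers defined above (not part of the library source; the behavior follows the filter() docblock example - compose() runs middleware from the last array element to the first, so the first element acts as the final fallback):

```php
<?php

use function Krak\Mw\compose;
use function Krak\Mw\filter;
use function Krak\Mw\stub;

// filter() sees the input first; when its predicate fails, control
// falls through to the fallback stub at the head of the array.
$handler = compose([
    stub('fallback'),
    filter(stub('matched'), function($v) { return $v > 10; }),
]);

assert($handler(11) === 'matched');
assert($handler(5) === 'fallback');
```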
Can you check whether the Vostro 5468 M.2 socket supports M.2 PCIe x2 or x4? When I attached a Samsung 950 Pro 512GB NVMe, it could not be detected. Many thanks.

I will try to find out the number of lanes on the Vostro 5468 and get back to you. As for the detection problem, it's probably not a matter of lanes. When you say the drive isn't detected, where exactly are you discovering that? Is it during an operating system install? Or have you gone into the BIOS and found that the drive is not listed there? If the drive is listed in the BIOS, then you need to make sure you're using the Samsung driver for proper drive detection and performance. That driver is found here: www.samsung.com/.../Samsung_NVMExpress_Driver_rev11.zip If you need help regarding how to install this driver, reply back and I'll see what I can do.

I heard back from a source who advised that the M.2 port on the Vostro 5468 is wired as SATA M.2, not PCIe M.2. So you actually could have an issue with drive type, as you had assumed before. I don't know if the Samsung NVMe drive is backwards compatible with SATA M.2 ports. You'll have to contact the manufacturer to find out.

Interesting... how about the M.2 port on a Precision 3620 tower? I plugged a Samsung 850 Pro 1TB NVMe drive into one and the 3620 doesn't see it, even in the BIOS settings. The Precision 3620 comes with a PCIe 4x slot.

I can't say why the Samsung 850 Pro 1TB NVMe isn't detecting, but I think you're probably seeing a compatibility issue. I can only offer support for drives we sell with systems. For example: part number W53CK - 1TB PCIe PM951 for the Precision 3620.

I have a Dell E7470 with the default Lite-On M.2 module. I have been trying to upgrade it to a Plextor M8PeGN for the last 4 months, but I can't get it working. At first I thought the module was the issue. The BIOS recognises the module. In the Windows 10 Pro setup I see the module and can install Windows without any issues. Only after the first boot, the boot partition is not found. When I try a new install of Windows I see the old partitions and can also see the data; it just isn't possible to format or delete a partition. After this I can't do anything anymore. I use the latest BIOS, and tried the latest SATA drivers in the Windows setup. Beyond that, I have no more ideas. I understand from Plextor that the module is compatible with the E7470, but is this correct? Anybody else with the same issues? I can't find anybody using a Plextor module with an E7470. Another question: does the Samsung 960 Pro work with the E7470?

@Justin C Thank you very much for the info you offered. As of this morning, I knew from your info that the E5470 with the i7-6820HQ has an M.2 slot supporting PCIe 3.0 x2 only, while the E5570 supports x4 with an H CPU. Can I ask if Dell's team could issue a new E5470 BIOS to support the same as the E5570? Thanks for your help in advance and best regards.

Hi Justin C, thank you very much for your info. Can I know if Dell's support team could issue a new BIOS for the E5470 with an H CPU, supporting M.2 at PCIe 3.0 x4? In general, the E5570 is similar to the E5470, while the official M.2 configuration is not. Thanks for your kind help in advance.

Yes, my 3620 has an empty 4-wide PCIe slot, as well as an empty x16 PCIe slot (the main x16 PCIe slot being occupied by an NVIDIA Quadro K620), and even a legacy PCI slot. It also had an empty M.2 slot... I even see an empty header on the motherboard for a (non-existent) second M.2 slot... Anyway, I procured a 512GB Samsung PM951 (actual capacity 476.8GB), and after swapping out the 1TB Samsung 850, the PM951 does indeed show up in the BIOS just fine.
Any update for the Latitude 7370? Just wondering if I can put a 2nd SSD in the WWAN mPCIe slot.
Re: [glas] arithmetic operations
From: Guntram Berti (berti_at_[hidden])
Date: 2005-11-04 07:48:28

On Fri, Oct 28, 2005 at 08:14:46AM -0500, Andrew Lumsdaine wrote:
> On Oct 28, 2005, at 2:23 AM, Karl Meerbergen wrote:
> > To make the choice easier, we could assume that a vector is a column (as
> > is always the case in linear algebra). In this case
> > trans(x)*y = dot product
> > x*trans(y) = outer product
> > herm(x)*y = hermitian inner product
> > This is probably the closest we can get to linear algebra notation.
> Right. I think these all make perfect sense mathematically. I wasn't
> suggesting to leave out the use of * altogether, but rather to use it
> only in the ways that make sense mathematically. And to have it always
> mean the same thing.

Another point to note is that if we use * on unqualified vectors, like u*v for the dot (inner) product, we lose associativity: (u*v)*w != u*(v*w) (if we assume * is also used for multiplication with a scalar). Therefore I personally never use * for the dot product.

So if we use operators like *, we must make sure (in addition to corresponding to mathematical convention) that their use is 100% unambiguous and can be deduced from the types. So Karl's suggestion, using trans(x)*y etc., seems to make sense, because then any multiplication involving non-scalar quantities is formally a matrix multiplication, and errors can be detected easily (at least at compile time).

If we want an operator for element-wise multiplication, we must then use something like elem(v) * elem(w) (to keep with Karl's notation) or elemwise_mul(v,w), as has been suggested before.

Concerning the dot product: if we overload the comma operator, we can probably use the notation (v,w) common in mathematics ;-) (just kidding ...)

Another remark/idea concerning the inner product (also concerning norms): sometimes it may be useful to make this a parameter of an algorithm, for instance, using different norms to compute stopping criteria, or orthonormalization using different inner products. This is perhaps an argument for preferring inner_prod(v,w) over a notation using operator *, because I feel that it is easier to influence the former using template parameters. Perhaps something like space::inner_product(v,w) ...
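To make the associativity point concrete, here is a minimal C++ sketch (a hypothetical toy vector type, not GLAS code) where operator* means dot product on two vectors and scaling with a scalar; (u*v)*w and u*(v*w) then produce different vectors:

    #include <array>
    #include <cassert>

    struct Vec3 { std::array<double, 3> c; };

    double operator*(const Vec3& a, const Vec3& b) {   // dot product
        return a.c[0]*b.c[0] + a.c[1]*b.c[1] + a.c[2]*b.c[2];
    }
    Vec3 operator*(double s, const Vec3& v) {          // scalar scaling
        return Vec3{{s*v.c[0], s*v.c[1], s*v.c[2]}};
    }
    Vec3 operator*(const Vec3& v, double s) { return s * v; }

    int main() {
        Vec3 u{{1, 0, 0}}, v{{1, 1, 0}}, w{{0, 0, 1}};
        Vec3 lhs = (u * v) * w;  // (u.v)*w = 1*w -> (0, 0, 1)
        Vec3 rhs = u * (v * w);  // u*(v.w) = u*0 -> (0, 0, 0)
        assert(lhs.c[2] == 1.0 && rhs.c[2] == 0.0);  // associativity is lost
        return 0;
    }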
There is now a new BETA version of HomeStick with lots of changes and news. The biggest news is the design. Some preview pictures at this link:

The program can be downloaded here: http://reor-software.eu/HomeStick/HomeS ... _setup.exe

These are the news/changes/remaining things to fix:

NEWS AND CHANGES
* New welcome window with more "first startup" settings.
* New design.
* You can now add more than 20 groups. It is even easier to remove groups.
* Device indicator: you will see an indicator on top when you change a device state, if the feature is activated (Visual -> Device indicator).
* You can now choose whether to send more than one command when you change the device state. The default value is 1, and you can send up to 5 commands after each other (Settings -> Synchronization).
* You can now change the position of sensors and devices by holding the mouse button on the name and dragging the mouse.
* Edit sensor value: if you need to calibrate your sensor but the device itself (for example a thermometer) lacks that feature, you can now edit the value in HomeStick.
* You can now choose whether to show the elapsed time or the time of the sensor's latest update (Settings -> Synchronization).
* You can now set up your own dimmer values (Settings -> Dimmer values).
* You can now hide ghost sensors (Settings -> Sensor data).
* You can now turn off logging of sensor data for all sensors, or for individual sensors whose data you do not want to show.
* You can now show the temperature in Fahrenheit.
* HomeStick.exe contains two *.ico files. You can change the icon yourself if you create an application link file (*.lnk).
* New features for scenarios: you can now select what your device should do when the scenario is on. Pictures can also be added to better show the state of the scenario.
* House plan.
* IP cameras will now appear with a preview picture on the start screen.
* YR.no: you can now search for locations without copy-pasting the URL from a web browser (if you get an error message, you have to select another location).
* YR.no: you can now see the forecast for other locations beyond your default location.
* New weather icons.
* Comment: you can now write a note for your stuff, for example when you changed the batteries of a sensor, or which device is placed in a room.
* You can now reset, create a backup of, and restore data (About).
* All devices should now be shown. Some "unknown" devices may show a question mark ("?"). The state of those devices cannot be changed.
* Statistics: a whole new system to present collected data from your sensors and the weather.
* Weather radars over many places around the world are included (Weather forecast). It is possible that a few radars may not load.
* You can now activate a system tray icon (Settings -> App settings). The application will not be closed when you press the "X" in the title bar; instead it minimizes to the system tray.
* You can now set up an FTP server to export files to (Settings -> Export and FTP server).
* You can now select where you want to save the default files.
* More possibilities for sensor export.
* Show all your sensors in sequence as favorites (Visual -> Favorite).
* If you have trouble with the popup menu for devices, sensors, groups etc. (sometimes it is disturbing that the menu opens too often), you can require holding CTRL while opening the menu (Settings -> App settings -> Panel popup menu).
* Main code fixes.
* Welcome window needs fixes (layout, text fields, settings, translations).
* The German language translation is not finished yet (some text will still be in English).
* Events: the offset for the sun time condition is not working. The Telldus API is not accepting "sunriseOffset" and "sunsetOffset" when these values are sent from the application.
* Events: PRO functions are not included.
* Online help and the change log are only in English.
gl_NormalMatrix replacement in shaders

First, let me apologize for asking what I am sure is an overly asked question, but all my searching and testing has resulted in nothing. I am attempting to remove all the GLSL built-ins from my shaders, and the last one for #version 150 is gl_NormalMatrix. From all my reading, it's the transposed inverse of the upper 3x3 of the product of the model and view matrices. So I have tried numerous permutations of this line (mat3 glNormalMatrix = transpose(inverse(mat3(mView * mModel)))) with no success replicating gl_NormalMatrix.

Below is a normal-mapping vertex shader; it works perfectly with the gl_NormalMatrix built-in, but this built-in is deprecated in versions > 120, thus the exercise. I am making some assumptions here, but I am pretty certain my matrices are correct: I pass in a projection, view (camera), and model (world) matrix to transform the object coordinates. Additionally, I have annotated what space I believe each vec3 to be in and what space each mat4 is intended to transform to. I don't think it's a problem with my TBN, but I'm not ruling that out either. I just assumed I'd be able to calculate a "replacement" for gl_NormalMatrix from the components passed in, but that is proving to be harder than I had suspected.

The offending line is in CalculateTBN near the bottom, with the working line uncommented and the "what I think is correct" line commented out. I don't rule out the TBN calculations, but I believe they are correct, as they work with gl_NormalMatrix. If seeing the fragment shader or my TBN calculations will help, I will post them, but I suspect those components are a side matter to this activity.

Also, I know this shader is not optimized, but I'm following the mantra: make it work (built-ins), make it work right (remove built-ins), make it fast (move some or most of the matrix math back to the CPU). Maybe that exercise will solve this problem, but I suspect I should be able to replicate a replacement for gl_NormalMatrix in the shader as well. Additionally, any insight into making it better is always appreciated.

Notes on what's happening when rendering a planet with normal maps: With gl_NormalMatrix, the lighting seems correct and stable relative to the light source. The normal map looks correct, and speculars stay with the camera angle. With my attempts to replicate gl_NormalMatrix, the dark side rotates with the globe, and speculars do odd things like moving in opposite directions or culminating at points on the globe, among other odd effects. The normal mapping seems to be correct, as it still looks accurate, but with the shading and speculars moving it can be hard to tell.
Any insight is greatly appreciated...

uniform mat4 mProjection;           // to clip space (projection)
uniform mat4 mView;                 // to view space (camera)
uniform mat4 mModel;                // to world space (world)

uniform vec3 vLightSourcePosition;  // world space
uniform vec3 vCameraSourcePosition; // world space

attribute vec3 a_vVertex;           // incoming vertex (object space)
attribute vec3 a_vNormal;           // incoming normal (object space)
attribute vec2 a_vTexel;            // incoming texture coordinate
attribute vec3 a_vTangent;          // to tangent space

/* ------------------------------------------------------------------------------------------ */

varying vec2 vTexCoord;
varying vec3 vLightDirection;       // tangent space
varying vec3 vCameraDirection;      // tangent space

mat3 CalculateTBN(vec3 vNormal, vec3 vTangent);

void main()
{
    vec3 vVertex = a_vVertex;       // object space
    vec3 vNormal = a_vNormal;       // object space
    vTexCoord = a_vTexel;

    // Calculate the positions in object space using the inverse of the model matrix
    // (move from world space to local/object space)
    vec3 vLightPosition = vec3(inverse(mModel) * vec4(vLightSourcePosition, 1));   // object space
    vec3 vCameraPosition = vec3(inverse(mModel) * vec4(vCameraSourcePosition, 1)); // object space

    mat3 TBN = CalculateTBN(vNormal, a_vTangent);
    vLightDirection = TBN * (vLightPosition - vVertex);   // tangent space
    vCameraDirection = TBN * (vCameraPosition - vVertex); // tangent space

    // Calculate the vertex position by combining the model/view(camera)/projection matrices
    gl_Position = mProjection * mView * mModel * vec4(a_vVertex, 1); // clip space
}

mat3 CalculateTBN(vec3 vNormal, vec3 vTangent)
{
    mat3 glNormalMatrix = gl_NormalMatrix;                             // works
    //mat3 glNormalMatrix = transpose(inverse(mat3(mView * mModel)));  // what I think is correct -- does not work

    // Calculate the inverse TBN (to transform from object space to tangent space)
    vec3 n = normalize(glNormalMatrix * vNormal);
    vec3 t = normalize(glNormalMatrix * vTangent);
    vec3 b = cross(n, t);
    mat3 TBN = transpose(mat3(t, b, n));                               // to tangent space
    return TBN;
}
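A hedged observation on the symptom (my sketch, not a confirmed fix): transpose(inverse(mat3(mView * mModel))) is indeed the textbook replacement for gl_NormalMatrix, but it yields eye-space vectors, while vLightPosition and vCameraPosition above were moved into object space with inverse(mModel). If the lighting vectors stay in object space, the TBN can be built from the object-space normal and tangent directly, with no normal matrix at all:

// Sketch: object-space TBN, matching the object-space light/camera math above.
mat3 CalculateTBN(vec3 vNormal, vec3 vTangent)
{
    vec3 n = normalize(vNormal);      // object space
    vec3 t = normalize(vTangent);     // object space
    vec3 b = cross(n, t);
    return transpose(mat3(t, b, n));  // object space -> tangent space
}

Alternatively, do all lighting in eye space: transform the light and camera positions with mView instead of inverse(mModel), and pass the normal matrix as a mat3 uniform computed once per draw on the CPU rather than calling inverse() per vertex.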
If your company uses Eloqua's marketing automation platform (MAP), you've likely invested considerable time and effort in managing lead data and audience segments. The next step is to integrate Eloqua and Interaction Studio so you can leverage data collected in each system and build richer customer and prospect profiles to gain a deeper understanding of each lead. Using data from Eloqua helps you to deliver more meaningful visitor experiences using Interaction Studio, and passing behavioral and analytics data from Interaction Studio helps you better target your Eloqua campaigns.

Interaction Studio Classic Only
- The contents of this article are intended for customers using Interaction Studio (formerly Evergage Classic). Do not adjust your beacon version to downgrade or upgrade.
- The Visual Editor Chrome Extension will no longer be available starting January 1, 2023. For more information, see this knowledge article.

The first step is to set up an OAuth connection to your Eloqua account.
- Log into Interaction Studio as an Administrator
- Select Third Party > Integrations
- Select Eloqua
- Select Setup and copy the OAuth Callback URL
- Log into Eloqua and create a new app in the AppCloud Developer Area
- Paste the OAuth Callback URL you copied from Interaction Studio. You will need to know the new OAuth Client ID and Client Secret for the following steps.
- Return to Interaction Studio Third Party > Eloqua
- In Configure Connection, enter the following information (located in Eloqua Cloud Services) in the fields provided:
  - Client ID
  - Client Secret
- Click AUTHORIZE

Configure Match Fields
On the Eloqua Integration Synchronization tab, Contact Match Fields connects fields in Interaction Studio with fields in Eloqua for user identification. By default, the email address fields in Interaction Studio and Eloqua are linked. If you need to link additional fields, please contact Support.

Configure Segment Synchronization
On the Eloqua Integration Segments tab, you can synchronize Interaction Studio segments to Eloqua and vice versa. You must create any Interaction Studio segments or Eloqua lists before you can push to or pull from Eloqua.
- Click Push a New Segment or Pull a New Segment
- Select the Source Evergage Segment or Source Eloqua Segment
- When pushing a new segment, the Target Eloqua Segment will be autofilled; when pulling a new segment, enter the Target Evergage Segment
- Click OK

Configure Field Synchronization
On the Eloqua Integration Fields tab, you can select which attribute fields should be pulled from or pushed to Eloqua. Fields being pushed to Eloqua must be created in Eloqua before they can be configured on the Fields tab. The destination field name and label are also configurable.
- Click Push a New Attribute or Pull a New Attribute
- Select the Source Evergage Field
- Select the Eloqua Custom Field
- Click OK

Configure Campaign Detection
On the Eloqua Integration Campaigns tab, you can configure third-party campaign landing URLs to track different information about a campaign. When a matching URL is clicked for the first time, a campaign is created and will appear in Interaction Studio CAMPAIGNS. Any subsequent clicks will register impressions for the campaign. The parameter-mapping steps follow below.
- Click Add Parameter Mapping
- Set the Parameter Type
  - Landing Page URL Query Parameter
  - Referral URL Query Parameter
- Enter the Parameter Name
- Set the Target Type (optional)
  - Campaign ID
  - Campaign Name
  - Campaign Source
  - Campaign Medium
  - Campaign Attribute
  - Campaign Experience ID
  - User Attribute
- Enter the Target Field Name (optional)
- Click OK

What should I do if I have trouble with the synchronization?
If the synchronization does not work as expected, please click Contact Support above and complete the form.

In the Push Fields section, what does each of the Source Evergage Fields do?
- ID - The unique Interaction Studio ID. It is an anonymous ID for non-logged-in users.
- Evergage Link - The link to the user profile screen
- First Active - The day and time the visitor was first tracked by Interaction Studio on your site
- Last Active - The day and time the visitor was last tracked by Interaction Studio on your site
- Engagement - A scoring metric configured in Interaction Studio to show the engagement of a visitor with your site
- Engagement Trend - An indicator of whether engagement for the visitor has increased or decreased since the last synchronization
- Segment Names - A list of the segments that the visitor belongs to
- Total Visit Count - A count of the total number of visits a visitor has made to the site since Interaction Studio began tracking
- Total Action Count - A count of the total number of actions a visitor has completed on the site since Interaction Studio began tracking
- Lifetime Value (retail only) - The total revenue the visitor has generated since Interaction Studio started tracking on your site

When you choose one of the push fields, it creates an Eloqua field name and label. Will the system create those automatically, or do we need to create those fields in Eloqua? And can they be fields or Custom Data Objects?
Eloqua will auto-create the Eloqua fields. Interaction Studio doesn't support custom data objects. The custom fields Interaction Studio creates are on Contacts.

Are there any specific naming conventions required for the field names & labels?
Interaction Studio will auto-generate an Eloqua field name for push fields, and all field names will be prefixed with "evg_". You also have the option to change the prefix or the field name that will be passed.
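As a concrete illustration of the campaign detection mapping configured above (the parameter names here are hypothetical examples, not product defaults): if your emails link to a landing page such as https://www.example.com/landing?cmp=spring-sale&src=email, you could add two Landing Page URL Query Parameter mappings - one with Parameter Name cmp and Target Type Campaign Name, and one with Parameter Name src and Target Type Campaign Source. The first click on such a URL would then create a campaign named spring-sale with source email in Interaction Studio CAMPAIGNS, and subsequent clicks would register impressions for it.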
Could this alternative hash-based MAC construction be as secure as, or even more secure than, an HMAC?

1. Begin hashing (Key||Message).
2. Encrypt the hash state (or some part of it, such as the first 128 or 256 bits) with the key.
3. Add the encrypted hash state to the hash and return the result.

This could be roughly described as Hash(Key||Message||Encrypt(Previous hash state))

There are several properties to note about this:

It may be faster than an HMAC in practice, especially for shorter messages (modern hardware-accelerated encryption is up to 5x-10x the speed of hashing).

Encrypting the hash state in stage (2) is a pseudo-random permutation, rather than just a PRF, and so may have stronger security guarantees (this is the job of the more math-oriented people [i.e. you guys/girls] to try to analyze, though).

A practical advantage of HMAC, though, is that it can work with arbitrary "secrets" that are not necessarily pseudo-random, but this construction would only work if the "secret" is an actual key associated with some cipher (though the "secret" could of course be hashed to yield a key) - this may not be a significant limitation in practice, as in most cases it is used with an actual key.

(Also consider a "weaker" version where stage (1) only performs Hash(Message) [i.e. without the key].)

[Non-expert alert: I'm not a cryptographer or even a computer scientist, so I don't have the knowledge or capacity for the type of precise mathematical reasoning that's needed to analyze this. I guess that's why I'm asking it here...]

If you are concerned with the speed of a MAC and have hardware-accelerated AES encryption, you definitely want to consider CBC-MAC with the length of the message at the start, and right-padding of the message with zeroes; this is demonstrably secure (when using a key dedicated to the MAC), and even standardized as ISO/IEC 9797-1:2011 Padding Method 3. As they put it: "The [first] block consists of the binary representation of the length (in bits) of the unpadded [message], left-padded with as few (possibly none) '0' bits as necessary to obtain a [128-]bit block".

(continued) If for some reason the length of the message is not known at the beginning of the MAC, there's the more elaborate CMAC, or OMAC2. Do not use straight CBC-MAC with a variable-length message.

Sure, I'm already using CBC-MAC! (And I'm aware that prepending the length or encrypting the last block is also secure for variable-length messages.) I just asked this out of curiosity... :)

Security

The level of security is likely to depend on the cryptographic primitives - the actual hash function and cipher - used. It is very likely that you can construct a function that is insecure, e.g. where the same cipher is used for both the hash function and the encryption. So you need to prove that the hash function and the encryption primitive are not influencing the security, even though they are using the same key. It is much easier to use a single PRF or PRP and prove that secure. Using one function also greatly simplifies implementations.

Implementation issues

An implementation issue is that intermediate hash states are often not defined or clear. SHA-1 and SHA-2 do have "logical" intermediate states after a block is hashed, as the hash output is basically (part of) the final state. The final state for these hash functions does not significantly differ from the intermediate state. This is not obvious for other hash functions. It is, for instance, not the case for Keccak - the winner of the SHA-3 contest - as well as most other SHA-3 candidates.
Using an intermediate hash state should therefore be avoided.

Expected properties

I'll iterate over the expected properties in order:

"It may be faster than an HMAC in practice, especially for shorter messages (modern hardware-accelerated encryption is up to 5x-10x the speed of hashing)."

Usually performance is less of an issue for shorter messages. I would argue that it is only faster for shorter messages; the effect on larger messages will be negligible. Even then, it would require the cipher implementation to be cached (and, in the case of e.g. Java, optimized) in addition to the hash function.

"Encrypting the hash state in stage (2) is a pseudo-random permutation, rather than just a PRF, and so may have stronger security guarantees."

A hash function, especially within HMAC, already provides very good security guarantees. It remains to be seen if using a PRP would have any advantages; I would expect it to be less secure.

"A practical advantage of HMAC, though, is that it could work with arbitrary "secrets" that are not necessarily pseudo-random, but this construction would only work if the "secret" is an actual key associated with some cipher."

One of the more annoying aspects of CBC-based MACs is the fixed key size. This is for instance rather annoying when using the MAC as a building block for a key derivation function (KDF). So this is - in my opinion - a significant limitation.

CPU hash acceleration

Note that there are also CPU-level optimizations for hash functions. Well-known ones include the Intel SHA extensions, the VIA PadLock security suite, and the Sun UltraSPARCs. This should be pretty readable for a non-expert. Don't hesitate to ask if it is not.

Thanks for your time! It seems like there's a rather large gap between "being able to use cryptographic constructs correctly" or "being able to implement cryptographic functions" and "being able to reason about and prove properties of cryptographic constructs". I guess the first or second should be sufficient for myself. Since I once worked on a [JavaScript] SHA-1 implementation (mostly to improve the performance of an existing one), I assumed the type of hash function used would have an intermediate state that's equal or very close to the output at that point.

@Anon2000: in crypto, perhaps more than elsewhere, the devil is in the details. If you look closely at a typical SHA-1 implementation, the state has the 160-bit chaining variable so far, the length so far (usually 64-bit, could be in bits or bytes), and the message-block-not-hashed-yet (usually up to 511 bits, whose length may or may not be tracked separately). If you want a portable implementation enciphering or hashing that, you need to take care of all these details, including endianness and the order of the various fields. If you consider only the chaining variable, you must be careful about padding.

I would expect the hash function 1. to be precisely positioned at the end of a block and 2. to process a final block for this to work. And it could probably be made to work. But the above shows that there are probably too many snags for it to become a generic scheme. Note that for Keccak you would not need the encryption at all. You could also have a look at GCM mode / GMAC for accelerated authentication.
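To make the inventory of a typical SHA-1 "state" given in the comment above concrete, here is a hedged sketch of what an implementation usually carries between update calls. The field names are illustrative, not from any particular library:

// Illustrative only: the full "hash state" of a typical SHA-1 implementation
// is more than just the 160-bit chaining variable.
struct Sha1State
{
    public uint[] H;          // 5 x 32-bit chaining variables (160 bits)
    public ulong BitLength;   // message length so far, usually in bits
    public byte[] Block;      // up to 64 buffered, not-yet-hashed bytes
    public int BlockCount;    // how many bytes of Block are filled
}
// Enciphering or serializing this portably means fixing the byte order and
// layout of every field, and deciding how padding is handled - exactly the
// snags the comment above warns about.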
@fgrieu I didn't actually implement SHA-1 from scratch, but worked carefully on an existing (correct) implementation. But anyway, the nice thing about writing software (compared, say, to writing papers) is that you can implement an algorithm without a 100%, complete understanding of it, and then test it against an existing implementation. So in this case it was aggressively tested against OpenSSL by bit-by-bit comparison of the output for random inputs of various lengths.

@MaartenBodewes Seems like there are many technical nuances to the somewhat vague concept of a "hash state", and now that I look at the SHA-1 code again I do realize there's a 512-bit block, padding, variables, etc. I didn't originally consider that anyone would take the idea this far and actually think about the practical details of implementing it. So I guess this may actually work with some hash functions more easily than with others. Sure, I'm aware there are other (probably better) methods out there. Interestingly, HMAC itself could have hashed the "hash state" and added it, instead of using a different outer hash. That might not have as strong properties, though (I have no idea; it may not matter in practice or in theory, and it might just make things overly complicated).

@MaartenBodewes According to this answer, as long as the desired hash size is smaller than or equal to the block size of the cipher, E(H(Plaintext)) is secure (only when used with a block cipher). To get more than 128 bits (though some applications, such as the one I'm working on now, don't need extreme collision resistance), one could use a non-standard cipher like Rijndael with a 256-bit block size. [Of course, on some platforms and algorithms encryption is faster than hashing - where CBC-MAC or CMAC would be better choices - but on some the opposite is true.] Or alternatively E(H(Ciphertext)), of course, which removes the need to decrypt before authenticating.
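For concreteness, a minimal sketch of the construction from the question, with one big simplifying assumption flagged up front: mainstream APIs don't expose the raw internal hash state, so the finished digest of Hash(Key||Message) stands in for the "previous hash state" here. AES in single-block ECB plays the pseudo-random permutation; SHA-256, the key size, and all names are illustrative choices, not part of the original proposal.

using System;
using System.Linq;
using System.Security.Cryptography;

static byte[] ProposedMac(byte[] key, byte[] message)
{
    // Stage 1: hash (Key || Message). The digest stands in for the
    // internal hash state (an assumption - see above).
    using var sha = SHA256.Create();
    byte[] state = sha.ComputeHash(key.Concat(message).ToArray());

    // Stage 2: encrypt the first 128 bits of the "state" with the key,
    // i.e. apply a single-block pseudo-random permutation.
    using var aes = Aes.Create();
    aes.Key = key;  // assumes a 16/24/32-byte key; EncryptEcb needs .NET 6+
    byte[] encState = aes.EncryptEcb(state.Take(16).ToArray(), PaddingMode.None);

    // Stage 3: the tag is Hash(Key || Message || Encrypt(state)).
    return sha.ComputeHash(key.Concat(message).Concat(encState).ToArray());
}

As the accepted answer notes, the hard part is not the data flow pinned down here but proving that hash and cipher don't undermine each other when they share a key.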
STACK_EXCHANGE
How to query DateTimeOffset with DocumentDb

Assume I insert records into Azure DocumentDb of the following model:

public class Message
{
    [JsonProperty(PropertyName = "tid")]
    public string Id { get; set; }

    [JsonProperty(PropertyName = "start")]
    public DateTimeOffset StartAt { get; set; }
}

They both automatically get stored as strings. I want to be able to query StartAt, so I added a RangeIndex on it. I used the Azure Portal to verify that the index works properly. With that out of the way, I load up the DocumentDb .NET SDK and try the following query:

var since = DateTimeOffset.UtcNow.Subtract(duration);
return Client.CreateDocumentQuery<T>(Collection.DocumentsLink)
    .Where(m => m.StartAt > since)
    .AsEnumerable();

But I get the error:

[DocumentQueryException: Constant of type 'System.DateTimeOffset' is not supported.]
Microsoft.Azure.Documents.Linq.ExpressionToSql.VisitConstant(ConstantExpression inputExpression) +3204
Microsoft.Azure.Documents.Linq.ExpressionToSql.VisitBinary(BinaryExpression inputExpression, TranslationContext context) +364
Microsoft.Azure.Documents.Linq.ExpressionToSql.VisitBinary(BinaryExpression inputExpression, TranslationContext context) +349
Microsoft.Azure.Documents.Linq.ExpressionToSql.VisitScalarLambda(Expression inputExpression, TranslationContext context) +230
Microsoft.Azure.Documents.Linq.ExpressionToSql.VisitWhere(ReadOnlyCollection`1 arguments, TranslationContext context) +55
Microsoft.Azure.Documents.Linq.ExpressionToSql.VisitMethodCall(MethodCallExpression inputExpression, TranslationContext context) +799
Microsoft.Azure.Documents.Linq.ExpressionToSql.Translate(Expression inputExpression, TranslationContext context) +91
Microsoft.Azure.Documents.Linq.ExpressionToSql.TranslateQuery(Expression inputExpression) +46
Microsoft.Azure.Documents.Linq.SQLTranslator.TranslateQuery(Expression inputExpression) +20
Microsoft.Azure.Documents.Linq.<ExecuteAllAsync>d__7.MoveNext() +177
System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) +179
System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) +66
System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd(Task task) +30
Microsoft.Azure.Documents.Linq.<GetEnumeratorTAsync>d__10.MoveNext() +632

Is there a way to execute a type-safe query without changing the underlying model?

You can perform a range query on dates, but not as a "type-safe" query. This is because DocumentDB does not have a date-time data type. Instead, DocumentDB adheres strictly to the JSON spec for supported data types (String, Number, Boolean, Array, Object and Null). As a result, you get the exception Constant of type 'System.DateTimeOffset' is not supported when trying to query directly on a DateTimeOffset using the LINQ provider. By default, the DocumentDB client SDK serializes date-time object properties as an ISO 8601 formatted string, which looks something like this: 2014-09-15T23:14:25.7251173Z. Adding a range index on strings allows you to perform string range queries on the date. You could serialize var since = DateTimeOffset.UtcNow.Subtract(duration); as an ISO 8601 string to perform the query in your code snippet. Check out this blog post for a more in-depth discussion of working with dates. Please note that the blog is slightly out of date, as support for string range indexes and queries has been added since the post was initially authored. I recommend testing by storing various times in various time zones/offsets and ensuring ordering is as expected.
It's just that comparing strings will work only if they're all in the same timezone, i.e. UTC, ending in Z. This conversion will discard the captured timezone/offset information, which may be problematic for your application - or worse, may become problematic in the future should you ever need to work out the observed local time of a historic event. Countries change their timezone rules over the years, so it can be significant work. Apply caution in finance and audit apps.
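Putting the answer and the caveat above together, here is one hedged way to run the range query (property names come from the question; the query shape is an assumption, not the only option). The boundary is normalized to UTC so the constant carries the same trailing 'Z' as the stored ISO 8601 strings:

// Boundary in the same round-trip ISO 8601 form the SDK writes, forced to
// UTC so the string comparison against 'Z'-suffixed values is meaningful.
var since = DateTimeOffset.UtcNow.Subtract(duration)
    .UtcDateTime.ToString("o");   // e.g. 2014-09-15T23:14:25.7251173Z

// String range comparison against the serialized "start" property.
var query = Client.CreateDocumentQuery<Message>(
    Collection.DocumentsLink,
    new SqlQuerySpec(
        "SELECT * FROM m WHERE m.start > @since",
        new SqlParameterCollection { new SqlParameter("@since", since) }));

If documents may carry non-UTC offsets, a safer design is to store an additional numeric epoch property and range-query that instead.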
STACK_EXCHANGE
...website with a design framework like Twitter Bootstrap 3, Foundation 5, UIkit, Ink, Gumby, and/or Semantic UI. Someone that has experience with PHP, CMS, Ruby on Rails, HTML5, CSS, JavaScript and possibly WordPress. You must be highly skilled and currently have lots of website development experience. I will need to see examples of your work and be...

...a database-less CMS platform. == Problem == We're finding it hard to use and customize Grav CMS, and the business owner wants to manage their own content without needing a developer. == Project Description == 1. Convert the current site to a WordPress theme. 2. Add the ability to manage and list properties in the WordPress admin. 3. Add the...

I am looking for a highly experienced WordPress developer - someone that can pick up our front-end code, create a WordPress theme from it, and hook it onto a fully functional CMS. The site pages are a little more advanced than usual WordPress sites. The site needs to use some plugins that we recommend and be optimised. The...

We need a team of front-end and back-end developers that can work with our team in the US. Programmers should have expertise in Umbraco, AngularJS, CSS, C# Entity Framework, Web API, SQL, [url removed, login to view], and Windows Azure. Responsive design and agile programming methods are required. This is a long-term project, but we will start with a smaller test project.

I need someone to finish off a trial page. This page will be the basis for a non-CMS, static and basic HTML web site -- NO WordPress, Drupal or whatever your favourite CMS is. I'll be using the finished page as a sort of template to create all other pages for a simple 10-page web site. The page will be based on the largely finished design I have...

...something beautiful. We would prefer that you build the CMS in WordPress so that we can easily manage it moving forward. If successful, there will be the opportunity to work on another website for another business that I run. To be successful, we require someone that understands: - WordPress - HTML/CSS - Conversion optimisation - Graphic design. I...

Hello, I need a 1-page HTML/CSS template. This will be regular old-fashioned HTML/CSS code. No WordPress or Drupal or CMS system. Just plain HTML/CSS. 1 page. The site is [url removed, login to view] and it will have the same general LAYOUT as the attached site. Come up with a nice mystical colour scheme and an astrology-related background. Leave...

We are looking for a long-term WordPress developer who can do custom CSS/PHP/HTML/CMS-based websites. If you can speak Tamil and English, please contact me. My budget is $5-$10. Task description: customize the WordPress bottom area. My website is [url removed, login to view] and I want the footer to look like this website [url removed, login to view]. Let me know if yo...

We are looking for a long-term WordPress developer who can do custom CSS/PHP/HTML/CMS-based websites. If you can speak Tamil and English, please contact me. Task description: customize the WordPress bottom area. My website is [url removed, login to view] and I want the footer to look like this website [url removed, login to view]. Let me know if you can do it. Wha...

...[translated from Polish] will include coding: websites, landing pages, modifications to an already existing site, mailing creatives. We are looking for people who know HTML, CSS and the WordPress CMS. You don't need to have experience in every area - after you send in your application, we will agree on what you feel best at and how you could help us.
Company: don't bid on this project.

WordPress developer needed on an hourly basis. We need a person who will work in the Indian time zone, 7 AM to 10 PM, and who knows the WordPress CMS, WooCommerce, HTML, CSS and custom PHP work very well. We don't require a company for this work; we need an individual worker. Please write "hourly", otherwise I will hide your bid.

I have an HTML template I have purchased that needs editing to match my business branding guidelines, with photos and copy added. And, if possible, integrated into a CMS like WordPress so I can edit it myself later on. Basic HTML editing needed, as well as JS and CSS.

Hello designers, I need a WordPress menu/bar designed to fit the theme of my site. 1. I need the menu/bar to look like the railing on a tour boat (created by illustration as opposed to an image). 2. I need the header to be made transparent via a CSS snippet in my custom CSS section. This will allow the top of the background image to be...

...retail store (non-corporate). It will be simple, as they do not need many features. The framework should be lightweight. HTML, CSS and PHP are OK. It will be hosted on a Linux server. No CMS should be used to build the site (like WordPress). Ask me if you have any questions. I don't read long project proposals - please identify why you're the best person for this.
OPCFW_CODE
Cloud computing has been a major shift in the last decade. Many organizations are taking advantage of the assorted aspects of cloud and network computing for profit: businesses no longer need to spend a lot of money to set up a server or rent a data center. As a result, cloud providers are gaining prominent standing in today's IT world. All the same, starting a flourishing career on Azure is not easy, though it comes with many benefits. It is therefore crucial to take the right first step and follow the right path to reach your destination. The following discussion will help you discover the basics of becoming an Azure-enabled developer, with a picture of the skills and types of certifications.

Determining the Basics of Azure

Before we get into the basics of becoming a developer with specialized Azure skills, let's first consider the basics of Azure itself. Azure is simply Microsoft's cloud platform. It can help businesses save money on their cloud computing. Companies can choose an Azure developer certification Bootcamp in Texas, or any other US state, to support their needs. Adding and removing Azure IT resources can improve business flexibility, which is why Azure is considered a bright computing platform on which to build a cloud computing career.

Reasons for Choosing an Azure Developer Job

Let's start with why one would choose to become a developer with Azure skills. Azure offers important characteristics that visibly increase quality. It is reported that Azure adds nearly 120,550 new customers each month, thanks to the platform's unparalleled attributes. The best features you will find with Azure are Windows compatibility and cloud interoperability. Job prospects are also good: development professionals can enhance their professionalism and credibility with Azure.

The Basic Skills Needed for an Azure Developer

Skills and abilities are the most crucial part of preparing for an Azure role. You need to learn the skill and knowledge areas covered by your Azure courses. One might start demonstrating an Azure skill set by practising three key skills; improving these basics can be considered the foundation of the path to becoming an Azure developer.

- The first element is knowledge of Microsoft products. One should be comfortable working with Microsoft products and be clear about tools like PowerShell, which Azure supports.
- One must have solid knowledge of programming languages like HTML5 and JavaScript, since such languages are often used in Azure application development.
- The third most important skill you need to secure Azure development work is cloud and networking knowledge.

Certifications Demanded for Azure Development Jobs

The most important factor that can lead to a development role on the Azure platform is certification. An Azure developer certification Bootcamp is a promising tool to differentiate yourself from the competition. Get yourself an Azure Developer certification study guide to learn more about this certification.
With an Azure Developer certification, you can demonstrate your potential and the skills you need; it also probes a professional's ability to work effectively in the Azure cloud. Certification can provide tangible evidence of your knowledge of, and ability to develop on, Azure. Employers choose certified professionals over candidates who do not have well-documented development experience on cloud platforms. Microsoft Certified Azure Developer Associate is one of the best ways to get started in your Azure development career. The exam you need to pass for this badge is AZ-203. This Microsoft Azure solution-development exam tests and demonstrates your ability to develop Azure platform solutions, develop for Azure storage, implement Azure security, optimize and troubleshoot Azure solutions, and connect to and consume Azure and third-party services.

Focusing on the Particular Strengths of an Azure Developer

In addition to the basic capabilities above, an Azure developer must also acquire certain strengths. These are very important when designing and developing cloud software with Microsoft Azure:

- There is great demand for resource efficiency in the Azure developer's job description. Organizations move to the cloud to reduce the cost of resources; because Azure uses consumption-based pricing, good developers always keep an eye on cost.
- Qualified Azure programmers must be capable of creating scripts for their environment. This captures the environment as code, and the scripts can be used to recreate a development environment from code.
- One also needs the ability to plan for contingencies. Azure developers should carefully consider the most likely problems, so that they can take immediate action when adverse events occur.
- The last special talent Azure developers need is choosing the right service. With more than 95 core Azure services to choose from, developers can get lost!

Why Would Anyone Consider Becoming an Azure Developer?

The popularity of Microsoft Azure makes it an ideal platform for sharp people. According to MarketWatch, Microsoft Azure counts 120,550 new customers each month. Azure's expertise, flexibility, effective pricing, and hybrid interoperability make it an attractive choice for many businesses, so the demand for Azure developers remains high. It is therefore wise to develop the necessary skills in an area that offers many opportunities for work and further growth. Even if you are already a pro, getting to know Azure will increase your value and your marketability. All you have to do is acquire the right skills, and Azure professional development will help you do that.
OPCFW_CODE
Windows 10 VHD download free

Convert social media comments into sales, automatically invoice shoppers, and manage all aspects of your business with a #1 comment-selling platform and total e-commerce solution. Learn More.

A powerful solution that streamlines the way you manage compensation. For HR professionals. Because our solution is configurable, nearly everything, from the type of pay program accommodated to the organization of data, is tailored to your needs. And it can work alongside your existing HR software. No more using a cumbersome, unconfigurable system. Our online, cloud-based solution gives you the versatility to administer pay programs with ease. How smart.

The supported OS platforms are Windows up to the newest Windows 10, on x86 and x64 architectures. For interested developers, the source code is included and licensed under the GPL.

Our easy-to-use software helps property managers, board members and security to streamline and digitize their operations. Run a more efficient, cost-effective and connected condo. Our web-based software is made for HOAs, condos, and properties. A simple way to streamline your maintenance and operations for both property managers and tenants.

It is useful when a migration or hypervisor switch is required. VM Migration Assistant has...

The application is written in C and Linux with JavaFX. The main reason to create this project is to learn these languages and to practically understand how they work; this type of project inspired me a lot.

Ubuntu Server image for virtual machines, file format - VHD. The image should run in VirtualBox or VMware. The image was made especially for web developers: a fast start for any web project, with different frameworks or CMSs. Login: user. Password: user.

Uses "dislocker" and "qemu" as backends.

This utility now also supports physical disks. This tool uses the VirtualBox vboxmanage tool on the command line. VHDs can be deployed via BitTorrent, multicast, or direct download. BITS deployment may be implemented in future releases. This system was developed to ease VHD file recovery: I found a lot of different solutions, but most involved me having to modify a lot of code.

Windows 10X Build (VHD) : Microsoft : Free Download, Borrow, and Streaming : Internet Archive - File hashes. Disk2vhd is a utility that creates VHD (Virtual Hard Disk - Microsoft's Virtual Machine disk format) versions of physical disks for use in virtual machines. Download free virtual machines to test Microsoft Edge and IE8 to IE and Microsoft Edge Legacy using free Windows 10 virtual machines you download and... Download Pre-Installed VirtualBox Images (Linux, Windows & Others) - Sysprobs.
VHDX File Viewer: a free tool to open, read, view or explore VHD files of Hyper-V Virtual Hard Disk images in Windows 10, 8, 7, XP, Vista etc. There is no need to mount VHD/VHDX files to preview their contents. Free viewing of both GPT and MBR disk types.

VHD2ISO: this tool converts a virtual hard disk to a bootable ISO file. The driver is used, like WinVBlock and FiraDisk, to access GRUB4DOS-mapped drives from Windows. At the moment, VHD file-disk and RAM boot are supported. The supported OS platforms are Windows up to the newest Windows 10, on x86 and x64 architectures.

Windows 10 Enterprise - 20 GB download. This VM will expire on 7/10/. VMware, Hyper-V, VirtualBox, Parallels. This evaluation virtual machine includes: Windows 10, version 20H2 (), Visual Studio (latest as of 4/15/21) with the...
OPCFW_CODE
I'm having a hard time understanding what I'm supposed to do. The only thing I've figured out is that I need to use yacc on the cminus.y file. I'm totally confused about everything after that. Can someone explain this to me differently so that I can understand what I need to do?

We will use lex/flex and yacc/Bison to generate an LALR parser. I will give you a file called cminus.y. This is a yacc-format grammar file for a simple C-like language called C-minus, from the book Compiler Construction by Kenneth C. Louden. I think the grammar should be fairly obvious. The Yahoo group has links to several descriptions of how to use yacc. Now that you know flex, it should be fairly easy to learn yacc.

The only base type is int. An int is 4 bytes. Booleans are handled as ints, as in C. (Actually, the grammar allows you to declare a variable as type void, but let's not do that.) You can have one-dimensional arrays. There are no pointers, but references to array elements should be treated as pointers (as in C). The language provides for assignment, IF-ELSE, WHILE, and function calls and returns.

We want our compiler to output MIPS assembly code, and then we will be able to run it on SPIM. For a simple compiler like this with no optimization, an IR should not be necessary; we can output assembly code directly in one pass. However, our first step is to generate a symbol table. I like Dr. Barrett's approach here, which uses a lot of pointers to handle objects of different types. In essence, the elements of the symbol table are identifier, type and a pointer to an attribute object. The structure of the attribute object will differ according to the type. We only have a small number of types to deal with. I suggest using a linear search to find symbols in the table, at least to start; you can change it to hashing later if you want better performance. (If you want to keep it in C, you can do dynamic allocation of objects using malloc.)

First you need to make a list of all the different types of symbols that there are (there are not many) and what attributes would be necessary for each. Be sure to allow for new attributes to be added, because we have not covered all the issues yet. Looking at the grammar, the question of parameter lists for functions is a place where some thought needs to be put into the design. I suggest more symbol table entries and pointers.

The grammar is correct, so taking the existing grammar as it is and generating a parser, the parser will accept a correct C-minus program, but it won't produce any output, because there are no code snippets associated with the rules. We want to add code snippets to build the symbol table and print information as it does so. When an identifier is declared, you should print the information being entered into the symbol table. If a previous declaration of the same symbol in the same scope is found, an error message should be printed. When an identifier is referenced, you should look it up in the table to make sure it is there; an error message should be printed if it has not been declared in the current scope. When closing a scope, warnings should be generated for unreferenced identifiers. Your test input should be a correctly formed C-minus program, but at this point nothing much will happen on most of the production rules.

The most basic approach has a global scope and a scope for each function declared. The language allows declarations within any compound statement, i.e. scope nesting. Implementing this will require some kind of scope numbering or stacking scheme.
(Stacking works best for a one-pass compiler, which is what we are building.)
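Purely to illustrate the record shapes described above (identifier, kind, and a pointer to a kind-specific attribute object, plus the scope and reference bookkeeping the assignment asks for), here is a hedged sketch. The assignment suggests plain C with malloc; every name below is invented, and C# is used only to show the structure compactly:

// One symbol-table entry per declared identifier.
enum SymbolKind { IntVariable, IntArray, Function }

class SymbolEntry
{
    public string Name;
    public SymbolKind Kind;
    public int ScopeLevel;     // which nested compound statement declared it
    public bool Referenced;    // drives the unreferenced-identifier warning
    public object Attributes;  // kind-specific: array size, FunctionAttributes, ...
}

// Kind-specific attributes, e.g. for functions (parameter lists need thought,
// as the notes say - one SymbolKind per parameter is a simple start).
class FunctionAttributes
{
    public System.Collections.Generic.List<SymbolKind> Parameters =
        new System.Collections.Generic.List<SymbolKind>();
    public bool ReturnsInt;    // the only non-void return type is int
}

// Scope stacking: push a list of entries on scope entry, pop on exit
// (warning about unreferenced entries), and look identifiers up from the
// innermost scope outward.

A linear scan of each scope's list is enough to start with, as the notes suggest; swapping in a hash table later only changes the lookup, not these records.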
OPCFW_CODE
Wonderful novel: The Beautiful Wife Of The Whirlwind Marriage - Chapter 1218 - Be My Woman

Novel - The Beautiful Wife Of The Whirlwind Marriage - Chapter 1218 - Be My Woman

"Wow, Su Wan, you've hit it rich. This apparel is rather costly."

Seeing that Su Wan had walked out, Gu Jingyu shook his head. Everybody else hadn't noticed her at first, but after Gu Jingyu looked over with squinting eyes, they looked over too.

"What do you think?" He drew in closer again, looking at her. Oh God, the effect was really fantastic.

They came out in succession, having changed their apparel. He pointed at Su Wan. The group of young ladies immediately divided the clothes up amongst themselves passionately.

"This is a new style that arrived two days ago. They only have one piece of each of these outfits."

This was especially so when she recalled how she had hugged him and begged him in the past, begging him to... save her...

"No need. I'll gift them to everyone. Everyone can keep the item they are wearing. As for you..."

She quickly took the dress off, smoothed out the creases, and wanted to look for Gu Jingyu's telephone number. However, she recalled that it was impossible for her to have his number, and there wouldn't be many people in the production crew who would have it either. Nonetheless, it was already too late for her to turn back now. She could only take the trip back to her hostel, change out of the attire, and then return it to him.

"How is this related to me being your woman?"

Meanwhile, he continued to say that Su Wan should stay behind. This made them all look toward Su Wan enviously. They kept another piece of clothing for Su Wan.

Besides, weren't he and Lin Che...

Everyone looked at her. When they looked at Su Wan, they realized that Gu Jingyu liked her outfit. The eyes of the shop's staff promptly gleamed in delight. "Alright, I'll have them wrapped up right away."

She looked very petite. When she wore this dress, she gave off the air of a high school student.

Gu Jingyu looked at her. "Didn't I already say it? I want you to be my woman."
OPCFW_CODE
Many times we feel the need to use Orchestration Variables in Maps, but there is no straightforward way to achieve it. Below are two ways to do so.

1. Create a dummy schema with elements for the values needed in the map. In the example below, I created a dummy schema with an element "Age", because I wanted to access the intAge variable in the map.
2. Assign a dummy value in the map, and later assign the actual value in a Message Assignment shape. In the sample below, I assigned "Null" (from a String Concatenate functoid) and later, within the Message Assignment shape, assigned the value from the orchestration variable (intAge).

Which method to choose depends on the points below.

- The 1st technique should be used when you need the variable extensively, e.g. to perform a DBLookup. In the example below, the "intAge" value is used as a lookup value and is required for making decisions.
- The 2nd technique is preferable when you want to populate a few fields without much processing.

Now let's come to our example. In this post I will use a sample BizTalk solution to portray both techniques. We receive a simple XML document with information about a person: "Name", "DateOfBirth" and "City". In the orchestration we calculate the person's age, and in the map, on the basis of its value, we decide whether the person is eligible for voting, marriage and drinking or not.

Source & Destination Schema Image

As I needed the value of the Age variable to make decisions in the map, I created another schema with just one element, "Age", as shown below.

I created msgAge with the code below (a hypothetical sketch of this expression appears at the end of this post).

Now the next step is to create a map which takes two schemas as input - the input schema and the Age schema - and generates one output schema.

- Drag and drop the Transform shape.
- Double click -> New Map -> select two in Source Message (msgAge & msgIn in our case).
- Select the output schema as the destination.
- Check the box to launch BizTalk Mapper.

This generates a map with two parts (InputMessagePart_0 & InputMessagePart_1) under the "Root" record. It actually generates a multipart message, with parts referring to each input/output message. Now in the map you can play around with the value of the Age element. In our sample application I perform the following checks on its value:

- Age >= 18 -> eligible for voting
- Age >= 22 -> eligible for marriage
- Age >= 25 -> eligible for drinking alcohol

Don't miss the irony here: in India, being a few months younger than 25 allows you to marry or to choose your country's Prime Minister (and other representatives), but makes you a criminal if you drink alcohol.

Read more about how to develop and test maps with multiple source and destination schemas here.

Now let's demonstrate the second technique to assign values in the orchestration. For this you need a Message Assignment shape along with the Transform shape inside the Construct shape, as shown below.

Message Assignment Shape Image

In the map, assign Null (or any other dummy value) to all the elements for which orchestration variables are required. For example, in our sample the Age element is assigned NULL using a String Concatenate functoid. Later in the orchestration, using the Message Assignment shape, assign the required values as shown above.

Hope it was helpful. Download the sample application from here. A Word version of this blog is here.

@Gmail, @Facebook, @Twitter, @LinkedIn, @MSDNTechnet, @My Personal Blog
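For completeness, a hypothetical sketch of the expression code behind both techniques. BizTalk expression shapes use a C#-like syntax; xmlDoc is assumed to be a System.Xml.XmlDocument orchestration variable, and the schema namespace, message names and element names are invented, not taken from the sample solution:

// Technique 1: construct the one-element Age message from the variable
// (inside a Construct shape containing a Message Assignment shape).
xmlDoc.LoadXml("<ns0:Root xmlns:ns0=\"http://Sample.AgeSchema\"><Age>"
    + intAge.ToString() + "</Age></ns0:Root>");
msgAge = xmlDoc;

// Technique 2: inside the same Construct shape, after the Transform,
// overwrite the dummy value the map wrote into the output message.
xpath(msgOutput, "/*[local-name()='Root']/*[local-name()='Age']") = intAge.ToString();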
OPCFW_CODE
What Is Recruiting? Recruiting is the stage of the employee life cycle in which prospective candidates are sourced, interviewed and assessed in order to identify the best fit for a job opening. This process of identifying who will be hired is typically based on required skills, relevant background and whether or not the person is culturally additive to the organization. In other words, is the person a great fit based on their qualifications and alignment with the company’s values? How Does Recruiting Work? Recruiting typically follows this process: sourcing, recruiter screen, hiring team assessment and finally, the offer stage. Sourcing begins the process of recruiting, and can be classified as either passive or active. Generally, recruiters will utilize both ways of sourcing. - Passive sourcing is when a recruiter sources by posting open positions on the company’s career site and various job sites in order to collect applicants. - Active sourcing is when a recruiter directly reaches out to prospective candidates who closely match job requirements for an open position. Recruiters do this outreach over various online networking platforms and social media, as well as during in-person events. Once a recruiter successfully sources a candidate, they’ll follow up with a recruiter screen with the candidate. This initial screening phase allows the recruiter to get a sense of the candidate’s background and skills in order to identify if they should move forward in the process. This also allows the candidate to get more information about the role and the company, as well as ask questions that are important to their decision making. Hiring Team Assessment Candidates that pass their initial screens will make their way to the hiring team interview phase where there is a greater focus on assessing the candidate’s skills. Depending on the role’s required skills, these assessments can vary and include coding assessments, writing challenges, Q&As and more. Candidates who are identified to be the best fit for the role based on the hiring team’s needs are extended an offer. In this stage of the process, the recruiter’s goal is to close the candidate and get them to join the team. Recruiters will usually go over the total compensation for the role, including salary, performance bonus, benefits, perks and additional forms of compensation such as stock options. What are 3 Methods of Recruiting? - Internal Recruiting is the act of recruiting by promoting job openings to existing employees as a way to provide career development opportunities and help with employee retention. - Technical Recruiting focuses on recruiting candidates for technical positions, such as engineering and product roles. Some of these roles may include software engineers, network architects, product managers and QA specialists. - Executive Recruiting focuses on recruiting candidates to fill executive level positions, such as CFO, CTO, and CEO. Executive recruiters may also work on other senior management roles that may not be at C-level. Why Is Recruiting Important? As a key part of the HR team, the recruiting function enables the company to compete for and acquire talent that is essential to the success and growth of the organization. An effective recruiting team is able to attract and recruit a diverse pool of candidates for the hiring teams with whom they partner. 
Being one of the first experiences of the employee life cycle, recruiting is critical to providing an excellent experience for potential employees, while building talent brand awareness for a company. What Are the Benefits and Risks of Recruiting? Effective recruiting can boost positive brand awareness as a recruiting team’s outreach extends to more people. As candidates consistently experience a great interview process, it is more likely for them to engage with the company’s brand and refer others. When ineffective, recruiting can negatively affect a company’s talent brand. In cases where candidates go through a bad interview experience, it is likely for those candidates to speak negatively about the company, whether by leaving a review online or through word-of-mouth.
OPCFW_CODE
How to execute a SQL script for multiple users (schemas) - ORACLE

I have an Oracle database with 3 users (user1/user1, user2/user2, user3/user3), and I have the same number and structure of tables under each of the 3 users. My issue: when I want to update, for example, table1 under user1, I want to update the same table1 under user2 and user3 in order to keep them in sync. I want to execute my script once, without logging in to the other users and executing the same script, because I will have other users (4 and 5) and it would take a lot of time to execute one script under every user. I'm wondering if there is a tool or technique to execute a script once for multiple users. Thanks in advance.

You can use SQL*Plus to build a script that runs your script once per user. Say you have a script like this:

script.sql:

select count(1) from obj;

and you want to run it for two users; you can build a script like the following:

scriptLauncher.sql:

conn alek/****@xe
@d:\script
conn hr/****@xe
@d:\script

The result:

SQL> @d:\scriptLauncher
Connected.
  COUNT(1)
----------
        15
Connected.
  COUNT(1)
----------
        35

Of course, this means that you have to store your passwords in a plain-text file, which may be a security risk to take into consideration.

In addition to this answer, consider granting proxy access to the various users to a single user, and connect to each proxy user through that single user, like this: conn alek[hr]/****@xe. Finally, parameterize the password for that single user. This should avoid the security concerns.

Try this if your script has only DML operations:

SET SERVEROUTPUT ON
DECLARE
    TYPE my_users_type IS VARRAY (5) OF VARCHAR (256);
    my_users my_users_type;
BEGIN
    my_users := my_users_type ('USER1', 'USER2', 'USER3'); -- your users
    FOR i IN my_users.FIRST .. my_users.LAST LOOP
        EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = ' || my_users (i);
        @D:\yourscript.SQL; -- your script, with path
    END LOOP;
END;
/

yourscript.SQL should contain only DML commands split by ";" and nothing else. Or you can modify it: create a procedure that has a string parameter; the parameter will be a single DML command, which will be executed in a loop for all users. Or just use:

ALTER SESSION SET CURRENT_SCHEMA = USER1
@D:\yourscript.SQL;
ALTER SESSION SET CURRENT_SCHEMA = USER2
@D:\yourscript.SQL;
ALTER SESSION SET CURRENT_SCHEMA = USER3
@D:\yourscript.SQL;

There are no limitations on yourscript.SQL here, except don't use schema names as prefixes in your DML operations.

P.S. The executing user should have enough rights.

I would have a folder in which I keep the SQL script and an OS script which connects to each schema in turn and runs the script. Or run it with a user which has rights over all the other 3, and personalize the script so it runs for each.
STACK_EXCHANGE
Gentlemen club woodbridge Pawtucket

If this is a pervasive, never-ending fantasy that you think about all the time, and you are certain this is a desire of your very soul, then it is time to contact me and begin the most fulfilling journey of your life. W4m. Are you seeking a special someone? Just someone to text with when I'm bored.

Relation Type: Lonely Ladies Search Dating Man. Seeking: I Want NSA Sex. Relationship Status: Single

How long does it take to get from Smithfield to Atlanta? More details. Premier gentleman's club in Providence, RI. Nearby, select an option below to see step-by-step directions and to compare ticket prices and travel times in Rome2rio's travel planner. Phone. Website. A change in scenery? Book at least 14 days in advance for a cheaper Saver Fare. VIP Bottle Service. All this I judge from the zeal with which the said gentleman was constantly. We have a HUGE outdoor patio, open seasonally, that has its own bar. Service gratuity is not included in the bottle prices. Atlanta to Savannah. Atlanta to Asheville. Foxy Lady. We are open Sunday - Thursday, and open until 2am Friday and Saturday.

welcome to ri dolls

He sold the land to the Gentlemen's Driving Club (later renamed the Piedmont Driving Club), who wanted to establish an exclusive club and racing ground. Check out our online store. Write a description for this list item and include information that will interest site visitors. You'll find three bars, three stages, three luxurious champagne rooms, VIP rooms and fully nude, exquisite ladies, in our square-foot Vegas-style club. The road distance is. Come see me…. You will receive future updates! Bachelor parties with the dolls!
We have our main area topless, and we have our all-nude room, Eldorado; both have their own bars, and have more than enough room for you to sit comfortably with one or a few of our girls. Our Location. Got what it takes? Meet The Dolls.

Greyhound is a leading bus company based in Dallas, Texas, serving destinations across North America, Mexico and Canada.

What better way to celebrate your bachelor or bachelorette party? For more information….

Browse Our VIP Services. Write a description for this list item and include information that will interest site visitors. Phone 1. gobuses. facebook.

A group from the Triangle Club orchestra furnished the music, and President Dodds... George Waddington is with the Blackstone Valley Gas & Electric Co., Pawtucket, R.I.; John Woodbridge is treasurer of Pan American Airways, New York.
OPCFW_CODE
Having problems connecting to the internet while in-game. Origin is connected (online). I restarted my router, uninstalled Origin, repaired ALL SIMS 4 PACKS, uninstalled Sims 4, updated everything, checked my firewall, checked my DNS, and reinstalled everything (Origin and Sims 4). I've done practically everything to get online in the Sims 4 game. I just can't connect to the Gallery whatsoever. Can anyone help me?

Product: The Sims 4
Which language are you playing the game in? English
How often does the bug occur? Often (50% - 99%)
What is your current game version number? 1681561020
What expansions, game packs, and stuff packs do you have installed? All except University, Star Wars, Knitting and Spooky
Steps: How can we find the bug ourselves? Try to connect to the Gallery
What happens when the bug occurs? The Gallery asks to connect to the internet; it just keeps trying to connect and then stops, or I get red crosses.
What do you expect to see? My Gallery working
Have you installed any customization with the game, e.g. Custom Content or Mods? Not now; I've removed them.
Did this issue appear after a specific patch or change you made to your system? No

The Gallery has never worked correctly. I can see my creations but cannot connect to see others'. I've tried rebooting the computer, clearing the cache, repairing the game, and turning it on and offline in Origin. Nothing works.

I don't work for or have any association with EA. I give advice to the best of my knowledge and cannot be held responsible for any damage done to your computer/game. Please only contact me via PM when asked to do so.

I've tried the new updated fixes, but none worked - at least for me. I'm unsure if it's the firewall at this point. For experimentation, I tried getting onto the Gallery on my laptop. It worked just fine. Same internet, same firewall settings. The only thing that hasn't happened to it yet is a particular Windows 10 update. I'm not entirely convinced this bug happened due to an OS update, though, as my Sims game was working fine before I updated to Snowy Escape. I tried just uninstalling Snowy Escape itself, and that didn't work either. So, if it's not the firewall, or the ransomware protection, OR the OS, what could it possibly be other than something in the update? Trying more tweaks to see if I find a solution. (There's a new scheduled update for the OS, so I'll see if that does anything.)

Help! Since the update I cannot access the Gallery or check my feed - anything that needs an internet connection. Here's the issue: my internet works just fine for everything else; all systems there are working fine. I did a UOTrace and spoke to chat, and went through all the steps. Then I had my techie brother come over, who suggested a complete uninstallation and reinstallation of the whole game. Did it, done, checked that off the list. I still have the same issue. I can get into Sims 4 just fine, but I can't upload any builds, download any builds from the Gallery, see what's new or being shared on the feed, check out Spark'd, and so on. I can't even (while Sims 4 is open) look at the new pack, or make purchases from there. I've tried everything I can think of. I've even cleared the cache and emptied all folders per the chat's request, but kept photos and saved files. Do I have to delete Origin completely and reinstall that, too? I have so many builds I want to upload, but can't. Can anyone help?

A Very Frustrated Simmer
OPCFW_CODE
Just spent a frustrating and tortuous hour on the phone to a poor unfortunate lady several thousand miles away in Virgin Media land, after spending all day trying to connect my new tablet to VM's TV Anywhere. I say unfortunate because she has to deal with frustrated and extremely angry people like me all day, God help her.

The bottom line seems to be that my device isn't supported, even though it runs KitKat 4.4 and I read earlier today (on Virgin's own site) that a tablet running KitKat 4.0 and above IS supported (update: just checked - it says these are the "minimum requirements", which is meaningless really if such devices are not fully supported). Great, thanks Virgin! Thank you so much! Brilliant! One of the main reasons for buying this was to use TV Anywhere. I suggested she speak to her boss and get them to change the name from TV Anywhere to TV In-A-Few-Selected-Places. Not quite as catchy, I grant you, but definitely accurate.

There are only 13 tablets that are fully supported, apparently; these consist of 5 Amazon Kindle Fire models, 2 Google Nexus models, 5 Samsung Galaxy models, and a Sony Xperia model. If you've got any one of the dozens of other tablets available (hundreds, for all I know), then you're out of luck. Frustrated doesn't even begin to describe the way I feel. Try "angry beyond words". No wonder Virgin Media deal with us poor suckers at a distance. I'll be looking into other options. There are little set-top boxes in Asda that give you dozens of channels, and they only cost about 15 quid - a very healthy option, considering my account is bleeding over £70 a month for this so-called service. As long as I can get broadband from a proper supplier, I'm off.

I'm sorry to learn that your tablet doesn't work with TV Anywhere. Even though the device itself may not be fully supported, there are still things we can check with you to see what's causing the issue. As it's not supported, it just means we're not able to raise these faults with IT. All other tablets are enabled to use TV Anywhere apart from those at the bottom of the link you gave us; those devices are enabled for Manage TiVo® only. So I can help you, what is the make and model of your tablet? Let me know. Kath_F, Forum Team. Tech fan? Have you read our Digital life blog yet? Check it out.

I had the same problem with my new tablet (I posted a request for help on the TV Anywhere forum). My tablet was running KitKat, and when I tried TV Anywhere I kept getting the message that it wasn't supported. I have since upgraded to Lollipop and TV Anywhere now works fine.

Sorry Kath, I forgot to check if there were any answers/replies to my post (probably because I was so completely cheesed off!). Only just saw your reply. Thanks for offering to help. In the About section of the Settings, it's listed as Model number: BS1078, Android version: 4.4.2, Firmware version: v4.5-eng.wyl.20151012.155028, Kernel version: 3.3.0,yonesnav@wyl#82, Build number: fiber_bs1078-eng4.4.2KOT49H20151012, Processor type: QuadCore, SELinux status: Enforcing. Most of that's Greek to me, or might as well be. Maybe it means more to you; I hope so. I can't see a brand name on it anywhere, apart from when it boots up, when it shows "Allwinner". I don't know if that's the brand or not.

I'd already downloaded the app and tried it, but I've just tried it again in case it's working now. The app opens up and I sign in. I see the list of channels. I select one and click 'Watch Now'.
A dialog pops up, saying "Watch Now [and below that] Watch on TV [appears to be greyed out], and under that Watch on Phone". I select Watch on TV. A dialog comes up immediately. It reads: "Away From Home Network [underneath] When you are away from you [sic] home network. Some options are not available. [got the makings of a proper sentence there, but didn't quite make it!] To access all options, connect to the same Wi-Fi network as your TiVo DVR." Two buttons below, OK and Connect. I click Connect. It takes a while... A dialog comes up, reading: "No TiVo DVR has been found on this Wi-Fi network. Please make sure your TiVo DVR and network are working properly." Two buttons below, OK and Need help?

I'm using the TiVo all the time and it's working okay. I'm also using the browser on the tablet and that's working fine. Is there a way to make sure the TiVo is on the same network? That might be the problem, but I assumed they already were, since I'm using them both. Maybe this is the problem, but I have no idea how to get the TiVo on the same network. Actually, the TiVo is running off the broadband, so it must be... No, hang on, the broadband is only for the computer, isn't it? The TiVo is separate. I'm confused. If they are separate, I really don't know how to get the TiVo on the same network. And I don't know how I'm supposed to know this (if this is the problem, and the possible solution). Surely this should be explained to us?

Hang on, I've just tried the Need help? option. It takes me to a VM page that indicates I need to use a Powerline adapter... no, hold on, two of them: one where the TiVo box is, the other where the Super Hub is. This is getting a bit ridiculous now. Do I need to buy these adapters? Any information gratefully received, Kath! Thanks for putting up with a non-techy guy who's a bit out of his depth!
OPCFW_CODE
What is radio-emitting gas?

What is radio-emitting gas? (I think it refers to the Magellanic Stream, but I'm not certain.) Can you add the source of your image?

Referring only to the image you present, the radio part of the image (I think this is a stacked optical and radio image) shows something called the Magellanic Stream. Despite what the Wikipedia page says, I believe it was discovered by Dieter (1971) using 21 cm observations of neutral hydrogen atoms.

The radio emission in the case of the picture you show arises from the so-called "spin flip" transition associated with changes in the relative spin of the proton and electron in a hydrogen atom. A configuration where the spins are anti-parallel has a lower energy than one where they are parallel (quantum mechanics only allows these two possibilities). The energy difference is small - only $5.87 \times 10^{-6}$ eV. In hydrogen gas in space, it is possible for the atom to make a transition from the higher-energy state to the lower one, in the course of which it emits radiation with a wavelength of 21 cm (i.e. radio waves). This is not seen in terrestrial laboratory samples because the gas is collisionally de-excited before it has the chance to emit radiation. Hence, 21 cm radiation is an example of a forbidden line, with a long radiative lifetime, which is only observed from the rarefied neutral hydrogen gas found in the interstellar medium.

The intrinsic width of the emitted spectral line is very narrow. This means that the observed wavelength of this 21 cm line is an excellent probe of the motion of the emitting gas, since the observed radio waves will be Doppler-shifted from the 21 cm wavelength at which they were emitted, according to the line-of-sight velocity of the gas. The gas in question here has been stripped from the two satellite galaxies of the Milky Way, known as the Magellanic Clouds, although the picture also shows copious 21 cm radiation arising from neutral hydrogen gas in the plane of our Milky Way.

There are many other ways that gas can be a source of radio emission. In addition to atomic and molecular transitions that give radio emission at discrete wavelengths (including the radio equivalent of lasers, called masers), there can be continuum emission caused by the acceleration of electrons in the electric fields of partially and completely ionised ions - so-called bremsstrahlung or free-free emission. There is also recombination (free-bound) radiation, and continuum radiation emitted by electrons that are accelerated in electromagnetic fields: cyclotron radiation; synchrotron radiation (if the electrons are relativistic); curvature radiation (a variety of synchrotron radiation); or even just the Rayleigh-Jeans tail of cold blackbody emission (e.g. in the cosmic microwave background). A decent introductory summary can be found here.

Well, now you will allow me to be picky: the question is "What is radio-emitting gas?", not "What do we see here?". One type of radio-emitting gas is ionized gas penetrated by a magnetic field. Also known as a plasma, the charged particles that make up this gas will start rotating around the magnetic field lines. Their paths will have a local radius of curvature equal to the gyroradius, which is the natural radius of motion of charged particles in a magnetic field. Then, because circular motion is accelerated motion, and accelerated charges in general emit electromagnetic radiation (EMR), the gyrating charges will emit EMR.
One can calculate that for typical magnetic field strengths in interstellar space this radiation will be emitted in the radio range. The details depend a lot on where the charged particles come from, what their energies are, the viewing geometry of the observer, etc., but essentially that's it.

Atomic gas emits radio waves. That is what 21 cm radiation comes from. @RobJeffries: Wow, so a partial answer is reason to downvote? Drastic. I upvoted, but such a complex explanation would benefit from PARAGRAPHS to tell the reader when he should pause to think about the previous phrase... before going on to the next one. Paragraphs give a text structure, clarify it, and prompt the reader to understand a paragraph as an actual phrase; they help him navigate and read it over again. Generally avoid starting a complex phrase with "Also known as a plasma" when you can start it with "The charged particles". "Also known as"... is not a start to a text, a simple, clear, conceptual phrase progression. I write unclearly myself, so well done you ;) @JanDoggen Indeed. And although I was, now I am not.
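To put rough numbers on both mechanisms discussed above, here is a small illustrative sketch (mine, not from the answers; the physical constants are standard values, while the 170 km/s velocity and 5 microgauss field strength are merely representative of the Magellanic Stream and the interstellar medium):

// Sketch: line-of-sight velocity from an observed 21 cm line, and the
// (non-relativistic) electron gyrofrequency in a magnetic field.
const C = 2.998e8;            // speed of light, m/s
const LAMBDA_REST = 0.211061; // rest wavelength of the HI spin-flip line, m

// Classical Doppler: v = c * (lambda_obs - lambda_rest) / lambda_rest
function losVelocity(lambdaObs: number): number {
  return C * (lambdaObs - LAMBDA_REST) / LAMBDA_REST;
}

// Electron gyrofrequency f = qB / (2 * pi * m_e), in Hz
function electronGyroFrequency(bTesla: number): number {
  const Q = 1.602e-19;   // elementary charge, C
  const M_E = 9.109e-31; // electron mass, kg
  return (Q * bTesla) / (2 * Math.PI * M_E);
}

// Gas receding at 170 km/s shifts the 21 cm line by only ~0.12 mm,
// yet the line is narrow enough for this to be measurable:
console.log(losVelocity(LAMBDA_REST * (1 + 170e3 / C))); // ~1.7e5 m/s

// A ~5 microgauss (5e-10 T) interstellar field gives f ~ 14 Hz; observable
// radio synchrotron emission comes from *relativistic* electrons, whose
// emission peaks at roughly gamma^2 times this frequency.
console.log(electronGyroFrequency(5e-10)); // ~14 Hz

Run as-is with ts-node, or strip the type annotations for plain Node.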
STACK_EXCHANGE
Like TechNet UK on Facebook. How tidy is your physical environment? Have you any cabling left lying around, or temporarily put in place, which has now become 'live'? Being the curious cats we are, we thought it would be fun to see a few photos of your server rooms. Chaos or clean? To find out, we have decided to run a monthly competition. Kicking off this month, we want to see just how chaotic or well maintained our readers keep their server cabinets. Prize 'bundles' (bundles I hear you say... yes, bundles) awarded to both 'Tidy' and 'Untidy' server cabinets each month. So are you a 'Super Server Sammy'? Or are you a 'Chaotic Cabled Colin'?

To enter via Facebook: 1. Like the TechNetUK Facebook page. 2. Upload your server cabinet housekeeping image to the TechNetUK Facebook page wall, using the hashtag #TechNetTidy.

To enter via Twitter: 1. Follow the TechNetUK Twitter page. 2. Tweet your server cabinet housekeeping image to @TechNetUk, using the hashtag #TechNetTidy.

Find full competition terms and conditions here. - The tidiest entry of the month will win a branded 'TechNet UK' cup as well as a winning competition t-shirt (as seen sported here). - The untidiest entry of the month will win a 'booby' prize (which will help you clean up your technical act), as well as a competition t-shirt.

HOW TO WIN: Winning entries will be determined by the TechNet team and at least one independent judge on the 23rd September 2013. Judging will be based on: - Originality and compliance with theme - think outside the box! Keep an eye out on our Facebook and Twitter pages for entries. Good luck to all!

Is that a picture of you with a mug of coffee in your network room? Isn't that a bit risky? You know; liquids, critical electrical infrastructure...? Looks like the core switch rack in the Thames Valley Park MTC ;) @Liam, I have hands of steel - plus I would never allow a single drop of coffee to spill.. my mornings depend on it ;). @Bill you can tell all that from just this.. Impressive! - Still waiting on our first entry to come through guys... T-shirts up for grabs :D! #TechNetTidy The problem with this competition is that I'm a bang up-to-date techy and have neither Facebook nor Twitter accounts, and have no interest in creating them.... Phil, if you would like to enter, please email a copy of your image to email@example.com; we will be more than happy to include your entry.
OPCFW_CODE
Integration Tests. Hi, please have a look at this and let me know what you think about it. I hope this saves us all the time we spend on manually testing our work.

Integration Tests: Integration tests are meant to test the project with real data. This is a semi-automated test, and a tester needs to interact with the bot for some cases during the test execution. However, a tester can select specific test(s) and run them. Code is available in the tests branch of this repo: https://github.com/TelegramBots/Telegram.Bot.Core/tree/dev

For testing, a .NET Core xUnit project is used. The test project could be run in VS, from the CLI, or on a CI host. See sample output on Travis-CI here: https://travis-ci.org/TelegramBots/Telegram.Bot.Core/builds/251359910 And this is what the bot posted in the Telegram chat while running those tests: Those arrows indicate the test cases that the tester needs to interact with.

Test Setup: A Telegram bot, of course, is needed, and its token should be provided to the runner in the appsettings.json file or via an environment variable prefixed with TelegramBot_. Other optional parameters such as UserId and ChatId could be used to send output to a specific chat/user.

Test Initiation: The bot needs to know the chat it should post to. There are 2 ways to tell it. /test Command: When the test project starts, the bot waits for 2 minutes to receive a /test command update from any chat. Once it gets that, it starts the process and outputs to that chat. Specific Chat ID: If a chat ID is specified in the configurations, the bot doesn't wait for any initiation command and sends the output to that chat.

I like this very much, always wanted some tests 👍

@pouladpld I like them, I have tried them, but for some reason it shows they all failed when I finish. Every single one was terminated by a TaskCanceledException or sth xD Though I saw that everything worked fine during execution; that seems a little bit strange to me :)

@Olfi01 This is unusual! There is a timeout of mostly 2 minutes that tests wait for the tester to interact. Past that, async operation tokens are going to be cancelled. Make sure your test bot has the following: Inline Query, Payment Provider. Unfortunately, it is easy to miss some settings, so verify that the configuration values are read correctly in tests. Just put some breakpoints and check values such as PaymentProviderToken.

@pouladpld I have done all of that; the test executed perfectly fine and even showed a success status, but after I finished everything, the errors came up. Probably it's just a bug of Visual Studio and not your fault :)

@Olfi01 Try running them from the dotnet core CLI. Does it behave the same?

@pouladpld err im busy with sth else rn, i will do that later

@pouladpld sorry, had to delete the Core repo to transfer the main repo (I have a local copy). Would you like to create a branch for developing the test project?

That's fine. I pushed them here: https://github.com/pouladpld/telegram.bot/tree/dev I am merging them right now. Too bad I didn't make a copy of the notes in the Projects section of the Core repo 😞. I merged them and all test cases are passing. https://github.com/TelegramBots/telegram.bot/blob/integ-tests/docs/wikis/tests/integ-tests.md
GITHUB_ARCHIVE
/* ---------------------------------------------------------------------------
 Function: BitArray class
 Include: bitarray.h
 Description: The bit array class can be used as a giant bit array to
 maintain various state settings.
 Copyright (C) 1993 Michael D. Lore All Rights Reserved.
--------------------------------------------------------------------------- */

#include <string.h>

#include "tree/bitarray.h"

BitArray::BitArray(unsigned numberOfBits, bool SetAll)
{
    numBits = numberOfBits;

    // Round the backing storage up to a whole number of machine words.
    numInts = numberOfBits / (sizeof(unsigned) * 8);
    if (numberOfBits % (sizeof(unsigned) * 8))
        numInts++;

    try {
        bitArray = new unsigned[numInts];
    }
    catch (...) {
        bitArray = NULL;
    }

    if (bitArray == NULL)
        throw AppException(WHERE, ERR_OUT_OF_MEMORY);

    if (SetAll)
        setAll();
    else
        clearAll();
}

BitArray::~BitArray()
{
    if (bitArray)
        delete[] bitArray;
}

void BitArray::clearAll()
{
    memset(bitArray, 0, numInts * sizeof(unsigned));
}

void BitArray::setAll()
{
    memset(bitArray, 0xFF, numInts * sizeof(unsigned));
}

// Find the lowest clear bit, set it, and return its index.
unsigned BitArray::setNext()
{
    for (unsigned i = 0; i < numInts; i++) {
        if (bitArray[i] != BITARR_ALLSET) {
            unsigned freebit = 0;
            for (unsigned j = 0; j < (sizeof(unsigned) * 8); j++) {
                // Use an unsigned literal so the shift never reaches the sign bit.
                unsigned msk = 1u << j;
                if (!(bitArray[i] & msk)) {
                    freebit = j;
                    break;
                }
            }
            unsigned rval = i * (sizeof(unsigned) * 8) + freebit;
            set(rval);
            return rval;
        }
    }
    throw AppException(WHERE, ERR_NOT_FOUND); // bitmap full!
    // return 0;
}

// Find the lowest set bit, clear it, and return its index.
unsigned BitArray::clearNext()
{
    for (unsigned i = 0; i < numInts; i++) {
        if (bitArray[i] != 0) {
            unsigned freebit = 0;
            for (unsigned j = 0; j < (sizeof(unsigned) * 8); j++) {
                unsigned msk = 1u << j;
                if (bitArray[i] & msk) {
                    freebit = j;
                    break;
                }
            }
            unsigned rval = i * (sizeof(unsigned) * 8) + freebit;
            clear(rval);
            return rval;
        }
    }
    throw AppException(WHERE, ERR_NOT_FOUND); // bitmap empty!
    // return 0;
}

// Set a bit; returns the bit's previous state.
bool BitArray::set(unsigned bitNumber)
{
    if (bitNumber >= numBits)
        throw AppException(WHERE, ERR_RANGE);
    bool rval = isSet(bitNumber);
    bitArray[index(bitNumber)] |= mask(bitNumber);
    return rval;
}

// Clear a bit; returns the bit's previous state.
bool BitArray::clear(unsigned bitNumber)
{
    if (bitNumber >= numBits)
        throw AppException(WHERE, ERR_RANGE);
    bool rval = isSet(bitNumber);
    bitArray[index(bitNumber)] &= ~mask(bitNumber);
    return rval;
}

bool BitArray::isSet(unsigned bitNumber) const
{
    if (bitNumber < numBits) {
        if (bitArray[index(bitNumber)] & mask(bitNumber))
            return true;
    }
    return false;
}

// End
STACK_EDU
use std::sync::Arc;

use crate::audio::tracker::song::SongId;
use crate::audio::consts::SONG_TRACK_CHANNELS;
use crate::sound::playback::chain_playback::ChainPlayback;
use crate::sound::sound_rom_instance::SoundRomInstance;
use crate::sound::playback::tracker_flow::TrackerFlow;
use crate::sound::playback::tracker_oscillator::{TrackerOscillator, TrackerOscillatorFlow};

#[derive(Debug, Clone)]
pub struct SongPlayback {
    pub song: Option<SongId>,
    pub(crate) chain_index: usize, // The current location in the song
    pub tracks: [ChainPlayback; SONG_TRACK_CHANNELS],
    pub(crate) chain_states: [TrackerFlow; SONG_TRACK_CHANNELS],
    pub(crate) rom: Arc<SoundRomInstance>,
    oscillator: TrackerOscillator,
}

fn default_chain_states() -> [TrackerFlow; SONG_TRACK_CHANNELS] {
    std::array::from_fn(|_| TrackerFlow::Advance)
}

impl SongPlayback {
    pub(crate) fn new(
        song: Option<SongId>,
        tracks: [ChainPlayback; SONG_TRACK_CHANNELS],
        rom: &Arc<SoundRomInstance>,
        output_sample_rate: usize,
    ) -> Self {
        let mut out = Self {
            song,
            chain_index: 0,
            tracks,
            rom: rom.clone(),
            chain_states: default_chain_states(),
            oscillator: TrackerOscillator::new(output_sample_rate),
        };
        out.set_song_id(song);
        out
    }

    pub(crate) fn tick(&mut self) -> [f32; SONG_TRACK_CHANNELS] {
        match self.oscillator.tick() {
            TrackerOscillatorFlow::Continue => (),
            TrackerOscillatorFlow::UpdateTracker => {
                self.update_tracker();
                // TODO: Should we handle this output?
            }
        };

        let mut iter = self.tracks.iter_mut();
        std::array::from_fn(|_| iter.next().unwrap().phrase_playback.instrument.tick())
    }

    /// Sets this playback to play the specified Song Id.
    /// Passing in None will mute the playback.
    pub(crate) fn set_song_id(&mut self, song: Option<SongId>) {
        self.song = song;
        self.chain_index = 0;

        // If the song is valid, update all chains to
        // use the correct indices and data
        if let Some(song) = song {
            let song = &self.rom[song];
            self.oscillator.reset_bpm(song.bpm);
            let next_chain = song.tracks[0];
            self.chain_states = default_chain_states();

            self.tracks
                .iter_mut()
                .zip(next_chain.iter())
                .for_each(|(track, next)| {
                    track.set_chain_id(*next);
                });
        } else {
            // Otherwise, just stop all of the playbacks
            self.tracks.iter_mut().for_each(|track| {
                track.set_chain_id(None);
            });
        }
    }

    /// Calls update_tracker on each chain playback;
    /// if all are done, will increment our current chain index
    /// within the song
    pub(crate) fn update_tracker(&mut self) -> TrackerFlow {
        // Call update on each of the chains, but
        // only if they should continue playing
        self.tracks
            .iter_mut()
            .zip(self.chain_states.iter_mut())
            .for_each(|(tracker, state)| {
                if TrackerFlow::Advance == *state {
                    *state = tracker.update_tracker()
                }
            });

        if self
            .chain_states
            .iter()
            .all(|state| *state == TrackerFlow::Finished)
        {
            self.next_step()
        } else {
            TrackerFlow::Advance
        }
    }

    /// Advances the tracks to the next chain within the song.
    pub(crate) fn next_step(&mut self) -> TrackerFlow {
        // Song doesn't exist, so we're done
        if self.song.is_none() {
            return TrackerFlow::Finished;
        }
        let song = self.song.unwrap();

        self.chain_index += 1;

        // Song doesn't have any more entries, so we're done
        let next_chain = self.rom[song].tracks.get(self.chain_index);
        if next_chain.is_none() {
            return TrackerFlow::Finished;
        }
        let next_chain = next_chain.unwrap();

        self.tracks
            .iter_mut()
            .zip(self.chain_states.iter_mut().zip(next_chain.iter()))
            .for_each(|(track, (state, next))| {
                *state = TrackerFlow::Advance;
                track.set_chain_id(*next);
            });

        TrackerFlow::Advance
    }

    pub(crate) fn replace_sound_rom_instance(&mut self, new_rom: &Arc<SoundRomInstance>) {
        self.rom = new_rom.clone();
        self.tracks
            .iter_mut()
            .for_each(|track| track.replace_sound_rom_instance(new_rom));
    }
}
STACK_EDU
This course intends to introduce the human brain and its processes, especially with regard to electrochemical activity and the issues of mind and consciousness. It also points toward further possibilities for research. Anyone with an interest in the human brain can join; just a basic understanding of the brain would suffice. It should be useful for students of neuroscience, psychology, and medicine, and for people working on brain-computer interfaces.

INTENDED AUDIENCE: Anyone interested in psychology, physics, and the brain. INDUSTRY SUPPORT: Health; psychology; brain-computer interface. 2135 students have enrolled already!!

ABOUT THE INSTRUCTOR: Dr Alok Bajpai was trained in psychiatry at the National Institute of Mental Health and Neurosciences (NIMHANS), Bangalore. He did his DPM and MD, is currently practicing at Kanpur, and is also the psychiatrist with the Counselling Cell, IIT Kanpur. His research interests are in the physics of the brain, sleep, and EEG.

Week 1: Brain to mind -- and how do we know it (essentially single neuron to multiple). Brain and gross specialization --- areas, right-left, association, connectivity, and our tools to learn, including EEG. Week 2: Being conscious -- dynamics --- how do we learn about it from EEG. Week 3: Cognition, memory, emotion -- normal and pathology. Week 4: Sleep, brain and future -- with interactive session.

SUGGESTED READING MATERIALS: Cognition, Brain, and Consciousness: Introduction to Cognitive Neuroscience, 2nd Edition, by Bernard J. Baars and Nicole M. Gage. Principles of Neural Science, Fifth Edition, by Eric R. Kandel and James H. Schwartz.

CERTIFICATION EXAM: The exam is optional, for a fee. Date and time of exams: April 28 (Saturday) and April 29 (Sunday); morning session 9 am to 12 noon. The exam for this course will be available in one session on both 28 and 29 April. Registration URL: Announcements will be made when the registration form is open for registrations. The online registration form has to be filled in and the certification exam fee needs to be paid. More details will be made available when the exam registration form is published.

The final score will be calculated as: 25% assignment score + 75% final exam score. The 25% assignment score is calculated as 25% of the average of the best 3 out of 4 assignments. An e-certificate will be given to those who register, write the exam, and score greater than or equal to 40% final score. The certificate will have your name, photograph, and the score in the final exam with the breakup. It will have the logos of NPTEL and IIT Kanpur. It will be e-verifiable at nptel.ac.in/noc.
OPCFW_CODE
RSpec::Matchers.define :be_within_seconds do |expected_time, seconds_variation|
  match do |actual_time|
    # First make sure everything is a RunbyTime
    seconds = Runby::RunbyTime.new(seconds_variation)
    expected_time = if expected_time.is_a? Runby::Pace
                      expected_time.time
                    else
                      Runby::RunbyTime.new(expected_time)
                    end
    if actual_time.is_a? Runby::Pace
      actual_time = actual_time.time
    elsif actual_time.is_a? String
      actual_time = Runby::RunbyTime.new(actual_time)
    end
    actual_time.almost_equals?(expected_time, seconds)
  end

  failure_message do |actual_time|
    "expected a time between #{format_time_range(expected_time, seconds_variation)}. Got #{actual_time}."
  end

  failure_message_when_negated do |actual_time|
    "expected a time outside #{format_time_range(expected_time, seconds_variation)}. Got #{actual_time}."
  end

  description do
    "match a runby time of #{expected_time} varying by no more than #{seconds_variation} seconds"
  end

  def format_time_range(expected_time_s, seconds_variation_s)
    expected_time = Runby::RunbyTime.new(expected_time_s)
    seconds_variation = Runby::RunbyTime.new(seconds_variation_s)
    slow_time = expected_time - seconds_variation
    fast_time = expected_time + seconds_variation
    "#{slow_time}-#{fast_time}"
  end
end

module RSpec
  module Matchers
    module BuiltIn
      # Monkey patch the "Eq" matcher to make failure messages involving
      # Pace and RunbyTime easier to read.
      # Let me know if there is an idiomatic way to do this.
      class Eq < BaseMatcher
        private

        def actual_formatted
          formatted_object = RSpec::Support::ObjectFormatter.format(@actual)
          if [Runby::RunbyTime, Runby::Pace, Runby::Speed].include? @actual.class
            return "#{@actual} ---> #{formatted_object}"
          end
          formatted_object
        end
      end
    end
  end
end
STACK_EDU
I have a question: what does it mean that a 'user assigned' MI is not available for Azure Policy? While creating a policy with the 'deploy' function we do get the option to use a user-assigned MI. [Enter feedback here]

Document Details ⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking. ID: 726adac9-1848-01a6-9586-842c81f820f7 Version Independent ID: 249db438-8c95-8e69-f69e-8e48fcd0b359 Content: Azure Services that support managed identities - Azure AD Content Source: articles/active-directory/managed-identities-azure-resources/services-support-managed-identities.md Service: active-directory Sub-service: msi GitHub Login: @barclayn Microsoft Alias: barclayn

@ArjunDeb420 Thank you for your feedback. We will investigate and update the thread further.

@ArjunDeb420, Thank you for bringing this to our attention. It appears that the above-mentioned URL has not been updated with the recent news that Azure Policy now supports user-assigned managed identities! You can create a user-assigned managed identity and assign it to one or more of your policy assignments, offering easier management of managed identities and control of access across the environment, which is why you get the option to choose either a system-assigned or user-assigned managed identity in the Azure portal, templates, and PowerShell. Reference: https://techcommunity.microsoft.com/t5/azure-governance-and-management/azure-policy-introduces-user-assigned-msi-support-faster-dine/ba-p/2661073 I will work with the content creator to have this resolved. I hope this was helpful. #please-close

Hi Siva, yes please get this updated; if possible, drop me a notification when the update is done. Regards, Arjun
GITHUB_ARCHIVE
/*
 * Copyright (c) 2016-2020 Pedro Falcato
 * This file is part of Onyx, and is released under the terms of the MIT License
 * check LICENSE at the root directory for more information
 */
#ifndef _ONYX_SIGNAL_H
#define _ONYX_SIGNAL_H

#include <signal.h>
#include <stdbool.h>

#include <onyx/list.h>

#ifdef __cplusplus
#include <onyx/scoped_lock.h>
#endif

#define KERNEL_SIGRTMIN 32
#define KERNEL_SIGRTMAX 64

void signotset(sigset_t *set);

struct sigpending
{
    siginfo_t *info;
    int signum;
    struct list_head list_node;

#ifdef __cplusplus
    constexpr sigpending() : info{nullptr}, signum{}, list_node{}
    {
    }

    ~sigpending()
    {
        delete info;
    }
#endif
};

static inline bool signal_is_realtime(int sig)
{
    return sig >= KERNEL_SIGRTMIN;
}

static inline bool signal_is_standard(int sig)
{
    return !signal_is_realtime(sig);
}

#define THREAD_SIGNAL_STOPPING (1 << 0)
#define THREAD_SIGNAL_EXITING  (1 << 1)

struct process;

struct signal_info
{
    /* Signal mask */
    sigset_t sigmask;
    struct spinlock lock;

    /* Pending signal set */
    sigset_t pending_set;
    struct list_head pending_head;
    unsigned short flags;
    unsigned long times_interrupted;
    bool signal_pending;

    /* No need for a lock here since any possible changes
     * to this variable happen in kernel mode, in this exact thread. */
    stack_t altstack;

#ifdef __cplusplus
private:
    bool is_signal_pending_internal() const
    {
        const sigset_t& set = pending_set;
        const sigset_t& blocked_set = sigmask;
        bool is_pending = false;

        for (int i = 0; i < NSIG; i++)
        {
            if (sigismember(&set, i) && !sigismember(&blocked_set, i))
            {
                is_pending = true;
                break;
            }
        }

        return is_pending;
    }

public:
    sigset_t get_mask()
    {
        scoped_lock g{lock};
        return sigmask;
    }

    sigset_t get_pending_set()
    {
        scoped_lock g{lock};
        return pending_set;
    }

    sigset_t get_effective_pending()
    {
        scoped_lock g{lock};
        /* Read the members directly: calling the locking getters here
         * would try to re-acquire `lock` and self-deadlock. */
        auto set = pending_set;
        auto blocked_set = sigmask;
        sigandset(&set, &set, &blocked_set);
        return set;
    }

    sigset_t __add_blocked(const sigset_t *blocked, bool update_pending = true)
    {
        auto old = sigmask;
        sigorset(&sigmask, &sigmask, blocked);
        sigdelset(&sigmask, SIGKILL);
        sigdelset(&sigmask, SIGSTOP);
        if (update_pending)
            __update_pending();
        return old;
    }

    sigset_t add_blocked(const sigset_t *blocked, bool update_pending = true)
    {
        scoped_lock g{lock};
        return __add_blocked(blocked, update_pending);
    }

    sigset_t set_blocked(const sigset_t *blocked, bool update_pending = true)
    {
        scoped_lock g{lock};
        auto old = sigmask;
        memcpy(&sigmask, blocked, sizeof(sigset_t));
        sigdelset(&sigmask, SIGKILL);
        sigdelset(&sigmask, SIGSTOP);
        if (update_pending)
            __update_pending();
        return old;
    }

    sigset_t unblock(sigset_t& mask, bool update_pending = true)
    {
        scoped_lock g{lock};
        auto old = sigmask;
        signotset(&mask);
        sigandset(&sigmask, &sigmask, &mask);
        if (update_pending)
            __update_pending();
        return old;
    }

    void __update_pending()
    {
        MUST_HOLD_LOCK(&lock);
        signal_pending = flags != 0 || is_signal_pending_internal();
    }

    void update_pending()
    {
        scoped_lock g{lock};
        __update_pending();
    }

    void reroute_signals(process *p);

    bool add_pending(struct sigpending *pend)
    {
        scoped_lock g{lock};

        if (signal_is_standard(pend->signum) && sigismember(&pending_set, pend->signum))
            return false;

        list_add(&pend->list_node, &pending_head);
        sigaddset(&pending_set, pend->signum);

        if (!sigismember(&sigmask, pend->signum))
            signal_pending = true;

        return true;
    }

    bool try_to_route(struct sigpending *pend)
    {
        scoped_lock g{lock};

        if (sigismember(&sigmask, pend->signum))
            return false;

        if (signal_is_standard(pend->signum) && sigismember(&pending_set, pend->signum))
            return false;

        list_add(&pend->list_node, &pending_head);
        sigaddset(&pending_set, pend->signum);
        signal_pending = true;

        return true;
    }

    constexpr signal_info()
        : sigmask{}, lock{}, pending_set{}, pending_head{}, flags{},
          times_interrupted{}, signal_pending{}, altstack{}
    {
        INIT_LIST_HEAD(&pending_head);
        altstack.ss_flags = SS_DISABLE;
    }

    ~signal_info();
#endif
};

#define SIGNAL_GROUP_STOPPED (1 << 0)
#define SIGNAL_GROUP_CONT    (1 << 1)
#define SIGNAL_GROUP_EXIT    (1 << 2)

struct process;
struct thread;

extern "C" bool signal_is_pending(void);
int signal_setup_context(struct sigpending *pend, struct k_sigaction *k_sigaction, struct registers *regs);
extern "C" void handle_signal(struct registers *regs);

#define SIGNAL_FORCE        (1 << 0)
#define SIGNAL_IN_BROADCAST (1 << 1)

int kernel_raise_signal(int sig, struct process *process, unsigned int flags, siginfo_t *info);
int kernel_tkill(int signal, struct thread *thread, unsigned int flags, siginfo_t *info);

void signal_context_init(struct thread *new_thread);
void signal_do_execve(struct process *proc);

static inline bool signal_is_stopping(int sig)
{
    return sig == SIGSTOP || sig == SIGTSTP || sig == SIGTTIN || sig == SIGTTOU;
}

#endif
STACK_EDU
I’ve been once again trying to understand programming in a way that feels grounded. My manager suggested that solidifying first principles could be my ticket, and gave me a challenge: Nand2Tetris. Nand2Tetris is a self-paced project in which the learner creates a simple computer (in a hardware simulator) from small modules — chips. I have embarked upon the first chapter and corresponding tasks. Wow, were they fun! First, I learned about boolean logic, logic gates, and the HDL (Hardware Description Language) I’d be using. Then I learned how to use the hardware simulator.

I went to a Ruby hacknight the other night! There, I spent an hour and a half pair-debugging what turned out to be a painfully simple issue. The issue I was having: I needed to generate a migration: rails generate migration add_product_url_to_wishlists. But when I would type that into the terminal, I would get the following error. rails console produced the same error, while rails server ran successfully. I should mention that my view of the error message was not as it looks above. I have, before now, kept the terminal text large…

In an attempt to learn more, faster, I’m changing the style of these posts a little. I’ll be sharing notes, rather than creating micro tutorials of what I’ve learned each day. Today I created a store application called depot: What happens in the depot? What do I do to prepare (after writing out user stories)? rails new depot. rails generate scaffold Product title:string description:text image_url:string price:decimal.

In two previous blog posts, we’ve written a smart contract using the Solidity programming language and compiled the smart contract to prepare it for deployment. Now we’ll test the code using the Mocha testing framework. In addition to Mocha, we’ll use Ganache and Web3. Ganache will provide us with a set of unlocked accounts we will use in our local testing environment. In this case “unlocked” means that there is no need to use a public or private key to access…

The tiny “Hello World” app has two pages that link to each other. It works! Now to break it so I can see why it works. First I added a typo in the code: Time.now() (a Ruby method on a Time object) became Time.know() (a nonsense method). In the browser, the following appeared:

I have this tiny “Hello world” app. It has one page. Now I will give it another page. Each page will correspond with a separate view. For this new page I will use a new action method (goodbye), which will be in the same controller I’ve been working with. New action method. Same controller. Here we go. I edited goodbye.html.erb so that it will show “Goodbye”. That’s fine, but I’d really like to add links between both pages, too. For links I will use a helper method! link_to() creates a hyperlink to an action.

Today was review day. This is all pertaining to my tiny “Hello World” app. Here’s how it works: the say part of the URL is taken as the controller name, and Rails creates an instance of that controller; the hello part of the URL identifies an action. Rails invokes the method of that name in the controller. The hello method creates a new Time object and saves it in an instance variable.

Today, I learned about adding dynamic data to my “Hello World” app. A Rails app, like any other web app, is associated with a URL. When someone points to the URL, they’re communicating with the application code. I navigated to http://localhost:3000/say/hello and I saw the following:

First thing! Here’s the video of the movement as promised in the blog post before this: I liked to compare functions to phrases of movement.
In dance, we’ll have a phrase of movement, let’s say it’s 1 minute long. And the way we know to begin that phrase is because we hear a musical cue, or see a movement cue, or hear a vocal cue. … I’m just a girl, standing in front of some code, asking it to love her.
OPCFW_CODE
Novel: Beauty and the Beasts, Chapter 1487 – Press Conference (1)

Parker's schedule was actually very packed. Fortunately, though, he was extremely talented at dancing and usually mastered whatever he was taught right away, thus saving a lot of time.

Parker was flooded with all kinds of questions, one after another. Used to life-and-death situations, he was dumbfounded and couldn't think of a response right away.

"May I ask what your relationship is with the movie's female lead, Zhang Yu, in private? The two of you were voted the most heartwarming couple for your interaction in 'Princess-Knight'. Are you a couple in real life?"

This question was a very crafty one. First, Parker had never divulged any of his private information, nor had he admitted that he had a girlfriend. But now that the reporter had asked him this, people who didn't know better might believe that he had a girlfriend. That way, they could fabricate more news. However, as this question had to do with Bai Qingqing, it immediately stood out to Parker.

"Do you think Zhang Yu is pretty? Who is prettier—she or your girlfriend?"

He looked at that reporter and said seriously, without any hesitation, "Of course my fiancee is prettier."

The reporters froze in surprise. After that, it was as if a drop of water had fallen into a pot of hot oil and caused an explosion. The questions fired at him relentlessly.

A reporter asked, "Why are you so smitten with your girlfriend?"

Speaking of that incident, Parker became sorrowful. He wiped his eyes, tears nearly rolling down his face. In a solemn manner, he began speaking to the reporters about the matter. Alas, at the mention of Bai Qingqing, Parker couldn't stop.

(One thing to add here: when Parker had just returned from overseas, he had thrown a tantrum at his company over that incident for a good few days.)

…Several tens of questions omitted.

Question: "Why? You are so handsome, why would any woman be reluctant to marry you?"

Parker quite agreed with the statement 'why would any woman be reluctant to marry you'. He edged closer to the other party's mic and said, "Although I'm the most handsome one, she has other options. There's nothing I can do about it. But one day she'll marry me. She'll marry only me!"

Beside him, Xu Qiyang looked like he had been struck by lightning, and his countenance changed dramatically. He nearly fainted from anger and tried his best to stop the reporters from asking questions, and Parker from answering.

Bai Qingqing raised an index finger and shook it mysteriously as she said, "The heavens' secrets cannot be divulged."
OPCFW_CODE
Promoting, pushing, proselytizing Python. Some possible topics: - Finding audiences to promote Python to. - Choosing or assembling "standard" intro-to-Python talks. - Eye-catching, crowd-pleasing Python tricks. - Selling to management. If you are interested in continuing the discussion or helping to develop promotional materials (articles, glossy brochures, podcasts, etc.) for Python, please sign up on the mailing list:

Notes from the session

The PSF has a strong desire to promote the language.

Apache Software Foundation: - Has a formal publicity contact, which press types love. - Issues formal press releases (cost about $500).

Getting into the educational world: - Lack of campus user groups. - Educational advantage - good foundation for CS understanding. - Need Python in the curriculum: - Encourage your own alma mater to do so. - CS curriculum committees may be intractable. - CS decisions driven by perception of the job market. - May be easier outside the CS dept., as a pragmatic tool for scientists/engineers. - Going straight to students - bottom-up - especially for classes that don't constrain language choice for projects. - Need more material for educators, with separate designated teacher and student materials.

Many current Python users are invisible - we don't see them in the community.

Create a list of speakers that could present on Python.

Need a template/example of a "standard" evangelical talk: - WesleyChun has created a talk, will make his slides available. - The PSF once funded Software Carpentry slides - http://www.third-bit.com/swc/ - Screencasts can popularize "celebrity programmers" - well-known faces. - User groups or the PSF could push screencast creation. - A "Python@Google" video w/ Guido, etc. would be great. - The Arlington school system once created a video about Python - where is it now? - Included awesome quotes suitable for managers.

Form regional scripting-language conferences.

www.python.org still not newbie-friendly - the upcoming restructuring should help: - Track website visitors according to newbie, experienced, manager, etc.? Or according to project need? - The Ruby on Rails site includes a restricted Ruby interpreter - shall we imitate?

Should the PSF fund productivity studies? - One old "empirical" study on scripting languages in general exists. - Could utilize university researchers, always hungry for projects. - But how to get it to industry's attention?

"Success stories" - several are at http://pythonology.org

Details on solving specific types of problems with Python (try this approach, these modules, case studies specific to that problem).

Software companies make glossy brochures; managers like and expect that...

Testimonials on the Subversion website helped it.

Articles are a good introductory format - get them published in journals (Dr. Dobbs, etc.) - don't neglect trade journals and ACM's special-interest publications.

Please follow these ideas up on the mailing list!
OPCFW_CODE
Upgrade Project to Latest OpenJDK and Gradle Versions

Is it possible to make this project compatible with the latest OpenJDK versions and Gradle? If I upgrade, I am getting this error:

C:\Temp\groovyfx\src\main\groovy\groovyx\javafx\canvas\FillArcOperation.groovy: Could not find class for Transformation Processor groovyx.javafx.beans.FXBindableASTTransformation declared by groovyx.javafx.beans.FXBindable

I previously fixed another issue by changing this in build.gradle:

sourceSets {
    demo {
        compileClasspath += sourceSets.main.output + configurations.compile
        runtimeClasspath += sourceSets.main.output
...

to

sourceSets {
    demo {
        compileClasspath += sourceSets.main.output + configurations.runtimeClasspath
        runtimeClasspath += sourceSets.main.output

I have a similar error using IntelliJ and a more recent version of Gradle than this project, but it appears nobody has answered yet. I changed some lines in build.gradle, but the error is very similar to yours.. :( I would really like to create a project using this project, because it appears to be far easier than coding in Java.

I also created an issue for Groovy, and they answered telling me I should debug the compilation of Groovy, specifically the class org.codehaus.groovy.transform.ASTTransformationCollectorCodeVisitor:239, to try to find the actual classpath. Probably the classpath in build.gradle should be fixed.

I succeeded in migrating the project to OpenJDK 21. Here is a fork: https://github.com/wise-coders/groovyfx Please test and review it. I will make a pull request to the original project.

Thanks! But it still doesn't work for me. I first tried with Java 21 and it did not build, then with Java 17. With Java 17 I have roughly the same error, and with Java 21 I believe it's an incompatible class-file version.

`* What went wrong: Could not open cp_settings generic class cache for settings file '/media/ambastos/Documents/home/ambastos/projects/others/java/new-groovyfx/settings.gradle' (/home/ambastos/.gradle/caches/7.6/scripts/ah25nx35rvt4fnjlg96jdg28f). BUG! exception in phase 'semantic analysis' in source unit 'BuildScript' Unsupported class file major version 65 `

and with Java 17:

`> Task :compileAstGroovy Note: /media/ambastos/Documents/home/ambastos/projects/others/java/new-groovyfx/src/ast/groovy/groovyx/javafx/beans/FXBindableASTTransformation.java uses or overrides a deprecated API. Note: Recompile with -Xlint:deprecation for details. Note: /media/ambastos/Documents/home/ambastos/projects/others/java/new-groovyfx/src/ast/groovy/groovyx/javafx/beans/FXBindableASTTransformation.java uses unchecked or unsafe operations. Note: Recompile with -Xlint:unchecked for details. Task :compileGroovy /media/ambastos/Documents/home/ambastos/projects/others/java/new-groovyfx/src/main/groovy/groovyx/javafx/binding/GroovyClosureProperty.java:152: warning: [removal] AccessController in java.security has been deprecated and marked for removal Closure closureLocalCopy = java.security.AccessController.doPrivileged(new PrivilegedAction() { ^ Note: /media/ambastos/Documents/home/ambastos/projects/others/java/new-groovyfx/src/main/groovy/groovyx/javafx/binding/GroovyClosureProperty.java uses or overrides a deprecated API. Note: Recompile with -Xlint:deprecation for details. Note: Some input files use unchecked or unsafe operations. Note: Recompile with -Xlint:unchecked for details.
1 warning Task :compileTestGroovy startup failed: /media/ambastos/Documents/home/ambastos/projects/others/java/new-groovyfx/src/test/groovy/groovyx/javafx/beans/FXBindableTest.groovy: Could not find class for Transformation Processor groovyx.javafx.beans.FXBindableASTTransformation declared by groovyx.javafx.beans.FXBindable 1 error `

I am just using the gradlew command, not a Gradle installation on my machine.

I don't know how to help in this case; I cannot reproduce this issue. The idea behind my fix was to prepare a class in a separate source set: the compiled class was required to compile the next source set. One thing could help... what Gradle version are you using? I use Gradle 7.6.

I got the project working. I pushed my changes into https://github.com/wise-coders/groovyfx I also created a pull request to the original project, but nobody is answering.
GITHUB_ARCHIVE
As a FileMaker developer, sometimes you inherit projects that were not designed by you, and maybe not designed by a professional developer at all. To me, the most obvious telltale sign that a database was designed unprofessionally is inconsistent navigation. Using a system with inconsistent navigation not only shows a degree of unprofessionalism, but it can also hinder performance, and even worse, could strand users on a layout to the point of them needing to restart FileMaker! In this blog, I will talk about some of the standards I use when designing the navigation in databases, as well as point out some mistakes I've seen in solutions developed by others.

When modifying data via scripts, it is good practice to use layouts that are not user-facing. Doing this will prevent the scripts from being disrupted by script triggers, as well as improve their performance. Personally, I also use these layouts for testing and development when designing new scripts or debugging problems found. You can quickly change or remove the displayed fields of your development layout without spending time worrying about appearance. That being said, you wouldn't want a user stumbling upon one of these layouts, so you must make sure the layout's checkbox is empty in the Manage > Layouts dialog box. Unchecking layouts will prevent users from being able to navigate to them via the toolbar.

I try to keep the number of layouts that are visible in the toolbar very limited, sometimes even none. Doing this, along with other obvious features such as privilege sets, can really enforce the separation between developers and users, which will reduce accidents. However, limiting the number of visible layouts is a preference, not a golden rule. Within the Manage Layouts dialog, you can organize the visible layouts using folders and separators. This is obviously a very different method than hiding most of the layouts and using navigation buttons the way I am about to dive into, but if organized and desired, this method is totally valid.

More often than not, the navigation of my database designs utilizes a button bar as the centerpiece. Once you have chosen the layouts that the user will have access to, you will want to trim that list down even further to decide which layouts you want on the navigation bar. Not all layouts that users are allowed access to need a section on the navigation bar. For example, a layout called "Inspections" based on Inspections could include a portal to store related data, perhaps "Findings". If all the modification capability and data entry for the related data is provided by the portal, then maybe the navigation bar doesn't need a section for "Findings". Limiting the number of sections in the main navigation bar can reduce clutter and allow the user to feel more comfortable navigating around a simple system.

Size & position: To create a professional-looking navigation feature, it is critical that every navigation button bar: - Has the same segments in the same order. - Is the same exact size, style, and color. - Is positioned in the same place on every layout. I recommend always putting the navigation bar in the top left corner, whether the bar will run horizontally or vertically across the screen. This will make sure that, as long as you follow rules 1 and 2, placing the bar in the same place as the others will be easy. Doing this will create seamless transitions when switching layouts: only the active segment will change; the text within the segments will not move.
Building layouts with these consistencies allows for rapid development without compromising professionalism. The script that accompanies this button bar is no different: every button can use the same script, simply mapping a different parameter for each button to a corresponding if statement. This not only avoids clutter in your script workspace but also creates a script that is easy to change or add to. Each segment will have its own parameter (which must match the text exactly when using the "=" operator), but the button bar can simply match the active segment to whichever layout it is placed on; a sketch of this dispatch pattern follows below. Make sure you do not direct users to a layout that doesn't include your master navigation feature. For areas of the database that don't seem worthy of their own layout with a navigation bar, consider card windows and/or portals. Following these rules and principles will create a standard of work that is easy to replicate and easy to modify, while creating a professional, user-friendly interface that will make sense even to a first-time user.
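FileMaker scripting aside, the single-script navigation idea is just parameter dispatch. Here is a purely illustrative sketch in TypeScript terms (the segment names and layout names are hypothetical; in FileMaker the real thing would be one script reading Get(ScriptParameter) and branching to "Go to Layout" steps):

// One handler for every navigation segment; each button passes its own name.
const layoutForSegment: Record<string, string> = {
  Inspections: "Inspections", // hypothetical segment -> layout mapping
  Reports: "Report List",
  Settings: "Settings",
};

// Stand-in for FileMaker's "Go to Layout" script step.
function goToLayout(name: string): void {
  console.log(`Go to Layout: ${name}`);
}

function navigate(segment: string): void {
  const layout = layoutForSegment[segment];
  if (layout === undefined) {
    throw new Error(`No layout mapped for segment "${segment}"`);
  }
  goToLayout(layout);
}

navigate("Inspections"); // each button bar segment calls this with its own parameter

The design point is the same in either language: one script plus a per-button parameter keeps the mapping in a single place, so adding a segment means adding one entry, not one script.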
OPCFW_CODE
Calculating weights for inverse probability weighting for the treatment effect on the untreated/non-treated

I am trying to calculate weights for inverse probability weighting. For the ATE and ATET the process is straightforward. For example, in Stata:

predict ps if e(sample)
gen ate=1/ps if treatment==1
replace ate=1/(1-ps) if treatment==0
gen atet=1 if treatment==1
replace atet=ps/(1-ps) if treatment==0

My question is: how can I calculate the weights for the non-treated (ATENT)?

If this is purely about Stata then it is off topic here, so you need to clarify your statistical question a bit more. @Bel This is not a Stata question, so it would be helpful if you rewrote the question without using Stata code, but using mathematical notation. It would improve the chances of a good answer.

For the ATU, the weights on $y_i$ would be $$ w_i = \begin{cases} \frac{1 - \hat p(x_i)}{\hat p(x_i)} & \text{if}\ d_i=1 \\ 1 & \text{if}\ d_i=0, \end{cases} $$ where $d_i$ is the binary treatment indicator. For the ATT/ATET, the weights are $$ w_i = \begin{cases} 1 & \text{if}\ d_i=1 \\ \frac{\hat p(x_i)}{1-\hat p(x_i)} & \text{if}\ d_i=0 \end{cases} $$ For the ATE, the weights are $$ w_i = \begin{cases} \frac{1}{\hat p(x_i)} & \text{if}\ d_i=1 \\ \frac{1}{1-\hat p(x_i)} & \text{if}\ d_i=0 \end{cases} $$

You can find these formulas derived on pages 67-69 of Micro-Econometrics for Policy, Program and Treatment Effects by Myoung-jae Lee, except that I broke them into two pieces here. Here's how I might do this in Stata, with native commands when possible, and also by hand with a weighted regression of the outcome on a binary treatment dummy:

cls
set more off

webuse cattaneo2, clear

/* (0) Get the phats */
qui probit mbsmoke mmarried c.mage##c.mage fbaby medu
predict double phat, pr

/* (1a) ATE */
teffects ipw (bweight) (mbsmoke mmarried c.mage##c.mage fbaby medu, probit), ate

/* (1b) ATE By Hand */
gen double ate_w = cond(mbsmoke==1, 1/phat, 1/(1-phat))
reg bweight i.mbsmoke [pw=ate_w], vce(robust)

/* (2a) ATT */
teffects ipw (bweight) (mbsmoke mmarried c.mage##c.mage fbaby medu, probit), atet

/* (2b) ATT by Hand */
gen double att_w = cond(mbsmoke==1, 1, phat/(1-phat))
reg bweight i.mbsmoke [pw=att_w], vce(robust)

/* (3) ATU by Hand Only */
gen double atu_w = cond(mbsmoke==1, (1-phat)/phat, 1)
reg bweight i.mbsmoke [pw=atu_w], vce(robust)

This gives the following three effects of maternal smoking on newborn weight:

ATU = -231.8782 grams
ATT = -225.1773 grams
ATE = -230.6886 grams

@Bel Was this helpful? @Bel I would start a separate question on that, where you clarify what multiple treatments mean more precisely. @Bel Also, if you think that I answered your original question, please select my answer using the check mark on the left.

Question: When estimating the ATT, aren't we examining only subjects in the treatment group (d=1)? Why would there be non-zero weight attached to the control group (d=0)? @RobertF The point is to compare treated subjects to the untreated subjects most like the treated subjects, based on covariates. If you put 0 weight on all the untreated, there would be nothing to compare the treated to. The system would be perfectly singular unless you dropped treatment as an explanatory variable. @MattP That sounds like the definition of the ATE - comparing similar subjects in the treated and untreated groups. It looks like the ATT (and ATU) estimate uses the same subjects as the ATE but recalculates the weights. I'm not quite sure what the motivation is.
@RobertF The motivation is that for some group (treated, untreated, or both), you are trying to find matches that look similar in terms of probability of treatment but are opposite in terms of actual treatment assignment. The different types of weights accomplish this. You indeed start with the same people, but the weights ensure that they are treated differently depending on the parameter of interest.
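For readers who want the same constructions outside Stata, a minimal language-agnostic sketch follows (the arrays are hypothetical: phat holds the fitted propensity scores and d the 0/1 treatment indicator; this simply transcribes the three weight formulas above):

// Sketch: IPW weights for ATE, ATT/ATET, and ATU/ATENT from fitted
// propensity scores, where phat[i] is the estimated P(d=1 | x_i).
type Estimand = "ate" | "att" | "atu";

function ipwWeights(phat: number[], d: number[], estimand: Estimand): number[] {
  return phat.map((p, i) => {
    const treated = d[i] === 1;
    switch (estimand) {
      case "ate": return treated ? 1 / p : 1 / (1 - p); // weight both arms to the full population
      case "att": return treated ? 1 : p / (1 - p);     // reweight controls to resemble the treated
      case "atu": return treated ? (1 - p) / p : 1;     // reweight the treated to resemble the untreated
    }
  });
}

The estimate itself is then the weighted difference in mean outcomes, or, equivalently, a weighted regression of the outcome on the treatment dummy, as in the Stata code above.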
STACK_EXCHANGE
Why do I need to reboot Windows 10 when installing a webcam? Since I'm just using USB, why would I need to reboot after installing a Logitech webcam? Does USB not support hot loading/unloading of drivers?

It does. What exactly is being displayed? Can you provide a screenshot of that prompt? You probably don't need a reboot. Did you run the Logitech installer or simply let Windows install the drivers? Installer programmers are lazy (I know.. I was one for 5 years). They also tend to be juniors and don't check the actual conditions required for a reboot.

USB is hot-pluggable once a device's drivers are installed. This is especially true if you are using vendor-supplied software that adds services which run at start to support additional features and run from the system tray. If you do not care about those features you can probably skip the reboot, but they will be present the next time you reboot. Each USB port on your system is likely on a different internal USB 'hub'. The driver part which doesn't need a restart to be functional will probably claim it needs one the first time you connect the device to a different hub, because software will treat each new device location as a new device. The software installer isn't smart. You should just get this out of the way now, and choose "reboot later" for each of the successive prompts. If Windows already has drivers which support the device, then you likely won't be prompted for a reboot then either, but you may not get all the features, or you may just get a bunch of bloat from the manufacturer.

Thanks - I thought even the underlying drivers could be hot-plugged based on USB (there'd be some non-rebooting kernel attachment mechanism, similar to ld on Linux), but thanks!

The way generic drivers work is that the USB device advertises the services that it supports when you attach it. If the operating system has a driver (or, in the case of Linux, a module) which supports the generic device class - for instance Storage for USB drives, or HID - then the generic functionality will work and no reboot is needed. For the webcam, the vendor chose not to use a standard device type; as a result, generic drivers won't work. If you are on Windows, search for and download USBDeview, and you will get an enumerated list of every USB device ever attached to your system.

One further thing: with Windows you are safest assuming nothing. Be amazed when it does something you don't expect, but don't assume that anything you think should happen is the way it will happen. Windows isn't an advanced learning platform. Its responses are coded by multiple vendors and the manufacturer's programmers, based on settings they tell the system to request.
STACK_EXCHANGE
a) Knowledge: Familiarization with virtual reality techniques, technologies, and applications, as well as advanced graphics techniques (tracking and navigation devices, graphics and sound displays, tactile/force feedback interfaces, haptic rendering and texture, force feedback, geometric model management, processing of motion capture signals, spatial data structures). Exposure to virtual reality and advanced graphics programming. Acquaintance with virtual reality systems and their applications in areas such as education and training, arts, science, and entertainment. b) Skills: Acquisition of skills in the use, programming, and development of algorithms/techniques involved in virtual reality applications. Promotion of analytic and programming skills.

Course Content (Syllabus): Historical overview. Principles of VR, related scientific disciplines, VR applications. Tracking devices, navigation devices, gesture interfaces. The human visual system, stereopsis. Graphics displays. The human aural system, sound source position determination cues. Sound displays, generation of 3D sound fields. Tactile/force feedback interfaces. Physical modeling, haptic rendering, haptic texture, force feedback calculation. Autonomous agents, crowds. Geometric model management, Level of Detail (LOD), cell segmentation. Motion capture, processing of motion capture signals. Mesh simplification and mesh subdivision techniques. Spatial data structures: Bounding Volume Hierarchies (BVH), Binary Space Partitioning (BSP) Trees, Octrees, Scene Graphs. Culling techniques. Fast intersection & overlap test methods. Image-based rendering basics.

Course Bibliography (Eudoxus): G. Burdea, P. Coiffet, Virtual Reality Technology, 2nd edition, Wiley, 2003. A. Watt, F. Policarpo, 3D Games: Animation and Advanced Real-Time Rendering, Vol 2, Addison Wesley, 2003. T. Akenine-Möller, E. Haines, Real-Time Rendering, 2nd edition, A. K. Peters, Ltd, 2002 (or a newer edition). Material (e.g., articles) distributed to students.

Additional bibliography for study: W. Sherman, A. Craig, Understanding Virtual Reality, Morgan Kaufmann, 2003. A. Watt, F. Policarpo, 3D Games: Animation and Advanced Real-Time Rendering, Vol 1, Addison Wesley, 2003. D. Luebke, M. Reddy, J. Cohen, A. Varshney, B. Watson, R. Huebner, Level of Detail for 3D Graphics, Morgan Kaufmann, 2003.
OPCFW_CODE
Last week we held our second FINOS Member Meeting and Open Source Strategy Forum of the year, this time in New York. We were back at New World Stages, where we held the conference in 2019, and had high expectations for our return to the theatres. We weren't disappointed. With more than 250 in-person attendees, there was a tremendous energy throughout both events as people reconnected and also met for the first time. Reflecting on the content and conversations, three words do well to represent the days: momentum, interoperability, and community. In this brief recap we'll dig just a little bit deeper into the ways OSSF NYC 2021 brought the community together. It seems like every year at OSSF we talk about the positive momentum driving FINOS projects, members, and community to achieve new successes for open source in financial services. This year was no different as we recognized FINOS's continued growth and record number of members, projects, and contributors. As one individual from a new member firm responded to a question about securing FINOS membership in record time, "How did I do it? Because it's worth it." In the first keynote of OSSF NYC, John Madsen of Goldman Sachs took us back to the early days of FINOS and touched on the Foundation's momentum, citing the breadth of financial services functions the Foundation now covers along with the growing number of significant contributions from major financial services firms. Open source readiness continues to be discussed in our community, and our Open Source Readiness SIG and recently developed Open Source Maturity Model provide valuable tools, insights, and resources that will help keep this momentum going. During his presentation, John also highlighted a number of themes, initiatives, projects, and opportunities that were discussed throughout the day, including engaging with regulators, partnering with the Linux Foundation, developing standards, and the role of data in financial services, including through projects like Legend and Morphir. These varied projects, the opportunities for projects and teams to work together, and our diverse community are invaluable to keeping the momentum going. Many "flavors" of interoperability were discussed and presented in both the Member Meeting and OSSF. FDC3 announced a new (free) training course exploring the vision, key concepts, benefits, and value of workflow-driven design and how the FDC3 standard makes it easy to get started with desktop application interoperability. Symphony highlighted recent contributions to FINOS, including Symphony Bot Developer Kits (BDKs) for Python/Java, and also their commitment to open source through a roadmap that addresses full-stack developers and workflow developers, and works more closely with FDC3. The Legend and Morphir teams were joined by Microsoft to talk about integrating Legend, Bosque, and Morphir to deliver open solutions to common challenges related to understanding and meeting regulatory requirements. We were also treated to an incredible fireside chat between Brad Levy of Symphony Communication Services and Nadine Chakar of State Street covering many topics, including the value of an integrated business model and operations in supporting client needs. As Nadine aptly put it, "We see the benefit of inviting vendors, partners, and clients into the ecosystem." We are grateful to our growing community for the continued support, hard work, idea generation, and overall engagement that makes this all possible and also rewarding.
Throughout the two days we heard from many different voices in our community, and below we highlight just some of those talks. Jo Ann Barefoot of the Alliance for Innovative Regulation and Sultan Meghji of the FDIC (and an open source veteran) engaged in a stimulating (and often entertaining) conversation about bringing together the regulatory, banking, and open source communities. Many of our colleagues and partners from other Linux Foundation projects joined us to present and discuss topics including Hyperledger on blockchain deployment in finance; Open Mainframe on the new generation of mainframers; Linux Foundation Networking on operationalizing open source projects; the Joint Development Foundation (JDF) on establishing and delivering recognized industry standards; and FinOps on managing cloud spend with efficiency and transparency. We were delighted that our speakers also discussed ways that the open source and financial services communities can promote inclusion, including through a talk from Mojaloop on "Digitizing Financial Inclusion" and from GitHub and EY on "Engaging the FSI Community for a Good Cause." Keeping with the theme of inclusion, five standout women in our industry shared their insights, wisdom, and experiences from their impressive careers in finance and technology. Thank you Ali, Jane, Kim, Rita, and Tamara for your valuable and poignant examples, stories, and reflections on achieving success while remaining true to yourselves. Last but certainly not least, we were very pleased to recognize many of our individual and corporate contributors in our Member and Community awards. We are grateful to individuals and companies like these, and many others, for all of their dedication and hard work. Thank you to all of our attendees, speakers, and sponsors for making this a memorable, in-person event. We hope to see you again next year - or even sooner! Interested in joining the FINOS Community? Click the link below to see how to get involved. This Week at FINOS Blog - See what is happening at FINOS each week. FINOS Landscape - See our landscape of FINOS open source and open standard projects. Community Calendar - Scroll through the calendar to find a meeting to join. FINOS Slack Channels - The FINOS Slack provides our Community another public channel to discuss work in FINOS and open source in finance more generally. Project Status Dashboard - See a live snapshot of our community contributors and activity. FINOS Virtual "Meetups" Videos & Slides - See replays of our virtual "meetups" based around the FINOS Community and Projects since we can't all be in the same room right now. FINOS Open Source in Finance Podcasts - Listen and subscribe to the first open source in fintech and banking podcasts for deeper dives on our virtual "meetup" and other topics.
OPCFW_CODE
Module to write styled and output HTML and CSS without using React. We have some cases where we do not have React, only HTML and CSS (sometimes just a div and a class). It would be convenient to maintain and test our components using styled. What can I do so that when I write this in src/MyDiv.js:

const MyDiv = styled.div`
  border: 1px solid red;
  width: 50px;
  height: 50px;
  &:hover {
    border: 1px solid yellow;
  }
`
MyDiv.id = 'foobar';

it generates two files, build/MyDiv.css:

.Feq456 {
  border: 1px solid red;
  width: 50px;
  height: 50px;
}
.Feq456:hover {
  border: 1px solid yellow;
}

and build/MyDiv.html:

<div id="foobar" class="Feq456"></div>

I would like to do that for an entire rollup project; is there any tool or any recommendation for getting there from here?

@kopax I'm not quite sure I follow. This is quite ambiguous. You don't have React in the environment you're describing, yet you do have this output? Sorry, not quite sure this makes sense. If anything, I'd recommend you go down the route of SSR-ing or snapshotting. There are great JAMstack tools out there that could help you, like Gatsby or Next.js' exporting. Not quite sure though that's the answer you're looking for.

@kitten, we want to maintain our CSS and HTML with styled-components, the same way you would prepare a workspace with Stylus and Jade with Gulp to output some bundled source. We need to make the workspace for such an environment, as the output would be multiple HTML files as templates, plus CSS files. We have been looking into StyledComponent.js#L100 and we haven't found a way to render the valid CSS without using stylis and duplicating our code.

@kopax Well, you cannot break out of React and expect to output HTML and CSS using React / styled-components, I suppose. I'd recommend taking a look at:
https://github.com/zeit/next.js#static-html-export
https://www.gatsbyjs.org/
https://github.com/geelen/react-snapshot
etc.
Generally the way to success would be a statically exported React SSR app. Then you could capture styled-components' CSS and export it to a file, and save the HTML that comes from React separately.

@kitten, I think it is bad to write all our code in multiple languages; that increases mistakes. We generally have React, but in many cases we do not have it, for example the loaders, the 404.html in the proxy, etc. In those cases we would in the past use a workspace with Stylus and Jade preprocessing. We would now like to use a workspace with styled, so we can go fully styled. The SSR rendering option seems a good solution, and I have already managed to get the HTML and the generated className from the enzyme API. I have not found a way to get the output CSS; I can do it with stylis. I have tried to access the rendered component in enzyme and that does not get me very far. But it would be nice if you could add such a feature in v4; that would definitely take styled onto a new scene React can't reach.

@kopax probably you have found some other solution by now, but for posterity I'm leaving it here anyway. I had a similar (edge) case where I wanted to export the raw HTML generated by React outside of the React context straight into the DOM (React was served in an iframe, so the parent DOM), but of course styled-components and even the CSS wouldn't work because the dependencies are missing in the parent DOM. So I found a solution using style-it, which attaches the CSS as an inline style; that worked for my use case.
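For anyone landing here later: below is a minimal sketch of the static-export route suggested above. It assumes react, react-dom and styled-components (v2+, which provides ServerStyleSheet) are installed as build-time-only dependencies, and it avoids JSX so it runs under plain Node; note the emitted class name will be a generated styled-components hash rather than the literal Feq456, and the paths simply mirror the example.

const fs = require('fs');
const React = require('react');
const { renderToStaticMarkup } = require('react-dom/server');
const { default: styled, ServerStyleSheet } = require('styled-components');

const MyDiv = styled.div`
  border: 1px solid red;
  width: 50px;
  height: 50px;
  &:hover {
    border: 1px solid yellow;
  }
`;

const sheet = new ServerStyleSheet();
// collectStyles() wraps the element so every styled component rendered
// inside registers its CSS with `sheet` instead of a document <head>.
const html = renderToStaticMarkup(
  sheet.collectStyles(React.createElement(MyDiv, { id: 'foobar' }))
);
// getStyleTags() returns "<style>...</style>"; strip the tags to keep raw CSS.
const css = sheet.getStyleTags().replace(/<\/?style[^>]*>/g, '').trim();

fs.mkdirSync('build', { recursive: true });
fs.writeFileSync('build/MyDiv.html', html + '\n');
fs.writeFileSync('build/MyDiv.css', css + '\n');

Run once per component (or loop over a component manifest) as a build step; a Rollup plugin could call the same two functions from a generate hook.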
GITHUB_ARCHIVE
How to manage Rails log/development.log with Git(Hub) and multiple users I am newish to Rails and GitHub and struggling with managing two coders' development logs. Specifically: How do I merge two developers' development logs? Can I automate their merger? Or can I differentiate them in some Railsy or GitHub Flowy way? The Rails app I am developing on command-line CentOS 6 is being worked on by another developer. We are using a private GitHub repo to help us manage our codebase and are trying to follow The GitHub Flow. This strategy works well for almost every aspect of our project. The only major problem I've run into with this so far is that our development logs are (of course) out of sync. For instance, he branches from master, then I do. Then he merges to master, then I do, but my merge will fail, citing the automatic merge failure on log/development.log. Our logs will be structured like this: log/development.log - mine Shared: (Tens of thousands of lines of log output from the master branch point) Not: (Thousands of lines unique to my branch) log/development.log - his Shared: (Tens of thousands of lines of log output from the master branch point) Not: (Thousands of lines unique to his branch) So, I find going through this manually, even with diff tools like git mergetool, impractical because of the volumes involved. (Am I simply too much of a vim novice to be able to put this to good use? Do others find this trivial?) Is there a git or Rails development strategy we can employ to keep these files without them clashing? (ex: some tinkering with Rails configuration to designate a 'Development1' environment vs. a 'Development2' environment)? Is there some command-line tool that merges two clashing logs based on the time last updated? I'm imagining a tool that, given two clashing git-tracked documents, can merge them by comparing the branch point/time, using that as the 'shared' base and adding in the remainder based on which was more recently updated (more recent > appended last). A more advanced version would walk back through commit history to append updates based on commit timestamps. should belong to your gitignore... seek inspiration here: https://github.com/github/gitignore/blob/master/Rails.gitignore OK, so I'll take it as the standard that this should be part of the .gitignore. (Honestly, I should have thought of this: I never reference it...) And thanks for the link. Can you answer: What is the primary purpose of the development log? I guess I assumed it was meant to be a communal dev data change log ... Also, if you want to answer this (instead of commenting) I can accept it. Logs are useful for your own purposes: check requests sent check params sent check correct matching url/controller-action check sql queries check your own stuff (you can log things if you desire) So because it's for your own purposes only, there is no need to pollute your repository with it: add the log folder to your gitignore. There is a recommended gitignore for Rails projects here. BTW, if logs in the console are enough for you, save your disk space and add: config.logger = Logger.new(STDOUT) in development.rb
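One practical note on the .gitignore advice above: if the log file has already been committed, ignoring it is not enough, because Git keeps tracking files that are already in the index. A minimal sketch (the path is the one from the question; the ignore pattern is one common choice):

git rm --cached log/development.log
echo "/log/*.log" >> .gitignore
git commit -m "Stop tracking log/development.log"

After this, each developer keeps a purely local development.log that never enters the repository, and the merge conflicts disappear.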
STACK_EXCHANGE
Twitter posts 10-22-2010, 06:22 PM Just wondering, but am I the only one who checks the homepage like every 2 hours for new tweets? 10-22-2010, 07:32 PM I have a twitter account that I'm using frequently now (@danlimao btw) but it makes no sense to me to follow The Offspring, since they rarely post anything there, so I'd rather check their updates on their homepage 10-23-2010, 04:57 AM No one likes Twitter, so just stop it. 10-23-2010, 05:21 AM Even Twitter is not cool anymore. 10-23-2010, 05:25 AM The only good thing with Twitter is that Coco and his team could do the "Twitter Tracker" bit. And the fun thing in that was that Twitter is just awful. 10-23-2010, 09:22 AM Yeah, following the Dalai Lama isn't the most fascinating thing about Twitter. Nor Alyson Hannigan. Why don't you recommend us some names? 10-23-2010, 09:41 AM I can't imagine why following Alyson Hannigan would be boring, she's so gorgeous anything she could ever say would be relevant. 10-23-2010, 03:15 PM Is it just me or is this completely off topic? :D I almost created an account just to follow Douglas Coupland. But I was afraid my productivity would decrease to zero... Besides, The Offspring post everything they post on Twitter on Facebook so... (Edit: Man! He was in Paris in September and I missed that! :( Definitely getting an account...) Get back on topic 1565! :mad: 10-23-2010, 03:54 PM @xenijardin, @warrenellis, @mattfraction, @arjunbasu, @boingboing are relevant to my interests. But I like nerdery, and comics, and stories. What are you interested in? Not any of those guys. Apparently I'm interested in @noaheverett, @joshfreese, @BBCBreaking, etc. I guess this explains what a shallow and narrow human being I am. 10-23-2010, 11:46 PM Well, at first I was a little shilly-shally too. Accounts that are really sincere, informative and interesting are hard to find. Concerning the "Offspring twitter" I would say better only a few tweets that are informative than loads of twaddle ;) (like Ashton Kutcher, for instance :mad:). This guy having more followers than serious news services is still a mystery to me personally. Generally I check "my twitter" once a day. 10-26-2010, 04:33 AM Some guy takes 50 Cent's twitter musings and translates them into proper English. Fun abounds. 11-09-2010, 05:01 AM Noodles is posting on The Offspring's Twitter. Good to read about him.
OPCFW_CODE
It's been days now trying to fix this installation issue by myself, trying almost every possible repair I've come across. I hope there's someone around who can help me out; I am inexperienced in these kinds of fixes. This long text is an explanation of what I've done to isolate and solve the problem. It may help those among you who hopefully know better. That being said, here come the facts: when installing the driver (from the Device Manager) I get error code 39 about corrupted or missing drivers. So the yellow triangle with the exclamation mark beside Arduino Uno won't disappear and the "Port" option in the Tools menu of the IDE remains grey. My OS is Windows Vista 64-bit and the board is an Arduino UNO. I've tested the cable and board on another Windows 7 computer and the installation worked, so the hardware should be OK. I've tried various combinations of uninstalling drivers, uninstalling the whole program, restarting, installing … My setupapi.app log (see attachment) complains about quite a few things. - Regarding the following critical remarks throughout the log ! sig: Verifying file against specific (valid) catalog failed! (0x800b0109) ! sig: Verifying file against specific (valid) catalog failed! (0x00000057) ! sig: Verifying file against specific Authenticode™ catalog failed! (0x80092003) I checked the validity of the digital signatures of the Arduino catalogs → they're all valid ! inf: Detected INFCACHE inconsistency !!! inf: Error searching published INFs - likely system corruption! I tried a system file check. The abnormal results (see sfcdetails.txt attached) detect a corrupted settings.ini file. Although Windows support says one can "safely ignore it", I pass the information on in case it's relevant. I looked into the setupapi.dev log (also attached), and !!! dvi: Device not started: Device has problem: 0x27: CM_PROB_DRIVER_FAILED_LOAD. seems related to the usbser.sys driver, which is added a couple of lines earlier. I had a look at Code Integrity in Computer Management, and in the operational log I found a series of warnings (ID 3010) that an unsigned kernel module …\usbser.sys was loaded, and that some "oemXX.CAT" catalogs (XX = 44, 45, 48) signed by Arduino and Adafruit couldn't be loaded. An error with Event ID 3004 stated that Windows was unable to verify the image integrity of …\drivers\usbser.sys because the file hash could not be found on the system. From the log it seemed to me that usbser.sys was copied from …\FileRepository\mdmcpq.inf_ec18f765 into the Arduino driver folder. Copying it manually didn't help. The barriers to substituting a newly downloaded usbser.sys into the File Repository were so high that I didn't manage to do it, and I deemed it wise not to mess around with something I don't really understand. An installation with driver signature verification disabled was my last shot before deciding to post here… If you need me to run some other check, or rerun some of the ones I did to confirm any suspicion, I will do it. I'm thankful for any help you can give me. setupapi.app.txt (7.43 KB) setupapi.dev.txt (22.7 KB) sfcdetails.txt (6.23 KB)
OPCFW_CODE
Saturday Night Live took advantage of guest host Tina Fey's and cast member Maya Rudolph's pregnancies last night with some bump 'n' birth comedy. The target of one sketch was the childbirth education class. In it, a group of normal parents (including Tina Fey) sit on mats looking horrified as their sanctimonious bore of a teacher promotes "natural birth" via a low-quality video tape of two icky, weirdo hippies having a kind of wildly over-the-top animalistic home birth. At the end of the video, not just one hippie woman gives birth but two hippie women give birth while 69ing so that they can catch each other's babies. The class walks out when cooking the placenta is mentioned. I really wanted to crack up. I love these performers. And many childbirth instructional videos are ripe for parody: the music, the soft voice-over, the dated hairstyles, a moaning mother, a doofy dad and bad lighting. And the crunchy stuff can get ridiculously heavy-handed sometimes. I totally get it. Hey, I talk about vaginas with a room full of strangers every Monday night. I'm not just willing to laugh at all of it, I do. And there's quite a bit of laughter in the classes, as well. But while I can see that everyone here is being mocked (the gutless, squeamish students, the holier-than-thou teacher, and the orgasmic birth couple), a lot of the humor relies on an assumed disgust with the female body. I don't know who wrote the sketch (I hope not Tina Fey), but Tina Fey and Maya Rudolph came of age back when women still had some pubes. You'd think they might scoff a little at the full-on hairy bush joke. At the end of watching it I just thought, really? We're really just going to laugh about how gross pubes and placentas are? There are so many things that could have been done! What about Kristen Wiig as a wannabe Broadway actor-turned-childbirth educator? She's talking about the uterus and cervix, but uses the class as an opportunity to unleash her "theatrical" talents. Or how about if the childbirth educator has no boundaries and keeps talking about her own life and births? And projecting her own marital problems onto every couple? Or maybe have a depressed middle-aged man teach the class; he does driver's ed on the weekends and picked up these classes to make a little extra cash. Come on people: fish-out-of-water, idiosyncratic characters, something other than ha, ha, those pubes and placentas! I'd just love to see mainstream comedy about birth take a turn away from this crap about *crazy* women and natural birth bitches. Knocked Up was a disappointment in this area, too. It's been a while since Monty Python got it right. But I always have strange reactions to this stuff because I used to write comic screenplays for a living. So while I'd love to watch a send-up of my life's work right now, I can't help cringing at all the missed opportunities. What do you think? Funny? Sexist? Who cares, pubic hair is funny, get over it? (Below is the famous Monty Python sketch from the 70s, a different angle)
OPCFW_CODE
How to Activate Windows 7 with Remove WAT V2.2.5 If you are looking for a way to activate your Windows 7 without paying for a license key, you may have come across a tool called Remove WAT V2.2.5. This tool claims to remove the Windows Activation Technologies (WAT) from your system, allowing you to use Windows 7 as if it were genuine. But what is Remove WAT V2.2.5 and how does it work? Is it safe and legal to use? In this article, we will answer these questions. What is Remove WAT V2.2.5? Remove WAT V2.2.5 is a piece of software created by a group of hackers known as Hazar and Co. in 2010. It is one of many tools that can bypass the Windows activation process by modifying some system files and registry entries. The tool claims to remove the WAT completely from your system, making it appear as if your Windows 7 is activated and genuine. However, this is not true: the WAT is still present in your system, just hidden from detection. How does Remove WAT V2.2.5 work? Remove WAT V2.2.5 works by renaming the slmgr.vbs file, which is responsible for activating Windows, to slmgr.bak on both 32-bit and 64-bit systems. It also modifies some registry values related to the WAT, such as setting the SkipRearm value to 1, which prevents the activation countdown from starting. Additionally, it disables some services and processes that are involved in the activation check, such as sppsvc.exe and sppuinotify.dll. By making these changes, Remove WAT V2.2.5 makes your system think that it is activated, and it passes the Windows Genuine Advantage (WGA) validation. How to use Remove WAT V2.2.5 to activate Windows 7 To use Remove WAT V2.2.5 to activate Windows 7, you need to follow these steps: Download Remove WAT V2.2.5 from a reliable source (be careful of fake or malicious versions). Disable your antivirus software temporarily, as it may interfere with the tool or flag it as a virus. Extract the RemoveWAT_2.2.5.rar file using compression software such as WinRAR or WinZip. Run the RemoveWAT_2.2.5.exe file as an administrator. Click on the Remove WAT button and wait for the process to finish. Restart your computer when prompted. Is Remove WAT V2.2.5 safe and legal to use? The answer to this question depends on your perspective and situation. From a technical point of view, Remove WAT V2.2.5 is not safe to use, as it modifies some critical system files and registry entries that may affect the stability and security of your system. It may also cause compatibility issues with some updates and features that require genuine Windows. Moreover, it may expose your system to malware or viruses that may be hidden in the tool or downloaded from untrusted sources. Therefore, it is recommended to back up your data and create a system restore point before using Remove WAT V2.2.5. From a legal point of view, Remove WAT V2.2.5 is not legal to use, as it circumvents Microsoft's software licensing and activation requirements.
OPCFW_CODE
Expiry dates are events that occur on a certain date; each may have a forewarn of between 0 and 99 days, so that Agenda can warn about expiry dates that have come due and, if the forewarning switch is activated, about all those whose date falls within the period set by the forewarn. Expiry dates are visible when you press the corresponding tab in the scheduler screen (figure 1). Note: remember that the events shown in any tab of the planner correspond to the database of the work area connected at the time (whose nickname appears in the title bar). The window appears empty if no expiry dates have been created in the database. Otherwise, a line appears showing the contents of the fields of each event. Inspecting the contents works as usual in Windows applications. You can scroll horizontally/vertically using the corresponding scroll bars to inspect hidden elements. The column headers are resizable, so you can drag them with your mouse to widen them. Likewise, clicking repeatedly on the title of any column sorts the contents of the selected column in direct/inverse alphabetical order. The context menu that appears when you right-click on the window includes options for Go home; Go to end; Page forward and Back page. These movements can also be made with the keyboard: - [Home] Position on the first line. - [End] Position on the last line - [Page Up] Back one page - [Page Dn] Next page Note that the Memo column indicates the number of characters in the cell (----- if it is empty). The actual content of this field for the line that has focus at the time can be found in the bottom window (which indicates <Empty Memo> if it has no content). At the same time, the left pane of the bottom status line (purple) contains the entire Expiry field, which is useful if the text is too long for the current width of the column. Maintenance is as usual: right-clicking on the window brings up a context menu. Three of its options relate to maintenance: - This item > Modify - This item > Delete - Create new item These options are also available via the keyboard: - [Insert] Create a new expiry date - [Del] Delete the expiry date marked by the cursor at that time - [Alt]+[M] Modify the expiry date indicated by the cursor. The creation and modification options provide a window similar to the one shown in figure 2, which allows you to enter/modify the data. For a new expiry date, the date button initially shows today's date. The date and forewarn can be modified through the corresponding buttons.
OPCFW_CODE
- Organize a Pirate Drink evening - Continue working on the Ghent manifest - Signatures for participation in the municipal ("gemeente") elections - Concrete actions - 30 Nov: Jurgen talk - Berlin pirates video streaming event - Mailing list - Blog on DWM (De Wereld Morgen) Temporary chair = Mike, as Roel is absent Organize a Pirate Party Drink - Do we want to organize one? - Doodle-wise? - Thu, Fri or Sat - Gouden Mandeke - Around next week? - Sandb will do the organizing and Doodle stuff (-> needs to be known by Wednesday) Continue working on the Ghent manifest - Let's look at this every meeting until it is ready - Roel not here; let's look at this next week - Again: this is a Ghent Crew Manifest - Working (or draft) wiki manifest, approved manifest on wiki (i.e., parallel documents) Signatures for participation in the municipal ("gemeente") elections - Can do this up front - What are the rules? -> Glen has the rules, digitally; will post them on the wiki - Let's make sure we have a form, and know whom to ask when the time is right Concrete actions - Proposal: no meetings the week before and after New Year - Learn from successful pirates - Flyering? (-> 1 voter per 10,000 flyers, not much): not our main focus; go digital (Nino makes flyer) - How will we get our point across? - Don't get ahead of ourselves - Ghent has 243,366 inhabitants (01/01/2010) - Make an (open) list of all contactable journalists - Slogans (appealing + content) - How do we get women involved? - Example action: 30 Nov: Jurgen talk - Invite press - Let Kafka and Jurgen know if and which journalists are coming (Nino -> Jurgen / Glenn -> Kafka) - Let's prepare the manifest - Ask everyone to write a paragraph to get content; mail it to the mailing list - Get together on Sunday at the space (0x20), 17:00 - Next Tuesday: swarm on the Ghent manifest Berlin pirates video streaming event - Glenn and Nino will contact some Berliners - Video conferencing with Germany on a Tuesday, 20:30? - Make an archive? - Just asked to do so on the ML - Mike will check with Jurgen - Reposting service: all posted on - Jurgen set up a Nabble archive as the current provider does not provide this service - You can click the "Archief" (archive) links on this page Blog on DWM (De Wereld Morgen) - First article -> Ghent Manifesto - Roel made a blog account Agenda next week - Swarm on Ghent manifest (only) Agenda in two weeks - Presentation by Klaas: "Tegenstemmen" (voting against) Live notes on http://piratepad.net/ppgent
OPCFW_CODE
This blog post is the final of three discussing the Genome Archive (GenArk) assembly hubs. This third post covers the technical infrastructure of the GenArk hubs; the first post was about accessing the data, and the second shared examples of using the data. What are the systems behind GenArk hubs? In essence, GenArk hubs are assembly hubs that have been added as Public Hubs. Anyone can build Track Hubs and Assembly Hubs, and finished hubs can then be submitted for publication as Public Hubs. UCSC has a Public Hub Guidelines page to encourage hub developers to document their data fully before submitting for inclusion; this is mainly to ensure users of Public Hubs know whom to contact about the data and can understand what they are visualizing. To aid independent groups that do not want to build assembly hubs themselves, when the underlying assembly data is already available at NCBI, our engineers have crafted scripts to build these files automatically. GenArk scripts pull data from NCBI and then programmatically construct all of the binary-indexed files needed to visualize them on the UCSC Genome Browser. Additional special features have been included, especially the ability to generate and provide BLAT and PCR dynamic servers through the pre-generation of special index files. Our engineers have also optimized other elements of these GenArk hubs by applying the latest available Track Hub features. But what are the internal methods UCSC uses to populate the GenArk hubs? Here at UCSC, an internal process maintains a local mirror image of the NCBI genome assembly resources. In essence, there is first a transfer of data with an rsync request, rsync://ftp.ncbi.nlm.nih.gov/genomes/all/GC[AF]/, to a matching local hierarchy of directories. These matching directory structures and naming conventions for files enable scripting procedures to automatically find and process the source files into the formats the UCSC Genome Browser recognizes for visualizing data, mainly byte-range-accessible binary-indexed versions of the data. A Perl script, doAssemblyHub.pl, manages all the steps of the procedure (https://genome-source.gi.ucsc.edu/gitlist/kent.git/blob/master/src/hg/utils/automation/doAssemblyHub.pl). So how are gene tracks created for these GenArk hubs? For assemblies with GCF accessions, the script uses the NCBI gene annotations supplied with those files to create gene tracks. A specific gene track type called bigGenePred, https://genome.ucsc.edu/goldenPath/help/bigGenePred.html, allows amino acid displays when zoomed in to the base level. Likewise, a bigGenePred track is made to display Xeno RefGene data, which is computed from a selection of best alignments of RefSeq mRNA sequences from many organisms to the genome, using the BLAT (http://www.kentinformatics.com/) algorithm (blat -noHead -q=rnax -t=dnax -mask=lower target.fa.gz query.fa.gz results.psl). Another bigGenePred track is made using the Augustus gene prediction software (http://bioinf.uni-greifswald.de/augustus/) from the Stanke lab. How are the other GenArk annotation tracks made? GenArk assemblies also have RepeatMasker tracks, which use the data supplied with the NCBI source when available. Otherwise, the track can be computed with a local installation of the RepeatMasker software (https://www.repeatmasker.org/).
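To make the mirroring step above concrete, a single assembly can be pulled with a command of this shape (a sketch: the GCF/000/001/405 triplet path reflects how NCBI splits accession digits into directories, the accession shown is purely illustrative, and the local destination is arbitrary):

rsync -avzP \
  rsync://ftp.ncbi.nlm.nih.gov/genomes/all/GCF/000/001/405/ \
  ./genomes/all/GCF/000/001/405/

Keeping the local hierarchy identical to the remote one is what lets the downstream scripts locate source files by accession alone.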
The Simple Repeats track is computed with the Tandem Repeats Finder software (https://tandem.bu.edu/trf/trf.submit.options.html), and the Window Masker track is computed with the WindowMasker software included in the NCBI C++ toolkit (https://ftp.ncbi.nih.gov/toolbox/ncbi_tools++/CURRENT/). The CpG Islands track is computed with a modification of a program developed by G. Miklem and L. Hillier, and the GC Percent track is computed using the 'kent' command hgGcPercent (http://hgdownload.soe.ucsc.edu/admin/exe/). Examining the doAssemblyHub.pl script (https://genome-source.gi.ucsc.edu/gitlist/kent.git/blob/master/src/hg/utils/automation/doAssemblyHub.pl) will illustrate more details about how individual steps are run (i.e., hgGcPercent -wigOut -doGaps -file=stdout -win=5 -verbose=0 test ../../\$asmId.2bit | gzip -c > \$asmId.wigVarStep.gz). What if I don't find my assembly in the GenArk collection? If you can't find the assembly you want in the GenArk hub collection, but you do already have the GCA/GCF identifier, you can email us at our public mailing list email@example.com to request that we add the assembly to the GenArk collection. This archived mailing list is searchable from links on our contacts page, http://genome.ucsc.edu/contacts.html. Alternatively, if you don't want your request to be public, you can email our private internal mailing list at firstname.lastname@example.org. Also, since this blog post was originally published, we have created a new assembly request page; you can find details in the fourth GenArk blog post. What if my assembly doesn't have a GCA/GCF NCBI accession? If NCBI does not have a GCA/GCF accession for your assembly, then our scripts will not be able to pull the data and generate the GenArk hub. You will need to deposit the assembly at NCBI and notify us once the assembly has become available. You can find directions at NCBI for how to submit new genomes: https://www.ncbi.nlm.nih.gov/assembly/docs/submission/ A future manuscript is also in the works to further detail the background of the GenArk hubs. This was the final blog post in a three-part series about GenArk hubs, authored by Brian Lee. The first post focused on how to discover and access the hubs, while the second blog post provided tutorial examples of using the GenArk hubs, such as the BLAT and PCR tools that are available, or how you can send DNA from any Assembly Hub to External Tools for processing. If after reading this blog post you have any public questions, please email email@example.com. All messages sent to that address are archived on a publicly accessible forum. If your question includes sensitive data, you may send it instead to firstname.lastname@example.org.
OPCFW_CODE
Every year on January 28, we celebrate the international event of Data Privacy Day. The purpose of Data Privacy Day is to raise awareness and promote privacy and data protection best practices. Privacy is a human right. Many people confuse privacy with secrecy and anonymity. While privacy is a human right, anonymity is a choice; anonymity is one of the choices made possible by privacy. I'm very careful about my privacy and I take it very seriously, even though I'm not an anonymous person. I don't have anything to hide, except for my personal data, but I'm still very cautious about privacy. Privacy is like free speech: I respect my right even if I have nothing to say, or hide. As Data Privacy Day is about raising awareness of privacy best practices, I decided to write a note about one of the ways I keep my personal files secure through encryption. I don't have a lot of accounts online, but for those I have, I always enable two-factor authentication, and I keep backup/recovery codes so that if I lose my authentication app I will still have access to my accounts. I also keep a backup of my 2FA authentication database and some other files such as my PGP and SSH keys. All these files need to be stored securely encrypted somewhere outside of my phone and personal computer. I have four options for storing these files. - Encrypting files with GPG/PGP - Encrypting files with AES - Keeping them in an encrypted ZIP archive - Keeping them on an encrypted hard drive I need to be able to decrypt and access my files everywhere, without compromising my security. This will allow me to recover my files even if I lose all my devices and computers. One piece of software I know I have everywhere, and that is easy to access, is GNU Privacy Guard (GPG). GPG is a complete and free implementation of the OpenPGP standard and is widely used. So GPG is a wonderful choice. However, AES (Advanced Encryption Standard) is also a very good and very strong encryption standard. Many websites, such as mine, use HTTPS (Hypertext Transfer Protocol Secure), which typically encrypts traffic with AES and is very secure. So I should consider that as well. One way of using AES for encryption is to use a proprietary program, which I refuse to do. Another way is to use OpenSSL, a software library for applications that secure communications over computer networks against eavesdropping, or that need to identify the party at the other end. OpenSSL is good but can be complicated sometimes. I really don't want to use my computer's terminal that much every time I want to encrypt a file, so I think I'll stay with GPG for now. Another way to use AES is to store my files in an encrypted ZIP archive using a program that uses AES for encrypting archives; I think 7-Zip is one. That is actually great: I can store all my files in one archive and still have them encrypted. However, the problem with that is that I can't have 7-Zip everywhere, and I don't want to decrypt all my backup files/databases when I only need one. So I stay with GPG. The last option I have in mind is to keep my files plain (unencrypted) but on an encrypted hard drive, which is pretty secure and good, but my paranoid brain doesn't let me do it; I still don't know why. So I chose to encrypt my files and store them on an encrypted hard drive. Then I take a backup of that hard drive, store it on another hard drive, and place it somewhere other than my first backup/secure drive.
When I need to add another file to my drives, I encrypt that file with GPG, then mount my first drive and add the file to it; then I again take a backup of that hard drive, encrypt it, and place the backup file on my second drive. This whole process takes about 5 minutes. It's very easy to do and requires very little computer knowledge. There are also plugins and programs with graphical user interfaces, so you can easily do it with a few clicks. I don't want to get deep into the technical part, type out every command, or explain where you should click, as you can easily learn all of this with a simple web search or by just reading the manuals of the programs I mentioned. A simple "how to encrypt files with GPG" search will teach you a lot. I highly encourage you to start taking steps for your privacy. A simple step to start with is to encrypt your files. I can help you if you need assistance. Many people in the cypherpunk community will help you.
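For those who do want to see the commands, the GPG round trip described above is just two lines (a minimal sketch; the filename is a placeholder, and --cipher-algo AES256 simply makes the AES choice explicit):

gpg --symmetric --cipher-algo AES256 backup-codes.txt    # writes backup-codes.txt.gpg
gpg --output backup-codes.txt --decrypt backup-codes.txt.gpg

Symmetric mode asks for a passphrase instead of using a key pair, which is convenient for recovery on machines where your private key is not available.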
OPCFW_CODE
Today I'm releasing the new Keechma site, and I wanted to use this opportunity to share my plans for v1.0.0. Keechma is now slightly older than three months, and in this time I have talked a lot with the people who were trying it out, done a lot of experimentation, and built a number of smaller apps with Keechma. This gave me some ideas on things that need to be improved before a solid v1 release. One of the first things I did with Keechma was extracting EntityDB and Router from the main project. Although this allows usage of these libraries in non-Keechma projects, it created some logistical problems. The biggest problem was that the build system (and design) were created with a single repo in mind. That's why the new site was the first and most important step to v1.0.0. The new site is built with a bunch of tools (Marginalia, Codox, Make, NodeJS), but the final build is performed by the Lektor CMS, which allows me both to build the site from the content generated by the documentation tools and to add custom content (like this news item) in one system. Expect more articles, news and content around Keechma in the future. Future of Keechma When designing and building the first version, I was very careful to avoid adding stuff to Keechma just for the sake of convenience. I believe that adding a convenience layer too early makes it easy to design a bloated system. This resulted in a clean, tight core, but it also resulted in an API that is more verbose than necessary. In the future, I want to add a convenience layer on top of Keechma that will remove a lot of boilerplate code from the typical app. This convenience layer will be completely optional and contained in a separate package. I want to keep the amount of code in the main Keechma project minimal. I will share more news about this project when I start working on it, but please let me know if you have any ideas or feedback. Another area I want to focus on is documentation. Keechma is a new project and the community around it is starting to form. I believe that the best thing I can do to kickstart the growth of the community and Keechma adoption is to create more documentation, more tutorials, more content around Keechma. Right now, writing documentation will have a bigger impact than writing code, and that's something I'm ready to embrace. To be able to write great docs and tutorials, I need your help. I have some ideas about the improvements that can be made, but more feedback is always welcome. How you can help me: - Try Keechma - you will have questions, and by answering these questions I'll be able to see which parts need better docs - Ask questions - you can find me on Clojurians Slack (in the #keechma channel), on Twitter, or you can send me an email. - Show me examples of interfaces that were hard to architect - If you have an example of an interface that is hard to architect, send it to me and I'll try to replicate it with Keechma and write a blog post about it. It doesn't matter if it's implemented in a different framework, or if you found it somewhere in the wild. I want to make Keechma really easy to use, and it all starts with good documentation. Your feedback and questions will allow me to write it. Keechma solves a small subset of the problems we encounter while building apps. There is a lot of stuff that could be solved in a nicer way, and I'll work on it as the need arises. The first project that will be released in the near future (after I write the documentation) is Keechma Forms. It will allow you to model and validate complex forms with ease.
Until the documentation is written, you can check out the tests to see how the API will look. Like EntityDB and Router, the Forms library is in no way coupled to Keechma. It can be used with any Reagent project, and also with any ClojureScript project (with a little effort). I believe that Keechma has sound fundamentals, and in the future I want to make it the easiest-to-use and best-documented framework out there (at least in the ClojureScript world). To do that, I'll need your help, so please let me know if you have any feedback. You can ping me on Twitter or you can send me an email.
OPCFW_CODE
db.spec.patch not successful :-(
eddie at omegaware.com Thu Feb 19 00:47:37 EST 2004
Skiplist is Cyrus' own DB format that is better for certain types of data access (mailbox lists are one of them). Is the /imap/conf/db directory created? Does it have the correct permissions (owned by cyrus, group mail)? You may also need to run the cvt_cyrusdb or cvt_cyrusdb_all programs to convert the databases if you had used different formats for the various databases (i.e., DB4 for the mailboxes instead of skiplist). You need to run these programs as the user cyrus (su - cyrus -c). This is what Simon Matter's startup script does in his RPM package:
echo -n $"preparing databases... "
su - cyrus -c "umask 166 ; /usr/lib/cyrus-imapd/cvt_cyrusdb_all >
> On Wed, 2004-02-18 at 16:44, Ruth Ivimey-Cook wrote:
> I applied the db.spec.patch for FC1 db4 to the FC1 source RPM spec file,
> rebuilt db4, installed all 3 components (rpm -Uhv), reconfigured and rebuilt
> imapd 2.2.3 and installed that on top of my first attempt. I then tried:
> % ctl_mboxlist -d /imap/conf/mailboxes.db
> fatal error: can't read mailboxes file
> In the imapd.log, I get
> Feb 18 22:36:58 gatemaster ctl_mboxlist: DBERROR db4: unable to join the environment
> Feb 18 22:36:58 gatemaster ctl_mboxlist: DBERROR: dbenv->open '/imap/conf/db' failed: Resource temporarily unavailable
> Feb 18 22:36:58 gatemaster ctl_mboxlist: DBERROR: init() on berkeley
> Feb 18 22:36:58 gatemaster ctl_mboxlist: DBERROR: reading /imap/conf/db/skipstamp, assuming the worst: No such file or directory
> Feb 18 22:36:58 gatemaster ctl_mboxlist: skiplist: invalid magic header: /imap/conf/mailboxes.db
> Feb 18 22:36:58 gatemaster ctl_mboxlist: DBERROR: opening /imap/conf/mailboxes.db: cyrusdb error
> What's up? Why is it looking for skiplist stuff when it seems it knows to use
Edward Rudd <eddie at omegaware.com>
Home Page: http://asg.web.cmu.edu/cyrus
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
More information about the Info-cyrus mailing list
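For posterity, a single-database conversion along the lines suggested in this message might look like the following; cvt_cyrusdb takes the old database, its backend, the new database, and its backend, in that order, but the paths follow a typical RPM layout and vary by distribution, so treat them as illustrative:

su - cyrus -c "/usr/lib/cyrus-imapd/cvt_cyrusdb \
    /var/lib/imap/mailboxes.db berkeley \
    /var/lib/imap/mailboxes.db.skiplist skiplist"
mv /var/lib/imap/mailboxes.db.skiplist /var/lib/imap/mailboxes.db

Stop the Cyrus services before converting, and keep a copy of the original file until the new one is verified.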
OPCFW_CODE
What measures do you look at to determine overfitting in linear regression? Which of the following is NOT a valid measure of overfitting? Sum of parameters $\left(w_1+w_2+\ldots+w_n\right)$ Sum of squares of parameters $\left(w_1^2 + w_2^2 + \ldots +w_n^2\right)$ Range of parameters, i.e., difference between maximum and minimum parameters Sum of absolute values of parameters $\left(|w_1| + |w_2| + \ldots + |w_n|\right)$ Can somebody try to explain this to me? Given that the parameters in a model literally could have any value, in what sense could you possibly conceive of any of these as being a measure of anything? Is this question perhaps extracted from a context in which a specific model is under consideration and its parameters have somehow been standardized? linear regression model. I think that whuber is saying that without more context (i.e., assuming only that the $w_i$ are the parameters of a linear regression model fit on any data), we can't hope to interpret the $w_i$ meaningfully. Is this related to linear regression on data in any particular domain? Hint: putting aside for now the issue of whether any of these is a measure of overfitting, consider how these various measures would behave if some parameter values were negative and others were positive. Are you asking this question in the context of linear model regularization? First of all, let me describe the meaning of overfitting in general. Overfitting means your model fits not only the relationship between the dependent variable and the independent variables, but also the random noise. Here is a good example of underfitting, right fitting and overfitting. Fitting such an overfitting model will result in a very low error in predicting your training data (or you can imagine that you are using the model fitted on the data to predict the same data; of course, the more complex the model, the lower the error) but a very high error when you predict NEW data (testing data). Error can be defined as $\sum(\hat{y} - y)^2$, where $\hat{y}$ is the fitted value. In general, I don't think any of the measures you mentioned in your question would help you to prevent or detect overfitting in a linear regression model. For example, suppose you are fitting a linear model between the area of a house (Y, in $m^2$) and the price of the house (X, in thousands of dollars). The model is $Y = \alpha + \beta X + \epsilon,$ where $\epsilon \sim N(0, \sigma^2)$. Then, for example, the sum of the parameters is $\hat{\alpha} + \hat{\beta} + \hat{\sigma}$, if I understand your question correctly. However, if you change the unit of the price of the house from thousands of dollars to millions of dollars, the numeric values of X shrink by a factor of 1000, so your $\hat{\beta}$ will change to $1000\hat{\beta}$. Thus the sum of the parameters changes to $\hat{\alpha} + 1000\hat{\beta} + \hat{\sigma}$, even though the fitted model, and all of its predictions, are exactly the same. So you cannot say that either model is more overfitted than the other, even though the sum of the parameters changes. What I usually use to prevent overfitting is cross-validation. Cross-validation means that you split your data into several subsets. Each subset in turn is used as the testing set while the others are used as the training set to fit a model; you then predict the testing set and calculate its prediction error. Averaging the prediction errors across all the testing sets gives you a cross-validation error. Or, for simple cases, I would use the adjusted $R^2$ from the output of lm in R. Adjusted $R^2$ takes the complexity of your model into account. Complexity will tend to reduce the adjusted $R^2$.
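To spell the unit-change argument out with the standard least-squares formulas (ordinary OLS algebra, nothing specific to this data set): rescaling the predictor as $X' = X/1000$ gives
$$\hat{\beta}' = \frac{\widehat{\operatorname{Cov}}(X',Y)}{\widehat{\operatorname{Var}}(X')} = \frac{\widehat{\operatorname{Cov}}(X,Y)/1000}{\widehat{\operatorname{Var}}(X)/1000^2} = 1000\,\hat{\beta}, \qquad \hat{\alpha}' = \bar{Y} - \hat{\beta}'\bar{X}' = \hat{\alpha},$$
so the fitted values satisfy $\hat{\alpha}' + \hat{\beta}'X' = \hat{\alpha} + \hat{\beta}X$ and every prediction is unchanged, while the sum, the sum of squares, the sum of absolute values, and the range of the parameters all change. None of these quantities can measure overfitting unless the predictors have been standardized first.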
Note that this is a self-study question and should be answered accordingly... @Richard Hardy, what do you mean? Without knowing the field of study, I don't think the four ways of combining the $w_i$'s can be related to overfitting. Are you familiar with the idea of the self-study tag? An answer should help the asker gradually progress with his/her own thinking. See more here. Oh, thanks. I am new to Cross Validated. I'll edit my answer. If you are new then -- warm welcome to CV! It's a very good answer. Thank you for continuing to improve it!
STACK_EXCHANGE
Recently I became a fan of Kanban boards; see my article about board software. GitLab has had boards for a while, and GitHub has them too now. I wanted to try them out, but they turned out to be quite hard to understand at first. After learning Jira, I have a different mental model for such boards. Basically there are two major ways of having a board: - The board can be the fundamental thing. This is how Trello and Todoist work. You define columns, then you add your issues as cards on that board. The cards don't have intrinsic state; rather, their status gets defined by their position on the board. - The board is just a view. Fundamentally the cards correspond to tickets which have certain meta information attached to them. You can create multiple board views from the same underlying pool of tickets. This is how Jira and GitLab issues work. Jira issues have a status, and you map statuses to columns. And in GitLab you specify filters for each column (usually with tags) and then tag the issues such that they appear in the columns. GitHub seems to be neither of those. And I find it really confusing. This is what I could understand about it. When using Agile, one usually has some sort of planning period. You have a "backlog" with a virtually unlimited supply of ideas. For each iteration one chooses a handful and puts them into the "to do" (or "planned") column. From there the developers take issues into "in progress" and eventually drop them into "done". If there is formal testing in place, there may be "in QA" or something like that. Trello supports this workflow by just letting you have a ton of tickets in a column named "backlog". Jira supports this by having the tickets in the "open" state (and displaying them in the "backlog" column). You then mark them as "ready" and they appear in the "planned" column. One can also drag them there, and Jira will make the modification to the underlying data. But with GitHub, it feels strange. You have your regular flat list of tickets. And completely separately you have a board, which for instance looks like this: Then you click on "add cards" in the top right and open a drawer with issues from the issues list. You can select an issue from there and turn it into a card, and put it wherever you want. It will then appear on the board. And then you can move it around the columns of the board. In the ticket view, it is marked as being present on one of the boards. You can also see which column it is in. You see the status changes, but the changes on the board don't necessarily propagate back into the issues. Just like with Jira, one can open a sidebar with information about the issue when one clicks on a card. One can also change metadata there. At this point, GitHub boards feel extremely weird. They aren't a view onto the available tickets, but they also aren't exclusive boards like a Trello board would be. I have the impression that the issue list is supposed to be the backlog, and the boards are then used to plan a sprint. In that case it makes sense: adding cards to the board manually is sprint planning. But I find it odd that tickets have a status of "open" and "closed", and yet they can be in any column of any board. A ticket could be in the "done" column of one board, and in the "in progress" column of another board. I imagine that perhaps two different teams (frontend and backend) work on the same issue and one is already done with their part, while the other isn't. But then wouldn't it make sense to have two smaller tickets for that?
There are configurable actions that can be triggered when a card is moved from one column to another. One can also add actions that create cards when issues are created. This feels highly flexible, and one can likely emulate both the Jira and Trello workflows with it. But it feels overengineered, in the sense that it has so many degrees of freedom that I find it too hard to understand. And when the tool that should manage my work becomes too complicated to use, it isn't helping. If somebody could enlighten me, I'd be delighted. Until then, I'll just continue to use the flat list of issues that GitHub always had.
OPCFW_CODE
SSMS is just the tool; its version doesn't matter here. What matters is the version of the database engine you attach to. Every SQL Server release stamps its data files with an internal version number: 611/612 is SQL Server 2005, 655 is 2008, 661 is 2008 R2, and 706 is 2012. A data file can be attached to an engine of the same or a newer version, but a downgrade path is not supported: a newer-version .mdf can never be attached to an older engine, which is exactly what error 948 reports. For example, "The database cannot be opened because it is version 661. This server supports version 655 and earlier" means a SQL Server 2008 R2 data file was being attached to a SQL Server 2008 instance; "version 661 ... supports version 612 and earlier" means a 2008 R2 file was offered to a 2005 instance; and the quoted SQLAuthority example, "Msg 948, Level 20, State 1: The database 'SQLAuthority' cannot be opened because it is version 852", is the same problem with a much newer file.
A common source of confusion: installing SQL Server Management Studio 2012 does not upgrade the engine. The .\SQLEXPRESS instance on the machine may still be a SQL Server 2005 or 2008 Express install, which is why a database created by Visual Studio 2012 Express for Web fails to attach even though SSMS 2012 is present. Check which instance you are actually connected to with SELECT SERVERPROPERTY('InstanceName');. The fixes are: attach the .mdf to an instance of the same or a newer version, or, if the data must move to an older server, script the database out (for example with the Generate Scripts wizard) and run the script on the old system. The scripting route has disadvantages: if the source database contains lots of data you will get a very large plain-text script file, and you can only export tables and views that way.
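Before attaching anything, it is worth confirming which engine the connection is actually talking to; both of these are standard T-SQL and run in any query window, regardless of the SSMS version:

-- Which engine build is this connection actually using?
SELECT @@VERSION;
-- Instance name and detailed product version of the connected engine:
SELECT SERVERPROPERTY('InstanceName') AS InstanceName,
       SERVERPROPERTY('ProductVersion') AS ProductVersion;

If ProductVersion starts with 9.x you are on 2005, 10.0 is 2008, 10.5 is 2008 R2, and 11.x is 2012, which maps directly onto the file-version numbers in the error messages above.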
OPCFW_CODE
# ATMegaZero Sensors Shield
#
# You can purchase the ATMegaZero Sensors Shield from the ATMegaZero Online Store at:
# https://shop.atmegazero.com/products/atmegazero-sensors-shield-compatible-with-the-raspberry-pi
#
# For full documentation please visit https://atmegazero.com
import time
import board
import busio
import adafruit_bme280
import adafruit_ds1307
import adafruit_mpu6050
import adafruit_sgp40
import adafruit_ads1x15.ads1115 as ADS
from adafruit_ads1x15.analog_in import AnalogIn

i2c = busio.I2C(board.SCL, board.SDA)

# BME280 temperature/humidity/pressure sensor
bme280 = adafruit_bme280.Adafruit_BME280_I2C(i2c)
# change this to match the location's pressure (hPa) at sea level
bme280.sea_level_pressure = 1013.25

# Real-time clock (RTC)
rtc = adafruit_ds1307.DS1307(i2c)

# Accelerometer/gyroscope
mpu = adafruit_mpu6050.MPU6050(i2c, address=0x69)

# Gas sensor
sgp = adafruit_sgp40.SGP40(i2c)

# Create analog-to-digital converter
ads = ADS.ADS1115(i2c)
# Create single-ended input on channel 0 (used as a light sensor)
chan = AnalogIn(ads, ADS.P0)


class Temperature_Type:
    fahrenheit = 0
    celsius = 1

    def __init__(self, type):
        assert type in (self.fahrenheit, self.celsius)
        self.selection = type


class ATMegaZero_Sensors_Shield:
    def __init__(self, *args):
        print("Sensors Shield Initialized")

    def get_date_time(self):
        t = rtc.datetime
        hour = t.tm_hour % 12 or 12  # show 12 (not 0) for noon and midnight
        date_string = "{}/{}/{} - {}:{:02}:{:02}".format(t.tm_mon, t.tm_mday, t.tm_year, hour, t.tm_min, t.tm_sec)
        print("Date/Time: ", date_string)
        return date_string

    def get_current_time_as_tuple(self):
        t = rtc.datetime
        return (t.tm_hour, t.tm_min)

    def get_date_tuple(self):
        t = rtc.datetime
        return (t.tm_mon, t.tm_mday, t.tm_year)

    def get_temperature(self, type=Temperature_Type.fahrenheit):
        if type == Temperature_Type.fahrenheit:
            temperature = bme280.temperature * 9 / 5 + 32
        else:
            temperature = bme280.temperature
        print("Temperature: ", str(temperature))
        return temperature

    def get_barometric_pressure(self):
        barometric_pressure = "%0.1f hPa" % bme280.pressure
        print("Barometric Pressure: ", barometric_pressure)
        return barometric_pressure

    def get_altitude(self):
        altitude = "%0.2f meters" % bme280.altitude
        print("Altitude: ", altitude)
        return altitude

    def get_humidity(self):
        humidity = bme280.relative_humidity
        print("Humidity: ", str(humidity))
        return humidity

    def get_accelerometer(self):
        acceleration = "X:%.2f, Y: %.2f, Z: %.2f m/s^2" % (mpu.acceleration)
        print("Acceleration: ", acceleration)
        return acceleration

    def get_gyroscope(self):
        gyro = "X:%.2f, Y: %.2f, Z: %.2f degrees/s" % (mpu.gyro)
        print("Gyro: ", gyro)
        return gyro

    def get_raw_gass_value(self):
        raw_gas = sgp.raw
        print("Raw Gas: ", raw_gas)
        return raw_gas

    def get_light_sensor_value(self):
        print("{:>5}\t{:>5.3f}".format(chan.value, chan.voltage))
        brightness = chan.value / 1000
        print("brightness: ", brightness)
        return brightness

    def set_date_time(self):
        # year, mon, date, hour, min, sec, wday, yday, isdst
        t = time.struct_time((2021, 4, 7, 19, 46, 0, 2, -1, -1))
        # you must set year, mon, date, hour, min, sec and weekday;
        # yearday is not supported, isdst can be set but we don't do
        # anything with it at this time
        print("Setting time to:", t)  # uncomment for debugging
        rtc.datetime = t
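A minimal usage sketch for the class above; it assumes the file is saved on the board as atmegazero_sensors_shield.py (that filename, and the 5-second interval, are arbitrary choices):

import time
from atmegazero_sensors_shield import ATMegaZero_Sensors_Shield, Temperature_Type

shield = ATMegaZero_Sensors_Shield()
shield.get_date_time()

while True:
    # each getter prints its reading and also returns it
    temp_c = shield.get_temperature(Temperature_Type.celsius)
    humidity = shield.get_humidity()
    brightness = shield.get_light_sensor_value()
    time.sleep(5)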
using System.Collections.Generic;
using System.Linq;

namespace CppSharp.Utils.FSM{
    // Brzozowski's algorithm: reversing the automaton and applying the
    // powerset construction, twice over, yields the minimal DFSM.
    internal class Minimize{
        public static DFSM MinimizeDFSM(DFSM fsm){
            var reversedNDFSM = Reverse(fsm);
            var reversedDFSM = PowersetConstruction(reversedNDFSM);
            var NDFSM = Reverse(reversedDFSM);
            return PowersetConstruction(NDFSM);
        }

        // Flip every transition; initial and final states swap roles.
        private static NDFSM Reverse(DFSM d){
            var delta = new List<Transition>();
            foreach (var transition in d.Delta){
                delta.Add(new Transition(transition.EndState, transition.Symbol, transition.StartState));
            }
            return new NDFSM(d.Q, d.Sigma, delta, d.F, d.Q0);
        }

        public static DFSM PowersetConstruction(NDFSM ndfsm){
            var Q = new List<string>();
            var Sigma = ndfsm.Sigma.ToList();
            var Delta = new List<Transition>();
            var Q0 = new List<string> { string.Join(" ", ndfsm.Q0) };
            var F = new List<string>();

            var processed = new List<string>();
            var queue = new Queue<string>();
            queue.Enqueue(string.Join(",", ndfsm.Q0));

            while (queue.Count > 0){
                var setState = queue.Dequeue();
                processed.Add(setState);
                Q.Add(CleanupState(setState));

                // A set-state is accepting if it contains any accepting NDFSM state.
                var statesInCurrentSetState = setState.Split(',').ToList();
                foreach (var state in statesInCurrentSetState){
                    if (ndfsm.F.Contains(state)){
                        F.Add(CleanupState(setState));
                        break;
                    }
                }

                var symbols = ndfsm.Delta
                    .Where(t => statesInCurrentSetState.Contains(t.StartState))
                    .Select(t => t.Symbol)
                    .Distinct();

                foreach (var symbol in symbols){
                    // Distinct() keeps the set-state name canonical when several
                    // transitions lead to the same end state.
                    var reachableStates = ndfsm.Delta
                        .Where(t => t.Symbol == symbol && statesInCurrentSetState.Contains(t.StartState))
                        .OrderBy(t => t.EndState)
                        .Select(t => t.EndState)
                        .Distinct();

                    var reachableSetState = string.Join(",", reachableStates);
                    Delta.Add(new Transition(CleanupState(setState), symbol, CleanupState(reachableSetState)));

                    // Also guard against enqueueing the same set-state twice.
                    if (!processed.Contains(reachableSetState) && !queue.Contains(reachableSetState)){
                        queue.Enqueue(reachableSetState);
                    }
                }
            }
            return new DFSM(Q, Sigma, Delta, Q0, F);
        }

        private static string CleanupState(string state){
            return state.Replace(",", " ");
        }
    }
}
A couple of days back I got the opportunity to attend "IBM Developer Connect", an event held at KTPO, Bangalore. Working in the IT industry and being a developer myself, this got me pretty excited. I have been a developer for 3 years and now work as a QA engineer, but the designation can't take that coder part away from me. When your code compiles error-free and does what it is supposed to do, it is one of the best feelings in the world. I still develop, but I develop to test what others have developed (this got quite confusing). Anyway, I thought that I would share my overall experience of the event. I am not gonna detail out each and every thing but summarize the main points.

It was a full-day event, starting around 6:30 AM with registration and refreshments and running till 6:30 in the evening. The actual proceedings started from 8:30 AM. We reached there around 8:15 after struggling with traffic for about one and a half hours, which, by the way, we were not expecting at these early hours. Finally we made it to the venue and it was nice to see that so many people turned up for the event. Most of them were still in the queue for registration. They were serving sandwiches and juices at the counter, so first we hit that, and after getting fueled up we made our way inside.

As soon as we entered, we first saw the "Experience Zone", where developers can have hands-on, live demos of IBM technology. Further ahead was the area where all sessions and demos were to take place. Soon the event started with a dance performance by a group of youngsters, to lighten up the mood; even technology people appreciate light entertainment. 🙂 It was followed by the keynote from Bob Lord, Chief Digital Officer, IBM Digital. The image below gives a high-level view of what this event was focused around.

Cognitive – Cognitive computing is the simulation of human thought processes in a computerized model. These platforms encompass machine learning, reasoning, natural language processing, speech and vision, human-computer interaction, dialog and narrative generation, and more. It is not enough for machines to just have IQ; they need EQ as well. That's where IBM Watson comes into the picture. Watson is a question-answering computer system capable of answering questions posed in natural language. The session was followed by a demo of the mobile app "Meeka", which used Watson. Isn't it nice to know that an app can lend a hand in dealing not only with life-saving scenarios but also with life-changing scenarios like a wedding?

Mobile – This is an era of mobile technologies and apps. We all have several apps on our mobiles. What is the best way to create your app? Where should we focus? IBM has a solution for this as well: Swift@IBM. Swift is a programming language developed by Apple. It is the 2nd most loved language among programmers and is also open source. The IBM Swift Sandbox is an interactive website that lets you write, execute, and share Swift code in a server environment. Even non-Mac users can use Swift. It offers:
- modern programming language constructs
- error detection at compile time
- code re-engineering
- a vibrant developer base
- enhanced multi-threading and concurrency support
This one got my attention the most; seems pretty neat, ain't it?

IoT – IoT is an acronym for the Internet of Things. Got to see the live demo of an Intel Gateway connecting to the IBM Watson IoT platform. Intel also announced the Intel IoT Dev Kit 3.5 on the same day, which has C/C++, Java, and JavaScript support, as well as IBM Bluemix IoT Foundation integration support.

Blockchain – This is an operating system for new-generation business.
- Shared replicated ledger
- Consensus – network-verified transactions
- Smart contracts

Blockchain enables a comprehensive view of all IGF operations. It can be used in the areas of trade finance, syndicated loans, and supply chain finance.

Cloud – In the field of cloud computing IBM is not behind. Using Bluemix one can have a test network up and running without hassle. It supports several programming languages and services, as well as integrated DevOps to build, run, deploy, and manage applications on the cloud.

We also got a chance to meet Tanmay Bakshi, a 12-year-old developer. I wonder what I was doing at that age other than being a moron. He presented a demo of his app "AskTanmay".

So the overall experience was great. The event was very well managed and also had very good food, and varieties of it, during lunch. Throughout the article I have added as many links as I could to help explain more. Looking forward to attending the event next year as well. I am gonna close this one with a few more interesting photos of the event and the list of all the speakers.

- Prashant Bhuyan, Co-Founder, CEO & CTO, Alpha Modus
- Sunder Somasundaram, CTO for IoT Platforms and Services, AT&T
- Bob Lord, Chief Digital Officer, IBM Digital
- Mike Rhodin, Sr. Vice President, IBM Watson
- Michael Gilfix, Vice President, Mobile & Process Transformation, IBM Cloud
- Sriram Raghavan, Director, IBM Research
- Shalini Kapoor, Chief Architect, IoT Ecosystem, IBM IoT
- Sandy Carter, GM, Developer Ecosystem and Startups + Social Business Evangelist, IBM Cloud
- Lee Faus, Technical Sales Director at GitHub
- Tanmay Bakshi, Watson Developer
- Pradeep Balachandran, Program Director, DevOps Tools and Eclipse Platform Development, IBM Cloud
- Promodh Ramesh, Senior Offering Manager, Interaction Services - API Connect, IBM Cloud
- Palanivel Kodeswaran, Research Staff Member, Mobile Security, Privacy, Policy Based Systems, IBM Research
- Arun Ramakrishnan, Tech Lead, Service Provisioning, IBM Digital Marketplace, IBM Cloud
- Simon Wheatcroft, Ultra marathoner and Runkeeper user
- Magesh Rajamani, Watson Solutions, IBM Watson
- Purushothaman K Narayanan, Watson Implementations, IBM Watson
- Vikas Raykar, IBM Research
- Manisha Sharma, Offering Manager, Platform & Industry Solution Provider Ecosystem, IBM IoT
- Vidyasagar S Machupalli, Developer Advocate, Cloud and Mobile, IBM Cloud
- Jeffrey Dare, Engineer, Watson IoT, IBM IoT
- Thejaswini Ramachandra, Senior Product Manager, IBM Cloud
- Vanitha Narayanan, Managing Director, IBM India
Create high-level IF ELSE ENDIF with macros in PIC16 assembly

I tried to simulate IF() ... ELIF ... ENDIF in assembly for the PIC16F84, but it doesn't seem to work for more than one usage. I tried to use something like this in two places, but it gives an error that a label is duplicated. Shouldn't the parameter from the macro be replaced in the labels too? (name in true_name:)

_f macro name
        btfsc EQUAL,0
        goto true_name
        goto false_name
true_name:
        endm

_lse macro name
        goto next_name
false_name:
        endm

_ndif macro name
        goto next_name
next_name:
        endm

;; usage example
        _f label1
        ...
        _lse label1
        ...
        _ndif

I kind of solved this problem with MPLAB variables. Here's an example for testing equality between a register and a literal:

_f_equal_literal macro register,literal,name
        movlw literal
        subwf register,0
        btfss STATUS,2       ; bit indicating the result is zero
        goto _false#v(name)
        endm

_lse macro name
        goto _next#v(name)
_false#v(name):
        endm

_ndif macro name
_next#v(name):
        endm

Notice that I didn't use goto _true#v(name) and a _true#v(name): label; you'll just have to figure out whether you need btfss or btfsc. You can have a single _lse and _ndif macro, and have multiple macros for _f statements. GJ's solution doesn't have a next label, so the true branch will fall through into the false branch. You need to define a variable for each if-else-endif construct. It might even be useful if the variable name describes what the if-else-endif is used for. Example:

variable testing_something=123
        _f_equal_literal some_register,some_value,testing_something
        ...
        _lse testing_something
        ...
        _ndif testing_something

I think one can do a bit better. Here are some if-else-endif macros that can be nested five deep. Unfortunately, I was not able to make the definitions of if1, if2, ... as nice as I would like, since the assembler does not accept "#ifndef if#v(lvl)", so the macros as they stand limit the nesting to five levels deep. These symbols count the number of ifs at a given nesting level so unique labels can be attached. A nonsense example is included.

xIf macro L,R,A
 #ifndef lvl
lvl=0
 #endif
lvl=lvl+1
 #ifndef if1
if1=0
if2=0
if3=0
if4=0
if5=0
 #endif
if#v(lvl)=if#v(lvl)+1
        movf R,A
        xorlw L
        bnz _false_#v(lvl)_#v(if#v(lvl))
        endm

xElse macro
        bra _end_#v(lvl)_#v(if#v(lvl))
_false_#v(lvl)_#v(if#v(lvl)):
        endm

xEndIf macro
_end_#v(lvl)_#v(if#v(lvl)):
lvl=lvl-1
        endm

        xIf 123,STATUS,A
        clrf TMR3H,A
        xIf 75,STATUS,A
        clrf TMR3H,A
        xElse
        setf TMR3L,A
        xEndIf
        xElse
        setf TMR3H,A
        xEndIf

Do not use jumps from one macro to another; it is dangerous. There is no need to use unique labels. Two ways to do this under MPLAB:

1) Case with the LOCAL directive:

_f macro name
        LOCAL true_name
        btfsc EQUAL,0
        goto true_name
        goto name
true_name:
        endm

2) Case with $ as the current memory address pointer:

_f macro name
        btfsc EQUAL,0
        goto $+1
        goto name
        endm

I know it's not a very good idea to have jumps all over the place, but I need the construct to have a more high-level understanding of the program. Your solution is only partially helpful: it doesn't have a goto next, so the true branch will fall through into the false branch. I figured out a solution in the answer below.

@titus The problem is not having jumps all over the place. The problem is that macros are expanded, causing multiple labels with the same name to exist at different parts of the program. Therefore, when you jump to the label with that name, you could be jumping to any one of them, not always the one you intended! It's like this code:

someLabel:
        retlw d'1'
someLabel:
        retlw d'2'

When you call someLabel, it's unclear where you're actually jumping.
@Wallacoloo I have to take care that the corresponding statements have the same label.
package test;

import com.nsa.portfoliomanager.priceservice.service.StockPrice;
import com.nsa.portfoliomanager.priceservice.yahoo.PriceServiceYahoo;

public class TestPrice {

    public static void main(String[] args) {
        PriceServiceYahoo ps = new PriceServiceYahoo();

        StockPrice sp = new StockPrice("BXE", "CANADA");
        ps.getInfo(sp);
        System.out.println(sp.getFullName() + " : " + sp.getLastPrice());

        sp = new StockPrice("TER", "CANADA");
        ps.getInfo(sp);
        System.out.println(sp.getFullName() + " : " + sp.getLastPrice());

        // Symbols with a ".TO" suffix are Canadian listings.
        String symbol = "PXT.TO";
        if (symbol.contains(".TO")) {
            sp = new StockPrice(symbol.replace(".TO", ""), "CANADA");
        } else {
            sp = new StockPrice(symbol, "US");
        }
        ps.getInfo(sp);
        System.out.println(sp.getFullName() + " : " + sp.getLastPrice());
    }
}
death to style guidelines
Dmitry A. Soshnikov dmitry.soshnikov at gmail.com
Fri Aug 27 04:15:30 PDT 2010

On 27.08.2010 14:15, Irakli Gozalishvili wrote:
> Please don't be too aggressive in replies, I know it's too ambitious &
> may not lead to anything, but I still want to give it a try and
> suggest to:
> Employ some of the decisions that were made with CoffeeScript &
> Python in order to end all the wars regarding: 2 spaces vs 4 spaces vs
> tabs, where to put braces, whether or not to use optional semicolons. All
> the style guidelines just hurt developers, who have to switch
> mentally every time they work on a project with different style guides.

What exactly are you suggesting to "stop the wars"? Though, I think today there are no such wars in more or less professional JS scripting (usually a combination of Java's style guide with some local changes is used); the majority is in general agreement about how /current/ JS code should be formatted.

The priorities for applying coding conventions and style guides are:

1. A local style guide used in a company (the highest priority; it may shadow an official style guide of a technology if there is a good reason);
2. The official style guide provided by the main official resource of the technology (in this case -- http://ecmascript.org; it is a /recommendation/ for professionally formed code);
3. Local habit (the lowest priority; should be avoided. Used by beginners as a habit from previous technologies, e.g. underscore_case from C vs. camelCase used in JS).

And the only way to stop "some" mystic "wars" is to specify point (2), i.e. the official style guide suggested by the official resource of the technology. Many technologies have such recommendations, e.g. Python's "PEP 8" (http://www.python.org/dev/peps/pep-0008/), Java's official code conventions, etc. Only having this done, and /exactly on the official resource/, may it be a good reference in arguments when choosing a convention style guide. And since ECMAScript has no such official style guide, yeah, it would be good to write one. It is even good from a solid position -- other mainstream technologies have their recommendations for professionally written code. Some of them even suggest a general structure of file system folders, e.g. Erlang for building their OTP applications, or Ruby (on Rails), which uses the "convention over configuration" principle to decrease code syntactically and to have a general, well-known structure.

Currently, as a variant, the style guide of Mozilla may be treated as a reference. So, in arguments for choosing a convention style guide you may refer to their style guide too, though it is mostly for scripting XUL as I see, and there are doubtful underscores. Not bad (based on Java's style guide and, actually, its complete "clone"/port) is Crockford's description; better still would be an official document on http://ecmascript.org (2 or 4 spaces -- that is needed to be decided though).

> Besides if I follow correctly it's being agreed to make backwards
> incompatible syntax changes in new version of ECMAScript :)

No, only very-very needed incompatible changes (a "migration tax"?). There was a thread when I was confused regarding the short notation of "funargs" (which is #) and why "fun" cannot be used -- I thought that "fun" would be a new keyword, but no, it's not worth a new keyword.

> As a side note, I'd like to also mention that CoffeeScript managed to
> reduce verbosity so much, making writing code a so much better experience!
Yeah, as I mentioned, the good ways of removing all the obsolete "syntactic/logical noise/garbage" and "bad parts" of previous versions are either to invent a new language (that is what CoffeeScript does regarding JS), or to make a version that is incompatible with previous code (let the Web crash with this new version!, if this new version brings new highly-abstracted, modern and useful things). ES has such a version -- it's ES6 (aka Harmony), but it approves only very-very needed (and well-founded) changes (e.g. the "fun" mentioned above won't be accepted because it will "break the Web", as many scripts use "fun" as a name of simple "funargs"). The current approach with "use strict" (used now in ES5) is also a good way of, not so radical, but gradual removal of obsolete things (`with`, etc.).

P.S.: an explicit semicolon is also "syntactic noise". It's just a consequence of some current well-known JS ASI pitfalls (and minimization issues) that the implicit semicolon was named a bad practice. In general, it's "noise" which (if needed) should be handled by the machine, not the human. If implemented well, ASI is a good idea. So, all that Ruby, Coffee, etc. decrease this noise nicely. Though, I myself (as a habit) always use an explicit semicolon now.

> Irakli Gozalishvili
> Web: http://www.jeditoolkit.com/
> Address: 29 Rue Saint-Georges, 75009 Paris, France
> es-discuss mailing list
> es-discuss at mozilla.org
Cannot assign a class's instance to its interface type

In OSB (Oracle Service Bus), I used the default report action, but disabled the default reporting provider in the console settings, so no one is reading the messages from the default reporting message queue. Now I am trying to use a JMS adapter in OEP (Oracle Event Processing) to get the reporting message from the default reporting queue. I got the message and it is an ObjectMessage (com.bea.wli.reporting.jmsprovider.runtime.ReportMessage). While testing, I can display the message id by calling objectMessage.getJMSMessageID(), but when I call objectMessage.getObject() I get this exception in the OEP server console. The code that throws the exception is not visible to me. Here is the exception I am getting:

java.lang.RuntimeException: java.lang.ClassCastException: cannot assign instance of org.apache.xmlbeans.impl.values.XmlAnyTypeImpl to field com.bea.wli.reporting.jmsprovider.runtime.ReportMessage.metadata of type org.apache.xmlbeans.XmlObject in instance of com.bea.wli.reporting.jmsprovider.runtime.ReportMessage

But the Javadoc shows that XmlAnyTypeImpl implements XmlObject:

package org.apache.xmlbeans.impl.values;

import org.apache.xmlbeans.XmlObject;
import org.apache.xmlbeans.SchemaType;

public class XmlAnyTypeImpl extends XmlComplexContentImpl implements XmlObject

My understanding of this exception is: I have an interface A, and a class B that implements A. Now I have a "b" which is an instance of B, and this exception is saying that I cannot assign "b" to a variable whose type is A. Is this correct? If my understanding is correct, how could this happen? Some people on the internet are saying it could be that these are not loaded by the same class loader; I am not quite sure. Is there a way that I can control that? I have already tried to put this getObject call in a separate OSGi bundle to see if that would force it to use one classloader, but I am still getting the same exception.

Are you sure that the XmlObject being implemented by XmlAnyTypeImpl is the same one as org.apache.xmlbeans.XmlObject? If the same name applies to two different classes in different packages, this could occur. With a generic name like "XmlObject" it's not impossible; add the imports to the question too.

It seems they are interfaces with the same names but different packages. I just added the import part; this is from the Javadoc. I downloaded this xmlbeans jar, and I will not be able to change it.

This can happen if the interface and implementation come from different class loaders. This is a subtler version of the "cannot cast X to X" problem.

@JimGarrison Is there any way I could find out how the classes are loaded, and force them to load in the same class loader? Thank you.
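For what it's worth, here is a minimal, self-contained sketch of that "cannot cast X to X" mechanism (the jar path and class name pkg.Foo are hypothetical, purely for illustration; this is not the OSB/OEP setup itself):

import java.net.URL;
import java.net.URLClassLoader;

public class ClassLoaderDemo {
    public static void main(String[] args) throws Exception {
        URL[] path = { new URL("file:./plugin.jar") }; // hypothetical jar containing pkg.Foo
        // Two sibling loaders, each loading pkg.Foo independently
        // (parent = null prevents delegation to the application class loader).
        ClassLoader a = new URLClassLoader(path, null);
        ClassLoader b = new URLClassLoader(path, null);
        Class<?> fooA = a.loadClass("pkg.Foo");
        Class<?> fooB = b.loadClass("pkg.Foo");
        // Same fully qualified name, but two distinct runtime types:
        System.out.println(fooA == fooB); // false
        Object instance = fooA.getDeclaredConstructor().newInstance();
        // Throws ClassCastException: pkg.Foo cannot be cast to pkg.Foo
        Object cast = fooB.cast(instance);
    }
}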
Quake Mode multi monitor setup

I've been toying with quake mode and noticed some odd behaviors when running on multiple monitors, where one is in a vertical orientation. On first summon, quake mode dropped down on my desired "main" monitor. Consecutive summons have it split between two monitors but never on a single monitor. I tried changing the monitor and desktop values of the command without luck. In the attached image you can see that the window dropped down to my leftmost monitor (which is vertical). It appears the drop-down is tracking the location of the mouse relative to left/right of center with respect to the entire resolution of the combined screens (?). If my mouse is left of center, it will render on the leftmost monitor, with the right edge meeting the center of the main screen. If my mouse is right of the center of the main screen, it renders on the right "half" of the resolution, with the left edge meeting the center of the main monitor. The last interesting thing to happen is: if my mouse is on the left edge of my leftmost screen, it renders off screen.

In my case, I have two monitors. On the first I have no Windows taskbar; on the second the Windows taskbar is located at the right monitor edge. When I run quake mode on monitor 2 it looks OK. But when I do quake mode on monitor 1, Windows Terminal has a gap where the Windows taskbar is on the second monitor.

I've been having some issues with quake mode on multiple monitors (love this new feature btw). My setup is a 4k landscape monitor above the laptop and an HD portrait monitor next to that. All 3 monitors have different resolutions and scaling factors. When opening on the laptop screen it works perfectly, full width of the screen. When opening on the 4k screen, it only goes to 3/4 the width of the screen. When opening on the 1080p portrait screen, its width spills over to the 4k screen. Sometimes it gets confused and doesn't know which is the active screen and opens in the previous window; other times it opens in random places on the screen (usually the middle). There is also what feels like screen tearing in the slide-down effect. Hope this helps in getting the bugs ironed out. If you want, I can provide screenshots. Brent

Welp, that's weird. I've been using QM for a while with multi-mon, but never ran into anything like this. Maybe the vertical/horizontal orientation is important here. I'll stick this on 2.0 for now, but if anyone who's actually hitting this could help debug, that'd definitely be appreciated.

Ok, I think I can reproduce my issue now. Like I said before, I have two monitors, both have the taskbar at the bottom, and both are using the regular orientation.
- Open the QM terminal with the hotkey
- Hit winkey+shift+[left or right arrow key] to move it to the other monitor
- The terminal now has the wrong size on the second monitor
It is already wrong now, but if you move it back to the first screen (with either the QM terminal hotkey or winkey+shift+[left or right arrow key]), it is also wrong on the first monitor (it ends up like my screenshot in the comment above).

Notes before I leave for the weekend:
- WM_WINDOWPOSCHANGING is probably not the right place to do this. It gets called way too often; we don't need to try to enterQuakeMode that many times.
- For whatever reason, doing enterQuakeMode there does calculate the right size, but then the window still ends up at the wrong dimensions. Maybe this is getting followed up with a WM_SIZING or something.
- Where is the right place to determine that we've moved? WM_MOVING?
I suppose that might work.

I'm having a similar issue. I have a horizontal screens layout, but they're different resolutions and scaling. My secondary screen is 4k @ 150% scaling and my primary screen is 1440p. When the terminal opens on the secondary screen it doesn't go all the way across.

PS - Also really appreciate this quake console mode. I was doing it manually with AHK, and then it wasn't working properly and I realized you guys baked the feature in using the same hotkey 😅

Actually, is there any way to just turn this off? I want my own implementation back and WT won't let me use this keyboard shortcut for my AHK script anymore. I thought it was cool, but quake mode is a hindrance; I want it gone.
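(Not from the thread, but for anyone hitting that last question: the global summon hotkey is an ordinary action binding in settings.json, so something along these lines should release the key back to other tools. Verify the exact key chord and syntax against your Terminal version's keybinding docs before relying on it.)

{
    "actions": [
        // frees Win+` so AHK (or anything else) can grab it again
        { "command": "unbound", "keys": "win+`" }
    ]
}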
Python Jumpstart by Building 10 Apps Transcripts Chapter: App 5: Real-time weather client Lecture: What you will learn 0:00 What are we gonna learn in this chapter? Well, a ton of amazing things. The primary focus, the thing that we're going to get the most out of with 0:09 this chapter, is working with external packages or libraries. There's a place called PyPI, or the Python Package Index, 0:16 and we're going to use this tool called "pip" to access all the libraries there, and we can just plug those right into our application, 0:23 and there's literally hundreds of thousands of libraries out there that we can use, from 0:29 computer vision to machine learning to working with stuff on the Internet and calling APIs and so on. 0:34 So that's what we're really going to focus on, using these external libraries to make working with this weather data on remote servers 0:41 super easy and straightforward. Python is often referred to as having batteries included. That means it has so much contained within it. 0:49 Rather than writing a bunch of functionality, you can just grab little pieces and put them together. 0:54 A lot of times that means working with something from the standard library. 0:58 But even more than that, this PyPI place with hundreds of thousands of libraries is also right at your fingertips. 1:05 So if you need to build almost anything, there's probably a library that either does that, or it will help you much more easily build that 1:13 thing. When we talk to our weather service, we're gonna create an HTTP client, which is basically a major part of our application, to go 1:20 out there and do a web request. We're gonna use an API endpoint. That API is gonna return a certain type of data called JSON, which is very, 1:28 very popular in APIs. So we're gonna work with JSON in Python. In order to install our external packages correctly 1:35 and isolate them on a per-application or per-project basis, we're gonna work with these things called "virtual environments". 1:41 The actual library that we want to use to call our API, to build the HTTP client, is something called "requests". 1:47 It's one of the most popular packages in the world, in the Python space. I believe it's downloaded about seven million times a month, 1:55 so yeah, no joke. It's very popular. And finally, we're gonna be passing multiple pieces of data from one function to another 2:01 and we'll see that there's this concept of tuples, these data structures that hold more than one thing. 2:07 We'll be able to pass multiple things around using these tuples. However, they're a little bit clumsy, 2:12 so we can use this more improved version that comes from the Python standard library called 2:17 "named tuples". You're going to get a ton out of this chapter, and it's gonna be a lot of fun to see how all these pieces fit together.
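To make that concrete, here is a tiny sketch of the pattern this chapter builds up: requests for the HTTP call, .json() for parsing, and a namedtuple to pass the pieces around. The endpoint URL and JSON field names are hypothetical placeholders, not the course's actual service; it needs `pip install requests`.

from collections import namedtuple
import requests

WeatherReport = namedtuple('WeatherReport', ['location', 'temp', 'condition'])

def get_report(city: str) -> WeatherReport:
    # Hypothetical JSON API endpoint, used for illustration only.
    resp = requests.get(f'https://api.example.com/weather/{city}')
    resp.raise_for_status()          # fail loudly on HTTP errors
    data = resp.json()               # parse the JSON body into a dict
    return WeatherReport(data['location'], data['temp'], data['condition'])

report = get_report('Portland')
print(report.location, report.temp, report.condition)  # named access, not indexes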
2023 Workshop: HPC on Heterogeneous Hardware (H3)

The HPC on Heterogeneous Hardware (H3) Workshop is intended as an in-person event in Hamburg, Germany. It advances the field by providing a platform for pioneering work on algorithmic research, software library design, programming models, and workflow development for increasingly heterogeneous hardware. In the workshop context, such hardware spans from ARM processors featuring long-vector extensions, through GPU-accelerated systems, to architectures deploying special function units, FPGAs, or deep learning processors. The workshop will consist of a well-balanced mix of invited talks, peer-reviewed conference contributions, and a panel bringing together worldwide experts in heterogeneous computing.

The DOE Report on Productive Computational Science in the Era of Extreme Heterogeneity identified 8 areas that would be affected by the inevitable arrival and prominence of heterogeneity: programming environments, O/S, SysOps, productivity metrics/tools, software methodology, I/O, workflows, and modeling. These themes rang particularly true at our own Heterogeneity Panel featured recently at SC21, with hybrid in-person/virtual attendance of about 180 participants. Very few companies, now including only Intel, Samsung, and TSMC, manage to mass-produce CMOS devices at the single-nm scale: a somewhat whimsically named Angstrom era of lithography, adopting the more convenient unit of measurement, the Angstrom, for atomic-scale transistor features. With Dennard scaling long gone and Moore's Law at a crossroads, heterogeneity became the prevailing paradigm for maximally exploiting the efficiency of on-chip transistors at the scale of tens of Angstroms, in what may now be considered a new era of chip design.

Perhaps the most challenging aspect is to limit the workshop's scope to the very few thematic areas that currently dominate the efforts of the community. This year, these include the following topics of interest:
- Heterogeneity in programming approaches, including language solutions and DSL-friendly middleware libraries.
- Heterogeneous workloads that rely on the convergence of scientific modeling, data analytics, and scientific AI/ML data models.
- Heterogeneity in data representation, including hierarchical, randomized, compressive, and mixed-precision methods.

Topics of Interest

A more specific list of topics of interest, to focus the submissions, draw specific speakers, and invite broad participation of attendees:
- Heterogeneous algorithms that scale not just in terms of system size but across diverse kinds of hardware.
- Heterogeneity in data approaches that incorporate mixed-precision storage and compute, including data compression as well as hierarchical and randomized projections.
- Software systems and libraries that support heterogeneous compute hardware and networking.
- Programming models and tools that incorporate heterogeneity of both on-node compute and cross-node networking.
- Paper submission: March 21, 2023 (AOE)
- Notification to authors: April 4, 2023 (AOE)
- Workshop date: May 25, 2023
- Final presentation slides: May 26, 2023 (AOE)
- Camera-ready workshop proceedings: June 22, 2023 (AOE)
- Hartwig Anzt, University of Tennessee, USA
- Bilel Hadri, King Abdullah University of Science and Technology (KAUST), Saudi Arabia
- Hatem Ltaief, King Abdullah University of Science and Technology (KAUST), Saudi Arabia
- Piotr Luszczek, University of Tennessee, USA
- Andrey Alekseenko, KTH Royal Institute of Technology, Sweden
- Qinglei Cao, University of Tennessee, USA
- Pedro Diniz, University of Porto, Portugal
- Alfredo Goldman, São Paulo University, Brazil
- Mehdi Goli, Codeplay, UK
- Jiali Li, University of Tennessee, USA
- Neil Lindquist, University of Tennessee, USA
- Ravi Reddy Manumachu, University College Dublin, Ireland
- Max Melnichenko, University of Tennessee, USA

Format of the Workshop

The H3 Workshop is initially meant as a half-day workshop, to enable the selection of a good set of contributed manuscripts and talks. We would also like to ensure a balanced program and keep the burden on the reviewers reasonable, in order to provide sufficient quality of the paper reviews and informative feedback for the authors. The workshop will be held as an in-person event. A sample schedule of the afternoon workshop, to accommodate in-person attendees in Hamburg, Germany, and potential online attendees across multiple time zones, follows this general outline:
- May 25, 2023
- 14:00 - 14:10 | Introduction (Anzt, Luszczek)
- 14:10 - 14:50 | Keynote talk: Mixed-precision scientific computing with Tensor Cores on NVIDIA GPUs: Exceeding the performance characteristics of single precision while maintaining numerical accuracy by Harun Bayraktar, Director of Engineering, Math & Quantum Computing Libraries, NVIDIA [Abstract]
- 14:50 - 15:00 | Q&A session
- 15:00 - 15:20 | Talk 1: GEMM-Like Convolution for Deep Learning Inference on the Xilinx Versal by Jie Lei
- 15:20 - 15:40 | Talk 2: OpenACC unified programming environment for multi-hybrid acceleration with GPU and FPGA by Taisuke Boku
- 15:40 - 16:00 | Talk 3: Towards Quantum Acceleration of a Classical MCAE Application by Sophia Kolak
- 16:00 - 16:30 | Coffee break
- 16:30 - 16:50 | Talk 4: Observed Memory Bandwidth and Power Usage on Intel FPGA Platforms with oneAPI: A Comparison with GPUs by Chris Siefert
- 16:50 - 17:10 | Talk 5: Exploring the Use of Dataflow Architectures for Graph Neural Network Workloads by Sanjif Shanmugavelu
- 17:10 - 17:30 | Talk 6: An Investigation into the Performance and Portability of SYCL Compiler Implementations by Steven Wright
- 17:30 H3 Workshop concludes

Paper Submission and Publication

Papers should be submitted to the workshop with EasyChair. All papers must be original and not simultaneously submitted to another journal or conference. They will be reviewed and should include an abstract, keywords, and the e-mail address of the corresponding author, and must not exceed 12 pages, including text, tables, figures, and references, at a main font size no smaller than that of the LNCS style. Submission of a paper should be regarded as a commitment that, should the paper be accepted, at least one of the authors will register and attend the workshop to present the work. Accepted papers will be published in a Springer LNCS volume (SCOPUS indexed). The format must be according to the Springer LNCS style.
Initial submissions are in PDF, but the authors of accepted papers will be requested to provide source files. An extra page allotment will be provided to accommodate reviewers' comments. Any inquiries should be directed to the organizers.

- Mixed-precision computing leveraging Tensor Cores on GPUs can exceed the numerical and performance characteristics of IEEE 754 single precision for scientific computing, by Harun Bayraktar
- Recent increases in the computational throughput of GPUs have come from reduced- and mixed-precision matrix multiplication units known as Tensor Cores, primarily targeting applications in artificial intelligence. This has motivated the development of mixed-precision algorithms that leverage these capabilities while preserving the level of accuracy required by applications in science and engineering. The HPL-MxP TOP500 benchmark, based on the iterative refinement method, is a notable example of this. These methods, while often useful, are not universally applicable due to their numerical sensitivity and inability to guarantee convergence. The goal of this work is to address such shortcomings by developing a "drop-in" replacement for single-precision matrix multiplications and tensor contractions that leverages Tensor Cores to meet or exceed the numerical accuracy of IEEE 754-based implementations while delivering significant performance benefits. In addition to explaining the mixed-precision techniques used, we demonstrate the numerical accuracy claims through error analysis and two scientific applications: weather forecasting and quantum computing simulations.
Working on the Zone Management Tool (ZMT) and NFSRODS.

Paid internship with the North Carolina Business Committee for Education's Ready Set App team. I'll be organizing the competition and mentoring teams as they build a fully functional app from scratch.

Researching identifying provocative sentences at NC State University. We created a 7000-sentence dataset and trained 18 machine learning models on our dataset to determine the best method for provocative sentence detection.

Head of IT for NCSSM's Student Government. I'm responsible for the maintenance of the Student Government website, the management of SG social media pages, and other projects as directed by the SG Executive Board.

A fully immersive summer program in which I'll be designing, prototyping, and pitching a new product.

A six-week, virtual research program, where I'll be working alongside a faculty member from the School of Business and assisting them in their research.

Chapter of Hack Club! We build fun and practical applications using code. So far, we've built personal websites, URL shorteners, Wordle, and a neural network from scratch.

Verste is a 501(c)(3) nonprofit organization that aims to make technical education more accessible through simplified research papers.

Leading a team of 13 Web Development Interns to create a website for The Coding Foundation.

CyberUnicorns strives to engage the NCSSM community with cybersecurity through education, capture-the-flag challenges, and possible scholarships.

I'm currently working with Bit Project, a non-profit organization, to make technical education more accessible to all students.

I'm a junior at NCSSM, a public residential high school. I plan on delving deeper into the world of computer science during my time at this school. Aug 2021

Created MaskUp, a web app that features live mask detection, data visualizations, and Covid-19 symptom tracking.

Summer 2021 intern on the University of North Carolina Renaissance Computing Institute's Helping to End Addiction Long-term (HEAL) team.

My first internship! Working with Oppti has helped both my interpersonal and communication skills. I've learned both marketing and business skills during this internship. I used to be terrified of speaking up in meetings, and this internship has helped me work better with others and be more outgoing!

My first job, during my sophomore year, was at Chick-Fil-A. I made sandwiches, nuggets, fries, cookies, and brownies. Additionally, I trained new team members on Chick-Fil-A's policies.

During the fall of my sophomore year, I started the Panther Creek Aviation Club, with a mission of guiding fellow students into a career in aviation.

I started my freshman year of high school at Panther Creek. Some classes I took include Python Programming I and Microsoft Excel. Aug 2019

© 2023 Ganning Xu
How do you parse context-sensitive C code?

One issue I ran into was that C must be context-sensitive and cannot be parsed with one token of lookahead. For example:

int main1;
int main() {}

That's the simplest example I can think of in which both a function definition and a variable declaration start with the same token type. You'd have to look all the way ahead to the left paren or semicolon to determine what to parse. My question is, how is this accomplished? Does the lexical analyzer have some tricks up its sleeve to do the lookahead and emit an invisible token distinguishing the two? Do modern parsers have many tokens of lookahead?

"Context sensitive" has nothing to do with how much lookahead you need. "Context sensitive" means that you have to examine other parts of the input to figure out what your code means. So "x * y;" could mean "multiply x by y" or it could mean "declare variable y with type x*" -- you cannot figure that out without knowing whether x is a value or a type -- this is the "context" in "context-sensitive". Lookahead is something else.

Fair enough. I suppose I used the term context-sensitive incorrectly. I know that GCC uses a recursive descent parser and I'm just wondering how they deal with this ambiguity.

If you are interested in compilers, the field has a very rich set of underlying theories and I recommend getting a book. "Compilers: Principles, Techniques, and Tools" by Aho, Sethi, and Ullman is a good choice, even if you never work on a compiler.

You should read up on LR or shift-reduce parsers. They assemble the parse tree bottom-up. In the case of the main function it goes like this:
- shift int onto the stack as a TYPE terminal token
- shift main onto the stack as an IDENTIFIER terminal token
- shift ( onto the stack
- shift ) onto the stack
- remove the ( and ) and replace them with an ARGLIST non-terminal token
- shift { onto the stack
- shift } onto the stack
- remove those and replace them with a STMT_BLOCK non-terminal token
- remove the TYPE, IDENTIFIER, ARGLIST, and STMT_BLOCK tokens, and replace them with a FUNCTION_DEF token.

Of course, every time it does a replacement, it builds a new fragment of parse tree and attaches it to the new token. (I made up those token names.) It works under the control of a finite state machine that recognizes the pattern of tokens on the stack and, together with the next (single) token of input, decides whether to shift the next token in, or apply one of the grammar rules to reduce a group of tokens on the stack into a single one. The FSM is built by the parser generator from a list of the grammar rules. It is called LR because it reads input tokens from the left, but applies grammar rules from the right. It is distinct from LL, or recursive descent, which applies grammar rules from the left. Pascal is an LL(1) language. C is not LL(1), so it requires an LR(1) parser. This allows C, for example, to embed almost anything in nested parentheses without confusing the parser. I hope that helps you see what is going on.

You have an interesting point. However, I've heard that GCC switched over to a recursive descent parser at some point. Clearly they MUST run into this problem, right?

@Scott: That's what Wikipedia says. If so, I would think it's pretty tricky, as in parsing (((*((int *f())[])((some_expression))))) or some such. When you dive into the parens you don't know if you're parsing a type, or an array of type, or an expression (and whether it's an l-value or r-value), so you have to somehow postpone that decision.

There are a number of errors in this post.
1: C is not in LR(1). LR(1) is strictly context-insensitive, and C is context-sensitive. However, you can hack together a C parser by adding a symbol table to a simpler (LALR or LL) parser, postponing certain decisions, and hand-writing your tokenizer.

2: Almost nobody uses LR(1) parsers; it is much more common to see LALR(1). LR(1) parsers are harder to generate.

3: LL is not the same thing as recursive descent. LL is a class of languages; recursive descent is one possible way to parse LL languages.

4: The machine used by a shift-reduce parser is called a "pushdown automaton", which is not an FSM. A finite state machine can only parse regular languages, and a pushdown automaton has a potentially unbounded amount of state.

5: Just because it's not LL(1) doesn't mean LR(1) is good enough. Some languages are LL(2) or LL(k) or even more complicated.

@Dietrich: Just checking the dragon book, it looks like it's not completely incorrect to call it an FSM, because a PDA is an FSM that can read its stack as well as the input. Anyway, thanks for your clarifications. (For me, it's been a while.)
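A tiny illustration of the context sensitivity discussed at the top of this thread: the very same token sequence `x * y;` parses completely differently depending on an earlier declaration, which is exactly what a symbol table fed back into the parser (or lexer) has to resolve.

/* Reading 1: x names a type, so "x * y;" declares y as a pointer to x. */
void f(void) {
    typedef int x;
    x * y;          /* declaration of y */
}

/* Reading 2: x names a variable, so the same token sequence is a
 * multiplication expression whose result is discarded. */
void g(void) {
    int x = 6, y = 7;
    x * y;          /* expression statement */
}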
Arithmetic overflow error converting expression to data type

Hi experts, as per my understanding a NUMERIC(18, 10) column would take 18 decimal digits to the left of the decimal point and 10 to the right. There's one field that needs to either be blank or null. Use of the CONVERT function might be unnecessary.

DECLARE @acc_no NVARCHAR(MAX) = N''
DECLARE @symbol NVARCHAR(MAX) = N''
DECLARE @loop int = 0
DECLARE @loop2 int = 0
SELECT @symbol += N'' + acc_no + ','
FROM sav_transaction
GROUP

If you store the result into a money type, then it is going straight from a money to a money. If you need to return a numeric value, you can cast it to decimal(38,0):

SELECT CASE
    WHEN ((BarCode IS NOT NULL) AND (ExternelBarCode IS NULL)) THEN CAST(BarCode AS decimal(38,0))
    WHEN ((BarCode
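As a concrete aside (not from the thread): precision works the other way around from the asker's reading. NUMERIC(18, 10) stores 18 significant digits in total, of which 10 sit to the right of the decimal point, leaving only 8 to the left. A minimal repro of this class of overflow, for illustration:

DECLARE @n NUMERIC(18, 10);
SET @n = 12345678.9;     -- fits: 8 integer digits
SET @n = 123456789.0;    -- fails: 9 integer digits -> arithmetic overflow
-- Widening the precision (or shrinking the scale) fixes it:
DECLARE @ok NUMERIC(19, 10) = 123456789.0;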
With the release of PostgresML 2.0, this documentation has been deprecated. New installation instructions are available.

A PostgresML deployment consists of two different runtimes. The foundational runtime is a Python extension for Postgres (pgml-extension) that facilitates the machine learning lifecycle inside the database. Additionally, we provide a dashboard (pgml-dashboard) that can connect to your Postgres server and provide additional management functionality. It will also provide visibility into the models you build and the data they use.

Install PostgreSQL with PL/Python

PostgresML leverages Python libraries for their machine learning capabilities. You'll need to make sure the PostgreSQL installation has PL/Python built in.

We recommend you use Postgres.app because it comes with PL/Python. Otherwise, you'll need to install PL/Python manually. Once you have Postgres.app running, you'll need to install the Python framework. Mac OS has multiple distributions of Python, namely one from Brew and one from the Python community (Python.org); Postgres.app and PL/Python depend on the community one. The following versions of Python and Postgres.app are compatible:

| PostgreSQL version | Python version | Download link |
| --- | --- | --- |
| 14 | 3.9 | Python 3.9 64-bit |
| 13 | 3.8 | Python 3.8 64-bit |

All Python.org installers for Mac OS are available here. You can also get more details about this in the Postgres.app documentation.

Each Ubuntu/Debian distribution comes with its own version of PostgreSQL; the simplest way is to install it from Aptitude:

$ sudo apt-get install -y postgresql-plpython3-12 python3 python3-pip postgresql-12

EnterpriseDB provides Windows builds of PostgreSQL available for download.

Install the extension

To use our Python package inside PostgreSQL, we need to install it into the global Python package space. Depending on which version of Python you installed in the previous step, use the corresponding pip executable. Use the --database-url option to point to your PostgreSQL server.

sudo pip3 install pgml-extension
python3 -m pgml_extension --database-url=postgres://user_name:password@localhost:5432/database_name

If everything works, you should be able to run this successfully:

psql -c 'SELECT pgml.version()' postgres://user_name:password@localhost:5432/database_name

Run the dashboard

The PostgresML dashboard is a Django app that can be run against any PostgreSQL installation. There is an included Dockerfile if you wish to run it as a container, or you may want to set up a Python venv to isolate the dependencies. A basic install can be achieved with:
- Clone the repo: git clone https://github.com/postgresml/postgresml && cd postgresml/pgml-dashboard
- Set your database URL: echo PGML_DATABASE_URL=postgres://user_name:password@localhost:5432/database_name > .env
- Install dependencies: pip install -r requirements.txt
- Run the server: python manage.py runserver

Join our Discord and ask us anything! We're friendly and would love to talk about PostgresML.

Try It Out

Try PostgresML using our free managed cloud. It comes with 5 GiB of space and plenty of datasets to get you started.
Configuring Metamask Network

This page includes instructions on how to set up your chainlet network information in your Metamask wallet.

As we saw on the previous page, in order to obtain the chainlet network RPC connection details, you will have to run the following command:

sagacli chainlet apis <chain id>

$ sagacli chainlet apis pacman_1681169675474000-1
Name      Endpoint                                           Status
----      --------                                           ------
ws        pacman-1681169675474000-1.ws.sp1.sagarpc.io        Available
jsonrpc   pacman-1681169675474000-1.jsonrpc.sp1.sagarpc.io   Available
explorer  pacman-1681169675474000-1.sp1.sagaexplorer.io      Available

We can see in the above CLI output the jsonrpc and explorer endpoints that we will need to configure our chainlet network in Metamask.

Note: make sure the status for all of the endpoints is "Available", otherwise you risk running into issues when using them in your Metamask wallet config.

The fastest way to configure your chainlet network settings in your Metamask browser plugin is by visiting your chainlet's dedicated block explorer page and clicking on the Add <chainlet name> link present in both the header and footer menus. You can see how this looks in the image included below:

To configure the network manually instead, we will need to navigate to Metamask -> Settings -> Networks -> Add Network. The settings screen will look something like this:

From here, click on "Add a network manually" and type the following settings into the required fields:
- Network Name: Pacman Chainlet (or you can choose a different name for this network)
- New RPC URL: This will be your chainlet's jsonrpc endpoint from the chainlet output above
- Chain ID: Here you need to input the middle numerical part of the chain id listed in the SagaCLI
- Currency Symbol: This is your chainlet currency symbol. (You can also get that by calling sagacli chainlet get <your chainlet alphanumerical id>)

sagacli chainlet get pacman_1681169675474000-1
ChainId                    Name    StackName  StackVersion  Launcher                                     Mantainers                                     CurrencySymbol  Status
-------                    ----    ---------  ------------  --------                                     ----------                                     --------------  ------
pacman_1681169675474000-1  pacman  sagaevm    1.0           saga1rdssl22ysxyendrkh2exw9zm7hvj8d2ju346g3  [saga1rdssl22ysxyendrkh2exw9zm7hvj8d2ju346g3]  pac             STATUS_ONLINE

- Block Explorer URL

Everything together will look similar to what you see in the following image below:
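Filled in with the values from the example chainlet above, the form would read roughly as follows (the https:// scheme prefix is an assumption; use whatever the explorer's Add link produces):

Network Name:       Pacman Chainlet
New RPC URL:        https://pacman-1681169675474000-1.jsonrpc.sp1.sagarpc.io
Chain ID:           1681169675474000
Currency Symbol:    pac
Block Explorer URL: https://pacman-1681169675474000-1.sp1.sagaexplorer.io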
package com.gelvt.gofdp.facade;

/**
 * Billing service
 * @author: Elvin Zeng
 * @date: 17-7-4.
 */
public class BillingService {
    public static final String BILLING_TYPE_PROCESS_IMG = "billing_type_process_img";

    /**
     * Charge a fee
     * @param userId user id
     * @param billingType billing type
     * @return the actual amount charged (in RMB)
     */
    public Double charge(Integer userId, String billingType) {
        // Run a bunch of business logic here and record this charge.
        Double actualCharge = 0.03; // return a mocked value
        System.out.println("Charged user [" + userId + "] " + actualCharge
                + " yuan for billing item: " + billingType);
        return actualCharge;
    }

    // A bunch of other methods live here as well.
}
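A quick usage sketch for this service (hypothetical caller; the amount is the mocked value hard-coded above):

BillingService billing = new BillingService();
Double charged = billing.charge(42, BillingService.BILLING_TYPE_PROCESS_IMG);
// prints: Charged user [42] 0.03 yuan for billing item: billing_type_process_img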
Learn how to develop for Windows Phone 8

With the recent launch of the Windows Phone 8 devices, a lot of customers are asking about what it takes to develop for the new devices. There are a lot of blog articles and individual sessions throughout Channel 9; however, two that I would recommend are the Windows Phone 8 Training Kit and the Windows Phone 8 Jumpstart videos with Andy Wigley and Rob Tiffany.

Windows Phone 8 Training Kit

Just like the other training kits we've produced, the Windows Phone 8 Training Kit includes PowerPoint decks and hands-on labs that help you learn the most important parts of developing for Windows Phone 8. It's about a 158 MB download, and you will need to be on Windows 8 in order to develop Windows Phone 8 apps. You will also need the Windows Phone 8 SDK. One thing to mention about the Windows Phone 8 SDK is that it's not just for developing apps and games for Windows Phone 8 but also for Windows Phone 7.5 devices. You can download the Windows Phone 8 Training Kit here.

Windows Phone 8 Jumpstart Videos

I have always been a visual learner, so it's great to have some videos to watch to get familiar with the technology. The Windows Phone 8 Jumpstart series is available through Channel 9 and through the Microsoft Virtual Academy. This series has 21 video modules, each approximately 60 minutes in length. Here's a full outline of the modules:
- Mod 01a: Introducing Windows Phone 8 Development Part 1
- Mod 01b: Introducing Windows Phone 8 Development Part 2
- Mod 02: Designing Windows Phone 8 Apps
- Mod 03: Building Windows Phone 8 Apps
- Mod 04: Files and Storage on Windows Phone 8
- Mod 05: Windows Phone 8 Application Lifecycle
- Mod 06: Background Agents
- Mod 07: Tiles and Lock Screen Notifications
- Mod 08: Push Notifications
- Mod 09: Using Phone Resources in Windows Phone 8
- Mod 10: App to App Communication in Windows Phone 8
- Mod 11: Network Communication in Windows Phone 8
- Mod 12: Proximity Sensors and Bluetooth in Windows Phone 8
- Mod 13: Speech Input in Windows Phone 8
- Mod 14: Maps and Location in Windows Phone 8
- Mod 15: Wallet Support
- Mod 16: In-App Purchasing
- Mod 17: The Windows Phone Store
- Mod 18: Enterprise App Architecture
- Mod 19: Windows Phone 8 and Windows 8 Cross Platform Development
- Mod 20: Mobile Web

If you follow the links from the outline above, they will take you directly to the Channel 9 videos. If you want to track your progress and accumulate points as you take part in the learning, I recommend you do it through the Microsoft Virtual Academy. There are a ton of different topics you can learn at the Virtual Academy, and the more you learn, the more points you earn (bragging rights). If you haven't heard of the Virtual Academy, check out the main web site and all the topics you can learn about.
[LayoutReader] Training loss is low but inference performs terribly

Describe Model I am using (UniLM, MiniLM, LayoutLM ...): LayoutReader

Hi @zlwang-cs, I am using LayoutReader to predict layout-only data like this, where the order is from right to left, top to bottom. However, I encountered some problems and hope you can share some insights.

When I tried to train the model, an error occurred at line 671, saying that the two summed tensor dimensions are not aligned; when I remove the self.bias, I can train normally:
https://github.com/microsoft/unilm/blob/cd2eb8ade8b6e475aefa9b769ced2eefc4245a3e/layoutreader/s2s_ft/modeling.py#L669-L673

The training process is normal: the initial loss is around 30 and drops to 0.01. However, when I test on the test set (shuffle rate = 1.0), the results are very poor. A lot of boxes are missing and the ARD is around 20.1.

When I pre-sort the inputs in the test set with rules (simply ordered by x coordinate) and feed them into the network, the network just predicts the exact same reading order as the inputs. What's more, when I tested on the training set, the results were also very poor.

Hi, thanks for your interest in our paper. I'd love to help you fix the issues.

For the first question, I am not sure what the problem is without more detailed information. I would recommend you use stop points or other debugging tools to see what the shapes of the tensors look like at this step.

For the second question, I guess the problem may come from the reading order of your dataset. You can see that the data you use is quite different from the original settings in our paper. The pre-trained weight is based on the left-to-right and top-to-bottom reading order, so such a pre-training setting may be an obstacle in your experiment. Considering this, I don't think the poor performance is surprising. If you would like to continue with this reading order setting, maybe you need to collect enough data to pre-train the model again or resort to other approaches. Another possible way is to turn the image 90 degrees counterclockwise so that it will be similar to the common reading order settings.

Hi @zlwang-cs, thanks for the prompt reply. I'll try to debug this and see if I can locate the problem. Rotating the image 90 degrees counterclockwise seems quite reasonable and I'll try this too. There is one thing that still puzzles me: you mentioned that LayoutReader loads pre-trained weights during training. Is this pre-trained weight based on the word level or the text-line level? It seems that I need text-line-level pre-trained weights here.

Hi @Mountchicken, the pre-trained model is based on the word level. Unfortunately, I cannot help you with text-line-level pre-training.

Hi @zlwang-cs, thanks for the reply. BTW, how do I load the pre-trained weights and fine-tune them on my own dataset? I downloaded layoutreader-base-readingbank.zip from this link and got config.json and pytorch_model.bin after unpacking it.

Please refer to the related documents and I am sure you can find the right answer. And I see you are using run_seq2seq.py, which is for training, but the weight you downloaded is actually for decoding. I guess that is the reason why you are confused.
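In case it helps anyone trying the rotation workaround: if boxes are (x1, y1, x2, y2) in image coordinates (origin top-left, y pointing down), rotating the page 90 degrees counterclockwise corresponds to the coordinate transform below. This is a sketch I am adding here, not code from the LayoutReader repo, and it only helps when the unusual reading order is of the kind that rotation maps onto left-to-right/top-to-bottom.

def rotate_box_ccw(box, page_width):
    """Box position after rotating the page 90 degrees counterclockwise.

    A point (x, y) maps to (y, page_width - x), so the box corners map
    as below while the result stays well-formed (x1' <= x2', y1' <= y2').
    """
    x1, y1, x2, y2 = box
    return (y1, page_width - x2, y2, page_width - x1)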