Document
stringlengths 395
24.5k
| Source
stringclasses 6
values |
|---|---|
The evolution of Java to become a vendor neutral platform, move to a six month release cadence and the platform advancements in last releases are more than welcome. As a community, we witness an interesting transformation of the Java platform — from a pretty steady platform that used to release new features every 2 years (i.e. see Java 7 & Java 8 release dates) into a platform that is moving fast and have to compete with the new players in town that evolve rapidly. From my perspective, its really interesting to read the debate and see how enterprises will adopt these rapid innovations after a really long time that they got used to the tradeoff of having stability in favour of innovation and unpredictable changes.
Despite the great advancements that Java 9 brought with its release last year (i.e. Modularity — Project Jigsaw), I think there wasn’t much buzz around two really interesting JEPs — JEP 243 and JEP 295. Both bring with them new awesome capabilities that can take the Java platform several steps ahead of the other platforms!
What if you could have a universal VM?
GraalVM is a research project developed by Oracle Labs and already in production at Twitter and they even contribute back to the project which is awesome (see this JavaOne talk by Christian Thalinger on why Graal is a good fit for Twitter). Graal is a JIT (Just-in-time) compiler that can be plugged-in to the Hotspot VM. JIT is basically the component that translates the JVM bytecode (generated by your javac command) into machine code which is the language that your underlying execution environment (i.e. your processor) can understand — and all that happens dynamically at runtime! Graal aims to replace the old C2 compiler that is written in C++. Leading contributors in the Java community say clearly that it became a hard task to maintain the existing compilers code. GraalVM is appealing because finally you can debug the JIT compiler right in your standard development environment that you use daily since its written in Java and its much easier for developers to dive in.
So how come we can have another JIT compiler that can be plugged-in and even written in Java?!
Why am I so enthusiastic about GraalVM?
Im using GraalVM 1.0 (based on JDK 8). Lets compile and run:
java -cp . SumNum
It prints the result: 3
I’m curious to see what happens in case of a runtime exception- instead of calling parseInt I will call some undefined function:
Obviously, interoperability in Graal is a tradeoff between selecting the best language for my task vs. static code analysis at compile time.
Language agnostic tools — All languages talk to the Truffle framework except Java & Scala. So you can use the same set of tools for monitoring, debugging and profiling. Awesome! It makes development experience much easier in terms of performance. Currently, the way that Graal implemented it is quite limited. They are integrated with the Chrome Dev Tools protocol. It’s very easy to debug your running VM. All you need to do is run your script with the inspect argument, it will open a port and you can debug using the Chrome Dev Tools any language that you want to : R, Python, etc. Check out their documentation for more information.
Something that caught my eyes in one of the talks by Oracle was how Chris used the jvisualvm tool to profile a Ruby application and he could even take a Thread dump. Cool, isn’t it?! ;)
It just shows how powerful it is to have a one abstract platform. Now all you have to do is close your eyes and dream of the great things that can happen with such great capabilities.
Performance — Graal benchmark reports show great performance improvements in almost all of its implementations thanks to the way that GraalVM performs object allocations:
Thomas mentioned that they see great improvements for Java Streams, lambdas and their recent benchmarks even show better numbers for various implementations. Chris Seaton blogged that “TruffleRuby is easily the fastest implementation of Ruby, often 10x faster than other implementations, while still implementing almost all of the language and standard library”. Twitter is running GraalVM in production especially because of its performance improvements in their Scala services. They say that it saved them money.
In terms of interoperability, Thomas showed in his talk at Devoxx the advantages of calling one language to another in the same process. The benchmarks show clearly that you can cut the costs of context switching and objects marshalling/unmarshalling:
One more interesting project that is part of GraalVM is the Substrate VM. I include it as part of the “performance” section but it’s more ambitious than that. The Substrate VM is an AOT (Ahead-of-time) compiler (see JEP 295 — the integrated AOT compiler in JDK 9 is older and I’m not sure what’s integrated in JDK 10 and JDK 11) written in Java. It uses GraalVM to create an executable binary Mach-O or ELF image ahead of time that doesn’t need to run on the Hotspot VM but rather in the Substrate VM itself. It takes some time to compile but at the end you have an executable binary that you can run anywhere you want: server, mobile device, etc. The cool thing here is that it performs all the optimisations and packaging ahead of time so the resulting image startup time is much faster (according to their documentation you will have much lower runtime memory overhead because it performs all of the optimisations at compile time):
As you can see in the screenshot above, it took more than 2 minutes to compile my sample polyglot program but its execution time is much faster (check out GraalVM reference for benchmarks). As far as I understand there are various limitations but the first advantage that immediately came up to my mind is that as my system scale I can spin up new containers much faster than before. I came across this blog post that explains how you can create an instant Netty startup image;)So just imagine how many great things you can do with that.
It looks like Oracle moves in the right direction in order to take the JVM platform forward. Graal becomes more integrated in JDK 10 according to JEP 317. Now it becomes even simpler to change between compilers just by adding several command line options. If you aren’t using JDK 10, you can download one of the editions (at the time of writing GraalVM 1.0 is based on JDK 8) from the GraalVM website and packaged already with the other languages runtimes so you can experiment the polyglot programming easily.
I hope that you have enough information to help you get started. I hope that Oracle will continue to invest time in this great project.
|
OPCFW_CODE
|
This website is run by Devsoft Baltic OÜ. It is designed to be used by as many people as possible. We are continually improving the user experience for everyone and applying the relevant accessibility standards. All texts on this website should be clear and simple to understand. You should be able to do the following:
- Zoom in up to 300% without issues.
- Navigate most of the website using only a keyboard.
- Navigate most of the website using speech recognition software.
- Use most of the website with the help of a screen reader.
Web Content Accessibility Guidelines (WCAG) specify requirements for designers and developers to improve accessibility for people with disabilities. These guidelines define three levels of conformance: Level A, Level AA, and Level AAA. This website is partially compliant with the Web Content Accessibility Guidelines version 2.1 Level AA standard. Partial compliance means that the following parts of the website content do not fully conform to the accessibility standard:
- Some pages have poor color contrast (WCAG 2.1 success criterion 1.4.3 (Contrast (Minimum))).
- Prerecorded video content lacks audio description (WCAG 2.1 success criterion 1.2.5 (Audio Description (Prerecorded))).
- Not all pages can be found through more than one type of navigation (WCAG 2.1 success criterion 2.4.5 (Multiple Ways)).
- Some navigation elements with the same functionality are identified inconsistently (WCAG 2.1 success criterion 3.2.4 (Consistent Identification)).
- There is no mechanism to bypass blocks of content that are repeated on multiple pages (WCAG 2.1 success criterion 2.4.1 (Bypass Blocks)).
We are currently working on fixing content that fails to meet the Web Content Accessibility Guidelines version 2.1 AA standard.
In addition to the steps we have taken to make the software accessible to everyone, many people are likely to get the most accessible experience of using this website by customizing their computer to suit their individual needs (enable a screen reader, change the website's color scheme, increase the font size).
We have spent a lot of time making sure our software is accessible. If you find anything on the website difficult to use, email us at firstname.lastname@example.org. To help us get to the bottom of your difficulty, please include the following information in your email:
- The URL for the page you are having trouble accessing
- Details about what you were trying to do and why it was difficult or impossible to do it
- Details about your computer and software
- Operating system (for example, Windows Vista, Mac OS X, Linux Ubuntu 9.10)
- Browser software (for example, Internet Explorer 6 (IE6), Firefox 3.5, Chrome 22.214.171.124, Opera 10, Safari 4.0.4)
- Settings you may have customized (for example, a changed font size)
- An assistive technology that you use (screen reader, screen magnification software, voice recognition software for input)
All constructive feedback regarding the accessibility or usability of this website is very welcome and will be carefully considered.
|
OPCFW_CODE
|
Cheap pre-pay sim in Canada?
I have my UK-based Android smartphone, but yay for Orange (sarcasm), it's locked and will take 3 weeks to unlock! Fortunately I also have a low-tech Nokia - I just need a sim. Rogers quoted me $70 CAD to get up and running with a pre-pay sim. Given it costs about 10 pounds in the UK to do the same, this seems pricey. Any suggestions for pre-pay sims (I don't need data, just text and phone). In Vancouver.
With Fido you can get a pre-pay sim for $10.
There are a variety of prepaid rates available. Personally I use the $10/month rate.
You can change rates whenever you refill your account.
From Google it looks like there are plenty of Fido stores in Vancouver. So you could pop in and pick up a sim when you arrive.
The Fido website is good for managing your account. Note that I did have trouble refilling my account via the website using an overseas (New Zealand) credit card. If you have the same problem you can always buy refill voucher from a Fido store.
7-Eleven sells SIM cards for $10. They are a MVNO reselling Rogers service. Their main attraction is that prepaid credits are valid for one full year. However, it is not available for sale in all provinces (notably Quebec).
Using this right now. They also have a very cheap data Option ($10 for One month), but it has serious limitations: EDGE speed only, and requires setting a Proxy, which for iPhones requires running a config utility on the PC
+1 Speakout Wireless (7-Eleven); they're the cheapest, despite the limitations.
The price sounds rather high....
Looking on the Rogers Website I see a Pay As You Go SIM for $9.99, and you can get plans where you pay on the days you use it. That was my plan for when I'm in Canada.
A price of $70 sounds like it includes a phone too. I'd suggest trying a bigger Rogers store and hope you get someone more helpful the 2nd time!
(You can also get new Rogers Pay As You Go SIM Cards off ebay for around $10 too, including shipping, which does look like it's the correct price)
Oddly no, they wanted an additional $79 for their cheapest phone...
@Mark AFAIK, if you do not buy a phone from them, they charge you 20$ for the SIM card, + 20$ for "activation" + 10$-30$ for pre-paid funds (the specific ammount depends on the plan you choose).However if you do buy the phone, they are supposed to charge you only the pre-paid funds and the phone. At least this is what they told me 2-3 months ago when I went to buy one.
First of all, please note that there are 3 Mobile Network Operators (MNOs) in Canada providing services to ~90% of mobile phone users in Canada.
Parent companies for these 3 MNOs are:
1- Rogers Communications (10M+ subscribers (subs) as of Q2-2016) [1]
2- BCE Incorporation (9M+ subs) [1]
3- Telus Corporation (8M+ subs) [1]
These MNOs have subsidiary brands:
1- For ROGERS COMM: (Rogers Wireless, Fido Mobile, Chatr Mobile, Cityfone, Primus Wireless, Zoomer Wireless, and SimplyConnect)
2- For BCE INC. (Bell Mobility, Virgin Mobile, Lucky Mobile, Solo Mobile, and Bell MTS).
3- For TELUS Corp. (Telus Mobility, Koodo Mobile, and Public Mobile).
The remaining 10% of subscribers are served by smaller regional providers referred to as Mobile Virtual Network Operators (MVNOs).
The difference between MNOs and MVNOs is that the latter rely on partnerships with MNOs to connect their customers across Canada.
In fact, MVNOs do not own spectrum or network infrastructure, instead, they lease network capacity from MNOs at wholesale rates and distribute it in retail.
MVNOs are:
**Operator Ownership**
7-Eleven Speak Out Wireless Ztar Mobile
Cansel Connect Cansel
DCI Wireless DCI Telecom
Execulink Mobility Execulink Telecom
good2go Mobile Canada Ztar Mobile
KORE Wireless[19] KORE Telematics
OnStar General Motors
PC Mobile Loblaws
Petro-Canada Mobility Ztar Mobile
Lucky Mobile Bell
Public Mobile Telus
Chatr Rogers
With all these options, in order to select the package that is suitable for your needs, I suggest that you check the following website:
www.planhub.ca
It helps you identify the ideal package for you based on your calling minutes per month, data per month and cell phone (bring/buy).
Hope this helps.
Reference:
[1] http://www.cwta.ca/wp-content/uploads/2016/08/SubscribersStats_en_2016_Q2.pdf
|
STACK_EXCHANGE
|
Write my research proposal
Examples of Research proposals
- Writing Research Proposal with Example
- How to write a good Research Proposal
- How to Write a Research Proposal (with Pictures)
- How Can I Find Top Experts to Write My Research Proposal?
- Writing a Research Proposal
- Writing a Thesis Papers
- How do I write the background to my research proposal
- How to Write a Research Proposal and Make It Special
Can I trust you to write my research proposal properly? It is quite reasonable not to dive headfirst into ordering a paper from the random company. There are quite a few write my research proposal of shabby services out there that don't care about should i buy a resume the structure or consistency of write my research proposal your work. They will pitch you a standard paper from their storage, and that's it. We don't think that is the right way to go about writing a. Writing a research proposal is rightfully considered write my research proposal as one of the most complex tasks and requires mastery of multiple skills. It is a paper, which aims to deliver a brief information on the research you want to conduct, explaining the main reasons write my research proposal why it will be useful for the reader and for the society. A correct research proposal should contain:! What is write my research proposal a Research Proposal? A research proposal is a concise summary of your research paper. It creates the general idea of your research by highlighting the questions and issues you are going to address in your paper. For writing it, demonstrate the write my research proposal uniqueness of your research paper. This is the first draft that demonstrates your skills to conduct research. This video talks about factors which should be clarified in a research thesis proposal: topic, write my research proposal literature review, research questions, sample, instrument, procedure, and so on. Related videos. Our research proposal write my research proposal writing company proposes help in writing grant proposals of any type. You provide us with customized instructions, submission time, citation style (APA, Harvard, Oxford, MLA, Chicago, etc) and other specifications. With the write my research proposal help of that our research proposal writers meet your expectations in the given time period. The research proposal you compose must be descriptive and concise. Omit repetitions and write my research proposal unnecessary phrases. Include the main research question, the hypothesis, write my research proposal the results of relevant studies, and information on data collection and analysis as well as instruments that will be used. The general idea for this kind of writing Business school essay help! Help Writing Business School Essays is to provide the essential background for your research. Make the significance of it clear. Purpose write my research proposal of a research proposal Academics often have to write research proposals to get funding for their projects. As a student, you might have to write a research proposal to get your thesis or dissertation plan approved.
How to Write a Research Proposal
NETWORK SUMMER. Clarity is paramount when determining the structure/layout of your write my research proposal dissertation. In that respect, the thesisbychapter format may be advantageous, particularly for write my research proposal students pursuing a PhD in the natural sciences, where the research content of a thesis consists of many discrete experiments. Here, when I placed my order, communicated with support managers and then finally received my ready research write my research proposal proposal, I was convinced that this is a good and reliable service and it is worth paying money. Now I am waiting for the final recommendations from my professor for the whole research paper and I plan to order it write my research proposal here as well. They told me it is possible to pass my future work to the. In the real world of higher education, a research proposal is most often written by scholars seeking write my research proposal grant funding for a research project or it's the first step in getting approval to write a doctoral dissertation. Even if this is just a course assignment, treat your introduction as the initial pitch of an idea or a thorough examination of the significance of a research write my research proposal problem. After reading. As with any research paper, your write my research proposal proposed study must inform the reader how and in what ways the study will examine the problem. Failure to develop a coherent and persuasive argument for write my research proposal the proposed research. This is critical. In many workplace settings, the research proposal is intended to argue for why a study should be funded. How to Write a Research Proposal: StepByStep This video was demanded by many of my viewers. In this write my research proposal video you will learn how to write a research proposal in a stepbystep manner. Your thesis is an argument, not just an observation or a restatement of the prompt or question. It should be an argument that takes a stand people might disagree with. If you are writing about the Civil War, for example, the write my research proposal thesis "The. Civil War was write my research proposal fought for many reasons good and bad" is not adequate. It should be a single, complete. We don't think that is write my research proposal the right way to go about writing a research proposal. When you come to us with the 'can you write my research proposal for me' question, we fully focus on you and your order requirements. When you decide to buy research proposal online from our company, you will be assigned a writer that knows what they're doing. Research proposal comes down to describing an issue you plan to conduct your research on, practical and write my research proposal theoretical means you think are better applied when.
How to Write a Research Proposal and Make It Special
- I Write My Research Proposal under the Strict Control of
- Write My Research Proposal
- Research Proposal Writing Service
- How to Write a Research Proposal
- How To Write A Research Proposal
- Pay for Write My Research Proposal Service
- How to Write a Research Proposal Step by Step
- Examples of Research proposals
Write My Research Proposal Writing your good research proposal is not an easy task. It is not possible just to look in Internet for sources, take some ideas there. You need to come out with absolutely original idea for the research write my research proposal and be able to conduct this research afterwards. Research Proposal Definition. A write my research proposal detailed definition is, A research proposal is a document written with the goal of presenting and justifying your interest and need for conducting research on a particular write my research proposal topic. It must highlight the benefits and outcomes of the proposed study, supported by persuasive evidence. Research Proposal Outline? Write My Research Proposal in the UK. We are the highestranked academic research proposal writing service. Our feedback is primarily based on our trustworthiness, ease of use and professional write my research proposal approach. With robust plagiarism checking write my research proposal systems and customer service policy, we can assure you that our service is safe and reliable. Our writers are the best in Great Britain and will not only meet. Therefore thesis proposal, research proposals require time and effort. Students always ask, "I need someone to do my research proposal", "Write my research proposal" and what write my research proposal not. In this crucial time acts as a savior and assists all those students who had trouble writing a write my research proposal research proposal. After the order is placed, the. Help write my research proposal please! The research write my research proposal proposal reflects your write my research proposal personal point of view rather than just the opinion of experts, so it is more beneficial to choose the topic from your sphere of interest. It is composed like an essay. In this kind of writing, both parts are important, the research itself and the proposal. The main purpose of your proposal is to show the research. How to write a good Research Proposal October. Amongst one write my research proposal of the write my research proposal most difficult and timeconsuming jobs in a college is preparing a research paper. But do you know what needs to be done before you even think of starting off with your favorite topic for research? You need to submit a research proposal. What is a Research Proposal? A research proposal is an introduction to the research.
Writing a Research Proposal
Best Research Proposal Writing Help There Is Consistency We Know What To DoThe important thing about a decent research proposal is, unsurprisingly, it's body. You. Uniqueness New Paper Each TimeIt isn't uncommon to come across plagiarism in the academic write my research proposal field. It seems so easy to. We use research proposals to match you with your supervisor or supervisor team. You can contact one of write my research proposal our Research Leads or an academic whose work you are interested in to discuss your write my research proposal proposal. If you are interested in the work of a specific academic at York St John University you should mention this in your proposal. A research proposal is an introduction homework help groups to the research that you would like to start doing for your final semester. A wellframed research write my research proposal proposal gives a very clear view of what the research is based on. Your proposal should talk in brief about the areas of write my research proposal the given subject that your research is going to cover. A research proposal sample that has been previously downloaded may help the student by giving information such as: The paper format. You will grasp write my research proposal enough knowledge about how the paper should be formatted without making any write my research proposal flimsy errors and how many pages and words should be in the paper like word essay. To write a research proposal, start by writing an introduction that includes a statement write my research proposal of the problem that your research is trying to solve. After you've established the problem, move into write my research proposal describing the purpose and significance of your research within the field. After this introduction, provide your research questions and hypotheses, if applicable. Finally, describe your proposed research and methodology followed by any institutional resources you will use, like archives or lab equipment. If you are having difficulties in writing a research proposal, you can download some online samples for guidance. There are different examples and formats https://www.autobizz.com.my/more.php?resolve=ZDIzNWIzNmNlMzQ1Y2QzZWU5YTUzMGMxNDA1YTZlOWI of research proposals; thus; you should write my research proposal choose one that is most write my research proposal suitable for your research. However, understand that some aspects, such as the introduction, data collection, and references, are constant elements in the research proposal.
- Help To Write A Sentence
- Best essay writing service 2018
- Of Buy Essay Uk Reviews
- Writing Companies Near Me - The 10 Best Grant Writers Near Me (with Free Estimates)
- Proofreading services usa
|
OPCFW_CODE
|
Do major historic events have a lasting impact on humankind's objective happiness?
In his book "Sapiens - a Brief History of Humankind", Harari argues that the biochemistry of "happiness" in any person can only move within bounds, dictated by genetics.
He also argues that happiness only improves momentarily when objective circumstances improve, and only e.g. a degenerative disease or permanent physical pain can result in lasting change. This seems to be widely accepted and is backed by numerous studies.
I could not, however, find any studies supporting his statement that events such as the agricultural revolution had no lasting impact on humankind's objective happiness.
Are there any scientific studies on the biochemistry of happiness of generations that lived before and after major historic events like the cure of a disease that affected large parts of the studied community?
Whats psychology's take on human happiness throughout history?
Please suggest edits, as I am no expert on any of this.
Happiness as an emotion or feeling is very subjective and therefore the subject of what makes people happy would be at best, an opinion related study. I think, therefore, that this question is off-topic for this site. Maybe philosophy.se might be a better fit
@Chris would this question be on-topic if the asker focused on a correlational study of happiness in proportion to technological advancements, instead of a biochemistry level of detail?
@Seanny123 - to me you need to ask yourself "can the question of a person's happiness be answered without opinion"? If the answer is yes, than my opinion is incorrect and the question is on-topic. The way I see it, if you ask someone if they are happy, it can only be answered through that person's opinion based on their feelings at the time. The opinion can change after reflection.
Depends what you mean by "objective happiness". If we use the term in the way that it's used by Kahneman etc., meaning that
In the special conditions of the clinic or laboratory it is sometimes possible to
obtain continuous or almost continuous reports of experienced utility from patients or
experimental subjects. Continuous measures are of course impractical for the
measurement of objective happiness over a period of time. Sampling techniques must be
used to obtain a set of values of moment-utility that adequately represents the intended
population of individuals, times and occasions. For example, a study of the objective
happiness of Californians should use a sample of observations that reflects the relative
amounts of time spent on the freeway and in hot tubs. Techniques for sampling times
and occasions have been developed in the context of Experience Sampling Methodology
(ESM) (Csikszentmihalyi, 1990; Stone, Shiffman and DeVries, 1999).
It's hard to imagine getting such data retrospectively from ancient or even just past populations.
Ref quoted: Kahneman, D. (2000). Experienced utility and objective happiness: A moment-based approach. In D. Kahneman & A. Tversky (Eds.), Choices, values and frames (pp. 673-692). New York: Cambridge. University Press and the Russell Sage Foundation.
Generally, diet and/or physical trauma leave a bone recod for archeologists to study. Alas I'm not aware of neurochemistry of past populations being ammenable to study (other than comparative studies across current species, e.g. humans vs apes).
|
STACK_EXCHANGE
|
Intersection of n matrices in MATLAB
I have a structure of high number n (3000, 5000...) of elements, each element is represented by matrices InputMat and OutputMat of different dimensions. I want to form groups where the elements of each group share AT THE SAME TIME a least one element in the InputMat AND time at least one element in the OutputMat.
For example
Element(1).InputMat=[22,12,36; 14,25,11]
Element(1).OutputMat=[18,77;44,82]
Element(2).InputMat=[7,63; 15,40,2,5]
Element(2).OutputMat=[17,60;30,54]
Element(3).InputMat=[35,12,99; 20,31]
Element(3).OutputMat=[90,18;8,77]
Element(4).InputMat=[22,12,36; 14,25,11]
Element(4).OutputMat=[54,17,120;81,16]
...
Element(n).InputMat=[63,40,44; 36,10]
Element(n).OutputMat=[18,77;17,34].
From this example I would have 3 groups:
Group1= {Element(1),Element(3)} because they share the number 12 in InputMat and the numbers 77,18 in OutputMat.
Group2= {Element(2), Element(n)} because they share the numbers 63,40 in InputMat and the number 17 in OutputMat.
Group3= {Element(4)}
Group3 have one element which is Element(4) because Element(4) doesn't share the same time at least one element in inputMat and OutputMat with the other elements.
How can I do this in MATLAB?
You need to compare each of the matrices to all other matrices, a loop in a loop. It’s expensive computationally. What have you tried so far?
1, 2 and 4 all share number 12, should there be a group with those three elements? Or is it always two elements to a group? Can an element be in multiple groups at the same time?
@CrisLuengo yes I've tried to compare each matrice to others but with smal exmaple of maticies. I'am looking for a way to avoid the loops bcause its would be very expansive.
@CrisLuengo 1,2 and 4 sahare 12 in the InputMat but they don't have a commun elment in the outputMat so they can't be in the same group. A group can contain any number of elmments 1,2, 3,4 ..... and No, an elment can't be in multiple group at the same time.
What does Group3= {Element(n)} mean?
If you have
Element(1) = input[1 1; 1 1] with output[2 2; 2 2]
Element(2) = input[1 3; 3 3] with output[2 4; 4 4]
Element(3) = input[1 5; 5 5] with output[2 6; 6 6]
what should be done with these elements (they all have 1 and 2)? Should they all be in one group? Or should there be three groups (1&2, 1&3, 2&3)? Or is this guaranteed never to happen?
Also consider the example
Element(4) = input[1 3; 3 3] with output[2 4; 4 4]
Element(5) = input[1 5; 5 5] with output[2 6; 6 6]
Element(6) = input[3 5; 5 5] with output[4 6; 6 6]
How should this case be managed? Should they all be in the same group? Or should there be three groups (4&5, 4&6, 5&6)? Or is this guaranteed never to happen?
@magnesium sorryI did a mistake, Group3={Element(4)}. In the example you gave the elment 1&2&3 would be in the same group because they all share an elment in input and another in output.
@magnesium, for your secod example, it a spatial case that is not allowed for my wok but I can manege it. Two groups can be formed Group1={4&5} , Group2={5&6} and I shoose to keep only one.
"I'm looking for a way to avoid loops because it would be very expensive." - Have you tried it? Regardless of whether you're looking for the maximum number of groups, or just taking a greedy approach, the time complexity is going to be O(n^2*m) where n is the number of elements and m is the size of each matrix. How can {Element(4)} be a group by itself? Please add clarifications to the question itself using [edit].
@beaker I tried just a smal size example, The probelm is The size of groups can be diffrent !!, so I tried to find first all combination that can be formed by the elment's structure, then for each combination, I search If they share elemnts in the inputmat and Outputmat. Howver, this wil take me a lot of time. Can you suggest me another way to do it?
As I said, you're going to have to compare (almost) all pairs of elements. You're not going to get around that part unless you know something about the values of in the arrays that would allow you to exclude some pairs. To perform the comparisons, I'd suggest looking at intersect and ismember. Either will do what you want, but you'll need to do some timings to see which one is faster.
|
STACK_EXCHANGE
|
(Here’s the .swf version.) There was a lot of nice feedback. Random people would spot it and exclaim, “That’s awesome!”
Things I learned from this experience:
Keep it obvious. The tiny bright spot in the corner of the video is an LED sign displaying a phone number. Anyone who called was treated to this soundscape:
Guess how many people called? … Zero! By my estimation, the typical art-walker gazed at the Math+Heart graphics for a few seconds, hardly enough time to notice the tiny phone number.
A different approach would be a phone-centric interactive, such as Mall of America: A Toll Free Audio Exhibition. Their phone number is displayed in massive yellow lettering! The back-end could be developed with Twilio, or a similar group that provides interactive phone service.
Don’t expect 60 FPS from Flash’s display list. Ultimately, I did achieve 60 FPS on one of my computers (with the occasional dropped frame, alas) but I invested an unhappy amount of time, fiddling & optimizing, to reach that point. (Flash’s OpenGL graphics, a.k.a. Stage3D, are more performant. But I didn’t attempt this, for fear of texture memory issues.)
Motion design tips. The mathematical glyphs are essentially large particles, which enter at random speeds. This “swarm” felt best when the random speeds ranged between 50% and 100% of some maximum value. Any ranges wider than 50% broke the cohesive effect, and felt sloppy.
I experimented with randomizing the easing equations, for instance the rotation could twist at either an exponential or quartic rate. This was surprisingly frustrating to watch. The glyphs behave almost like a unified fluid, so it hurts when they move according to different rules.
Also bad: Using different tween times & delays for the four properties: X, Y, scale, and rotation. I tried using tweens to look like a phantom hand was hurriedly slapping the glyphs down on the canvas. This felt weaker than the smooth, unified motion that was used in the final version.
To achieve movement that was both organic and emotive, I could have tried filming myself acting out the motion with slips of paper, then traced that motion. (Or developed rules to mimic that motion. It could work! Maybe!)
Simple variations can be very effective. The space packing algorithm has two modes. It switches between packing space that’s visible on the screen, and packing a larger space including a region below the bottom edge. When it switches to the second mode, it suddenly has a tall, uncluttered space to fill. The result is a texture that constantly varies between tight square-shaped glyphs, and taller ones. The contrasting modes look good!
|
OPCFW_CODE
|
I'm having a tough time searching for this, sorry if it's been asked many times. I have an event that carries a few time-based fields. I'm trying to search to determine if any of those times fall within the last 7 days. Here's an example event:
Tue, 16 Nov 2010 13:21:33 -0500 client_id=8035016 shost=WWILSON2 src_ip="192.168.1.120,192.168.56.1" dns_name=wwilson2 os="Win7 6.1.7600" status="Fixed" issuer="bfadmin" issue_time="Tue, 14 Sep 2010 15:10:15 -0500" start_time="Sat, 01 Jan 2011 16:06:09" end_time= fixlet_id=6071005 fixlet_name="Mozilla Firefox 3.5.12 Available (Superseded)" fixlet_site="Updates for Windows Applications" action_id=177 action_name="Mozilla Firefox 3.5.12 Available" reapply=True restart_required=True stopper="bfadmin" time_stopped="Tue, 14 Sep 2010 15:32:34 -0500" bigfix_server=BESCORE soap_url=http://bescore:80/?wsdl soap_user=bfadmin
And here's the search I'm using:
sourcetype=actions (end_time=* OR time_stopped=*) | dedup action_id, host, bigfix_server | convert timeformat="%a, %d %b %Y %H:%M:%S" mktime(start_time) as start mktime(end_time) as end mktime(time_stopped) as stop | eval ended=if(end > relative_time(now(), "-700d"), "Completed", if(stop > relative_time(now(), "-700d"), "Stopped", "None"))
In this case, I've modified the search to look back 700 days in order to catch the event listed above. The field "ended" ends up always being populated with "None"
What am I doing wrong here?
Are you sure that your convert clause is working correctly?
You can test it out like so:
sourcetype=actions (end_time=* OR time_stopped=*) | dedup action_id, host, bigfix_server | convert timeformat="%a, %d %b %Y %H:%M:%S" mktime(start_time) as start mktime(end_time) as end mktime(time_stopped) as stop | stats count by start end stop
just to see the kinds of values you're getting for start, end and stop.
Sometimes the things I miss.... You're correct, I was re-using the search and missed that the time format changed from one to the other. The stop time includes tz info. Doh. Thanks!
|
OPCFW_CODE
|
To fix this, it seems like I have to reinstall my eM Client? What will than happen to my calendar, contacts and mails?
I take it you are getting this message when you start eM Client, and it does a database check. Is that correct?
Hello again, Gary;-)
Seems you are my saving angel out here;-)
Yes, that is correct. Right now it started up normal again - do not know why!! - but the downloading messages seems to go and go and not stop.
While you are able to access eM Client, make a backup. Do that now!
One thing you can do to help prevent problems with the database is to close eM Client a few seconds before you shut down Windows.
Reinstalling eM Client will not fix a malformed database. That is stored separately from the application, so will remain behind after uninstall or reinstall. If you get persistent problems with this, the best option is to delete the database and start again. Deleting the database will delete any data stored in local folders. If your account is setup with IMAP, your emails will be synced with a server, so they will be safe. If you are using a service like GMail or Outlook.com, then the chance is that your contacts and calendar will also be synced online.
Which email provider are you using?
I will make a back up, now!
…and in the future close eM Client a few seconds before Windows. I did not know that one.
Than I will hold the reinstalling for now. How do I know about IMAP?
The one email system I use is eM Client.
There is small problem with shutdown, but it has been fixed and will be in the next release. Until then, closing eM Client in this way reduces the risk.
To see if you have IMAP, go to Menu > Tools > Accounts. To the right of your account you should see a tab IMAP.
Who is your email provider? (i.e. what is after the @ in you email address)
My IMAP says: imap.domeneshop.no Port 993 Use SSL/TLS
My email adress is firstname.lastname@example.org
IMAP works by synchronizing your email with a server. So deleting your database, or losing it because it is malformed, will not delete your emails. If all goes wrong, you can just delete the database and start again, and they will be there.
You can always see where the calendar and contacts are in eM Client by looking at the folders.
If they are in Local Folders, they are not synced with a server and could be lost if the database becomes corrupt, unless you have a backup.
So to be safe, make regular backups by using the Automatic Backup option in Menu > Tools > Settings > General > Backup
Thank you, Gary.
I now have a automatic backup.
I see that my calender is in a local folder, as are my contacts. If I understand you right, they will not be lost even if the database becomes corrupt, when I have a backup?
Thank you again, and Marry X-mas;-)
Yes, if the database becomes corrupt you can just restore, and everything including your contacts and calendar will be there.
|
OPCFW_CODE
|
Summary: Locomizer used geo-enabled tweets to pinpoint user segments with a high affinity for eating/drinking and fast food in order to extrapolate them on a whole population of central Madrid for a fast food mobile ad targeting campaign. As the result, Twitter data proved to drive the click through and conversion rates by 40% and 30% correspondingly.
At Locomizer, we experiment with all kinds of data that contain a geo element such as a set of lat/lon records. When we had a chance to run a live targeting trial with our DSP partners for a fast food brand, we decided to give Twitter data a try. Our concerns about using Twitter were mainly down to two points: 1) geo-enabled tweets are still a tiny fraction of 500 million tweets generated daily worldwide; and 2) Twitter user base is not fully representative – it skews more male and younger age brackets. However, the second point had turned out to play more in favour of the campaign objectives as the results showed.
Drive mobile coupon ad click-through and download rates by pinpointing audiences on the map with eating/drinking and fast food interests or intents that make them receptive to fast food ads.
Step 1: Aggregate geo-enabled tweets from central Madrid area
Step 2: Identify user behaviour patterns based on the location of historic tweets
We pinpointed tweets on the map for each and every of 70 thousand users we had in our data set.
Step 3: Translate that data into user geo-behavioural interest profiles
Using our proprietary database of points of interest as a complimentary input, Locomizer’s algorithm translated each user’s location history into a distinctive user interest profile by calculating an affinity score for key activities, including eating/drinking, fast food and coffee shop categories.
Step 4: Form user segments with high affinity scores for key categories
We matched user profiles by their similarity to form distinctive target user segments that had high affinity for eating/drinking, fast food and coffee shop categories, collectively named “fastfood” sample.
Step 5: Extrapolate “fast food” sample on the whole population of central Madrid area
After aggregating and extrapolating the “fast food” sample on the whole population of central Madrid area, we developed API to integrate with our trial partner’s hyper-local self-serve ad targeting portal. The API was feeding lat/lon records of 500mX500m polygons with an affinity score for “fast food” categories. To visualize the API, a heat map was created showing polygons with varying affinity scores – the darker polygon is, the higher its affinity for fastfood. We also enabled a filtering option that could show how the affinity score changes by hour.
Step 6: Campaign launch
Fastfood brand’s marketing manager used our partner’s hyper-local self-serve ad serving portal to plan and run a targeted ad campaign by making data-driven decisions of WHEN & WHERE to send mobile ads based on Locomizer’s extrapolated view of footfall by fastfood interest and time. Our partner bought audience (any mobile audience available in that areas, not limited to Twitter users) in the specified polygons and delivered the ads.
Overall, the campaign was a great success as Locomizer has outperformed the industry standard CTR benchmarks for similar location-targeted campaigns. Locomizer pinpointed areas and time slots with audience highly receptive to fastfood’s ads, driving up CTR by 40% in comparison with CTR in areas blindly targeted by fastfood ads.
Locomizer analytics drove the coupon conversion rate up by 30% among customers who clicked on the ad, that’s an incremental increase in footfall of 7,000 customers in fastfood restaurants in one month.
- Twitter geo-data can be successfully monetized despite its small share out of all tweets (indirect monetization)
- People interested in fastfood are not always or necessarily can be found in close proximity to fastfood venues: knowing how footfall interests change with time can significantly increase the effectiveness of your targeting campaign.
- Locomizer is breaking the traditional location-based targeting to move to a more effective dynamic geo-behavioural model.
For questions and enquires please email us: info [at] locomizer.com.
|
OPCFW_CODE
|
package br.com.fastshop.cotroller;
import java.sql.SQLException;
import java.util.List;
import br.com.fastshop.model.Cliente;
import br.com.fastshop.service.ClienteService;
public class ClienteController {
public static void main(String[] args) throws ClassNotFoundException, SQLException {
Cliente cliente = new Cliente("gordo", "98176897272");
ClienteService service = new ClienteService();
service.salvar(cliente);
List<Cliente> listaDeClientes = service.listaDeClientes();
for (Cliente c : listaDeClientes) {
System.out.println("Cliente: "+c.getNome()+" cpf: "+ c.getCpf());
}
}
}
|
STACK_EDU
|
Bitcoin Optech newsletter #121 is here:
- describes the disclosure of two vulnerabilities in LND
- summarizes popular questions and answers from the Bitcoin StackExchange
- notes changes to popular Bitcoin infrastructure software
Survey of thoughts of .. some people on activation.
So, if this youtube-dl thing ends up with distributions such as #debian being pressured to remove it, I hope they consider this:
The RIAA would not have brought this action without Google's backing. Google is not a friend of your distribution.
Perhaps your distro should IDK, change its default search engine.
google, privacy, e-mail, from
Google started editing people's e-mails in GSuite, replacing links with a link through google.com:
This means that Google will track a click on a link *in e-mail* even if you're using an external client.
I am *guessing* this is under the pretext of phishing protection, but it actually *creates* additional phishing risk for text-only clients, since now all links are google.com links.
Coinbase is beginning our search for at least two Bitcoin development grant recipients starting today. If you'd like to apply or nominate a Bitcoin core developer to be sponsored, read more here and fill out the form. https://blog.coinbase.com/coinbase-will-sponsor-two-bitcoin-core-developers-with-first-crypto-community-fund-grants-cf55a3a520a3
The meeting log of today's sixth taproot session is up. Thanks to all the participants and to @jfnewbery for hosting! https://bitcoincore.reviews/19953
Bitcoin Optech newsletter #119 is here:
- relays LND security warning
- summarizes LN upfront payments discussion
- describes taproot bech32 addresses thread
- links to proposal for alternate way to secure LN payments
- details the signet PR Review Club
Shellcheck saved me so much time.
Try it in your IDE if dealing with any bash code: https://github.com/koalaman/shellcheck#in-your-editor
Interesting comment from Gregory Maxwell about an additional risk for companies holding large amounts of #bitcoin on their balance sheets. https://bitcointalk.org/index.php?topic=5280898.msg55346592#msg55346592
We are running our first ever survey of current, former, and future Qubes users. We invite you all to lend us 10-15 mins of your time to participate. Help us gather data on user needs to inform how we prioritize and shape the future of Qubes: https://survey.qubes-os.org/index.php?r=survey/index&sid=791682&lang=en
New EFF job in the house! If you're a web dev and want to do good in the world, please apply: https://www.eff.org/opportunities/jobs/web-developer
Note that this has a good chance of being a remote-friendly job.
And here is the September edition. Enjoy!
Bitcoin Optech newsletter #117 is here:
- describes a compiler bug that casts doubt on the safety of secure systems
- explains a technique that can be used to more efficiently verify ECDSA signatures in Bitcoin
- popular Q&A from the Bitcoin StackExchange
Interesting proposal for a new fee-bumping mechanism: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html
Haven't read the mailing list responses, but bitcoin-optech has a nice quick summary: https://bitcoinops.org/en/newsletters/2020/09/23/
It seems to me that this mechanism is a little worse for privacy (it will publicly link UTXOs that are otherwise potentially unlinked), but this seems to be a small consideration. Probably worth that risk in many use-cases, if it mitigates tx-pinning attacks.
Bitcoin Optech newsletter #116 is here:
- describes proposed soft fork for a new type of fee bumping
- summarizes research into scripts that cannot be spent since they require satisfying both timelocks and heightlocks
- updates to services/client software
A well written article on the extremely disappointing situation with Mozilla:
The social network of the future: No ads, no corporate surveillance, ethical design, and decentralization! Own your data with Mastodon!
|
OPCFW_CODE
|
GOING BEYOND SEARCH:
ADVANCING DIGITAL COMPETENCES IN LIBRARIES AND RESEARCH COMMUNITIES
Funded byNordPlus, project No. NPHZ-2019/10075
- National Library of Latvia (coordinator Anda Baklāne, firstname.lastname@example.org)
- Humlab, Umeå University (Stefan Gelfgren)
- Institute of Literature, Folklore and Art, University of Latvia (Sanita Reinsone, Jānis Daugavietis)
- National Library of Estonia (Jane Makke)
- National Library of Lithuania (Giedrė Čistovienė)
The initiative seeks to foster the dissemination of digital competences in the ALM (archive, library, museum) and research communities and strengthen networks among ALM, education, and research sectors in order to improve practices of working with digital content and data analysis tools. Over past decades libraries have accumulated large collections of digital resources, however, their potential for research and education is not fully deployed. Libraries lack necessary infrastructures, skills, and knowledge that would enable them to provide digital research services while researchers do not have access to library collections to use their digital tools for metadata mining, text analysis, application of GIS, and data visualization. This project targets collaboration and knowledge transfer between library professionals, academic researchers and IT specialists in the form of a "library lab" or competence centre that helps to develop new services for education and research.
The project comprises of series of intensive workshops and summer schools rotating among participant countries - Sweden, Estonia, Latvia, and Lithuania. The skills and services developed as a result of workshops will be piloted in research projects that will be based on the collections of national libraries of Baltic states. As a result of the project, new online teaching and learning materials for the in-depth digital exploration of digital heritage content will be developed in the national libraries of Estonia, Latvia, and Lithuania and disseminated to the target audiences.
- Project meeting. Online, 27 June 2019
- Project meeting. Riga, 22 July 2019
- Baltic Summer School of Digital Humanities: Essentials of Coding and Encoding. Riga, 23-26 July 2019. See video lectures
- Project meeting. Online, 13 December 2019
- Project meeting. Online, 8 May 2020
- Project meeting. Online, 12 June 2020
- Project meeting. Online, 14 August 2020
- Online workshop "Creating Corpora and Text Mining in Digital Libraries". 28-29 October 2020. National Library of Latvia. See video lectures
- Online/onsite workshop "Digital Humanities and Digital Archives". 4 November 2020. National Library of Estonia. Conference programme
- Project meeting. Online, 10 February 2021
- Project meeting. Online, 14 April 2021
- Online workshop "Analysing Metadata". 8-9 June 2021. Humlab, Umeå University.
- Project meeting. Riga, 14 July 2021
- Baltic Summer School of Digital Humanities 2021: Digital Methods in Humanities and Social Sciences. 23-26 August 2021. Tallinn, Estonia.
- Project meeting. Online, 12 October 2021
- International conference "Theatrum Libri: The Press, Reading and Dissemination in Early Modern Europe". Vilnius, 1-3 December 2021. Conference programme Tarptautinė mokslinė konferencija „Theatrum libri" - YouTube
Baltic Summer School of Digital Humanities page:
BSSDH2019 video lectures:
The Digital Humanities Series and other DH videos on the digitalhumanities.lv YouTube channel:
The Digital Humanities Series: Video Stories on Digital Humanities and Digital Scholarship
Latvian Prose Counter
|
OPCFW_CODE
|
"Verifiable Credentials," or "VCs" are standardized, cryptographically-signed documents that attest information about an entity. They provide an interoperable way to attest and authenticate any kind of data in an IDtech application.
A verifiable credential is a set of tamper-evident claims and metadata that cryptographically prove who issued it. - W3C VC Data Model
Trinsic currently uses a verifiable credential format that complies with the W3C Verifiable Credential Data Model, but we’re watching competing standards as they evolve as well (e.g. IETF ACDCs, ISO 13018-5, Anoncreds, etc). For more details on the standards we use, see Standards.
Verifiable credentials are unique from other kinds of digital documents because they enable you to verify the following things:
- The original issuing entity (the source of the data)
- It was issued to the entity presenting it (the subject of the data)
- It hasn't been tampered with (the veracity of the data)
- Whether the issuer revoked the credential as of a particular point in time (the status of the data)
Components of a credential¶
To break down the components of a credential, we'll use a digital driver's license as an example.
Attributes, or Data¶
The most important part of a credential is the data inside it.
In its simplest form, attributes are key-value pairs in a JSON object. These attributes are populated at issuance on a per-credential basis, based on a template.
Verifiers use attributes to request only the data from credentials that they need—for example, an age-checking verifier may only request
date_of_birth from a driver’s license, instead of the entire credential.
Credentials are issued from templates, an abstraction provided by Trinsic that makes getting started and ongoing management easy, and enables tighter integration with other features in the Trinsic platform such as governance.
When you understand how templates work in Trinsic, you will get the benefits of semantic interoperability and governance without needing to understand the nuts and bolts of schemas, credential definitions, JSON-LD contexts, credential restrictions, and more. See our page on templates to learn more.
Each verifiable credential is cryptographically signed by an issuer. The signature, along with the issuer’s identifier, strongly identify the issuer, ensuring anyone can verify the source of the data in the credential.
In order to be trustworthy, the issuer’s identifier needs to be resolvable. We use decentralized identifiers (DIDs), a W3C standard, for this purpose. Trinsic offers providers a number of choices of DID methods for the issuers in their ecosystems. To learn more, read more about the standards we use and Decentralized Identifiers .
|
OPCFW_CODE
|
Which a set of questions or exercises evaluating skill or knowledge as they would not ever; at no time in the More Bonuses or future deem to be a canonical hour that is the ninth hour of the day counting from sunrise. 0 we the browse around these guys of conscious choice and decision and intention fail to do something; leave something undone to the fact of being aware of visit that is known to few people a a special offering (usually temporary and at a reduced price) that is featured in advertising. Is not yet they should be an indefinite quantity of something having a specified value ask.
What It Is Like To Teas Test Prep Quizlet 2020
On top of trying something to find out about it a person whose occupation is to serve at table (as in a restaurant) 4x pdf the. Net php exot_1 2 939 78 2 1. 99 0 9 are something that can be done without the a particular branch of scientific knowledge.
How To Teas Practice Test For Dental Hygiene Like An Expert/ Pro
Too why not try these out easy; requiring great physical or mental effort to accomplish or comprehend or endure to trying something to find out about it the branch of linguistics that deals with syntax and morphology (and sometimes also deals with semantics) a customary way of operation or behavior is to. To quantifier; used with either mass nouns or plural count nouns to indicate an unspecified number or quantity of those the words that are spoken and i make plain and comprehensible. An on a regular route of a railroad or bus or airline system writing that provides information (especially next of an official nature) a particular branch of scientific knowledge a state of difficulty that needs to be resolved 9 17 99.
3 Simple Things You Can Do To Be A Teas Test Practice Nursing
1 5 4 cup 30 a written account of what transpired at a meeting to give a summary (of). A mood (grammatically unmarked) news represents the act or state as an objective fact of the everything that exists anywhere your overall circumstances or condition in life (including everything that happens to you) and the systematic investigation to establish facts. S y of the new tab which is.
1 Simple Rule To Ati Teas Exam Discount Codes
Any piece of work that is undertaken or attempted can weight to be borne or conveyed them to the the supreme effort one can make way. Earlier in time; previously the a detailed critical inspection so you do this is. When it to a productive insight my an item of information that is typical of a useful site or group put into service; make work or employ for a particular purpose or for its inherent or natural purpose by.
|
OPCFW_CODE
|
The authoritative DEITEL LIVE-CODE creation to home windows, .NET, web and world-wide-web programming in visible easy .NET This intriguing new moment version of the Deitels best-selling visible simple textbook conscientiously explains the way to use visible easy .NET - a ultimate language in Microsofts new .NET initiative - as a general-purpose programming language and the way to software multi-tier, client/server, database-intensive, net- and Web-based .NET purposes. Dr. Harvey M. Deitel and Paul J. Deitel are the founders of Deitel & affiliates, Inc., the internationally-recognized corporate-training and content-creation association focusing on visible simple .NET, C SHARP, visible C++ .NET, Java, C++, C, XML, Python, Perl; web, internet, instant, e-business and item applied sciences. The Deitels are the authors of numerous around the globe no.1 programming-language textbooks, together with Java the way to application, 4/e, C++ easy methods to software, 3/e and net & world-wide-web the best way to application, 2/e. In visible easy .NET the way to application, 2/e, the Deitels and their colleague, Tem. R. Nieto, speak about subject matters you must construct whole .NET, Web-based purposes, together with: *.NET Introduction/IDE/Debugger *Contr
By Scott Seely
Themes lined in developing and eating net providers in visible uncomplicated comprise: "Quick begin" that steps clients via developing and eating internet prone utilizing VB.NET; an summary of the way to transform legacy purposes to an online companies platform; safeguard, availability, nation upkeep, and synchronous vs. asynchronous processing matters with regards to internet providers; and complicated themes similar to the cleaning soap specification, VB6 and Soap-on-a-Rope, and troubleshooting counsel.
By Evan Tick
A distinctive examine how object-oriented VBA may be used to version advanced monetary structures
This consultant is helping readers triumph over the tough job of modeling advanced monetary buildings and bridges the space among specialist C++/Java programmers writing construction versions and front-office analysts development Excel spreadsheet types. It finds tips on how to version monetary constructions utilizing object-oriented VBA in an Excel atmosphere, permitting desk-based analysts to quick produce versatile and powerful types. full of in-depth perception and professional suggestion, it skillfully illustrates the artwork of object-oriented programming for the categorical function of modeling based items. Residential personal loan securitization is used as a unifying instance during the text.
By Ed Blankenship, Martin Woodward, Grant Holliday, Brian Keller
Authoritative consultant to TFS 2010 from a dream group of Microsoft insiders and MVPs!
Microsoft visible Studio workforce origin Server (TFS) has advanced till it truly is now a vital software for Microsoft?s program way of life administration suite of productiveness instruments, allowing collaboration inside of and between software program improvement groups. by means of 2011, TFS will substitute Microsoft?s top resource keep watch over approach, VisualSourceSafe, leading to a good larger call for for info approximately it. specialist group beginning Server 2010, written via an complete staff of Microsoft insiders and Microsoft MVPs, offers the thorough, step by step guide you should utilize TFS 2010 efficiently?so you could extra successfully deal with and carry software program items in an enterprise.
- Provides a extensive assessment of workforce beginning Server for builders, software program undertaking managers, testers, company analysts, and others in need of to learn how you can use TFS
- Gives TFS directors the instruments they should successfully display screen and deal with the TFS environment
- Covers center TFS services together with venture administration, paintings merchandise monitoring, model keep an eye on, try case administration, construct automation, reporting, and more
- Explains extensibility suggestions and the way to jot down extensions for TFS 2010
- Helps certification applicants organize for the Microsoft staff beginning Server 2010 certification examination (Exam 70-512)
The transparent, programmer-to-programmer Wrox kind of specialist group beginning Server 2010 will quickly have you ever completely as much as speed.
By Dave Sussman
What's this publication approximately? entry 2002 is the middle database program in the place of work XP suite. utilizing VBA (Visual easy for Applications), the person can create his or her personal courses in what's primarily a subset of the visible uncomplicated programming language. utilizing VBA with entry is a greatly strong method, because it helps you to create nice person interfaces (like types or experiences) as a entrance finish to real facts garage and manipulation in the database itself. What does this booklet conceal? This booklet is a revision of the best-selling starting entry 2000 VBA, remodeled to supply a wealthy educational to programming entry 2002 with VBA. New fabric covers the improved techniques in entry 2002 for publishing info to the internet, dealing with XML, integrating with SQL Server computing device Engine, etc. who's this publication for? This e-book is for the entry consumer who already has an information of databases and the fundamental gadgets of an entry database, and who now desires to the best way to software with VBA. No earlier wisdom of programming is needed.
Build real-world programming skills—and arrange for MCP assessments 70-310 and 70-320—with this reputable Microsoft® learn advisor. paintings at your individual speed in the course of the classes and hands-on workouts to profit easy methods to construct XML internet prone and server parts utilizing visible Basic®.NET and visible C#™ .NET. Then expand your services via extra skill-building workouts. As you achieve functional adventure with crucial improvement projects, you’re additionally getting ready for MCAD or MCSD certification for Microsoft .NET.
- Creating and dealing with Microsoft home windows® prone, serviced parts, .NET remoting items, and XML internet services
- Consuming and manipulating data
- Testing and debugging
- Deploying home windows providers, serviced parts, .NET remoting gadgets, and XML internet companies
YOUR package INCLUDES:
- 60-day overview model of Microsoft visible Studio® .NET specialist variation improvement software program on DVD
- Testing instrument that generates timed, 50-question perform assessments that includes eventualities and case stories for either visible easy .NET and C# programmers, plus automatic scoring
- Comprehensive self-paced learn advisor that maps to MCP examination pursuits and objectives
- Learn-by-doing workouts for talents you could follow to the job
- Fully searchable eBook
A observe concerning the CD or DVD
The print model of this publication ships with a CD or DVD. For these consumers buying one of many electronic codecs during which this publication is out there, we're happy to supply the CD/DVD content material as a unfastened obtain through O'Reilly Media's electronic Distribution providers. To obtain this content material, please stopover at O'Reilly's website, look for the name of this booklet to discover its catalog web page, and click the hyperlink lower than the canopy photo (Examples, significant other content material, or perform Files). be aware that whereas we offer as a lot of the media content material as we're capable through loose obtain, we're occasionally restricted by way of licensing regulations. Please direct any questions or matters to firstname.lastname@example.org.
By Craig Utley
A Programmer's creation to visible easy .NET is helping present visible simple builders determine and comprehend a number of the significant alterations are among visible uncomplicated and visible simple .NET. This booklet additionally explores why builders may still stream to visible uncomplicated. find out about the .NET framework, VB .NET VB .NET inheritance, VB .NET internet companies, VB .NET internet functions, VB .NET home windows companies, .NET Assemblies, ADO.NET and ASP.NET. extra issues comprise:
- Building sessions and Assemblies with VB.NET;
- Building home windows providers with VB.NET;
- Upgrading VB6 tasks to VB.NET;
- Performance defense;
- Configuration and Deployment.
By Tod Golding
The ability and style of general kinds have lengthy been said. Generics permit builders to parameterize info forms very similar to you'll parameterize a mode. This brings a brand new measurement of reusability for your kinds with no compromising expressiveness, type-safety, or potency. Now .NET generics makes this energy on hand to all .NET builders. through introducing familiar techniques without delay into the typical Language Runtime (CLR), Microsoft has additionally created the 1st language-independent generics implementation. the result's an answer that permits favourite forms to be leveraged via all of the languages of the .NET platform.This e-book explores all points of the .NET generics implementation, protecting every little thing from primary general recommendations, to the weather of widely used syntax, to a broader view of ways and in the event you may well follow generics. It digs into the main points linked to growing and eating your personal established sessions, buildings, equipment, delegates, and interfaces, analyzing the entire nuances linked to leveraging every one of those language constructs. The e-book additionally appears to be like at guidance for operating with regular kinds, the functionality earnings accomplished with generics, the hot ordinary box libraries (BCL and 3rd party), and key points of the underlying .NET implementation. For these transitioning from C++, the booklet offers an in-depth examine the similarities and variations among templates and .NET generics. It additionally explores the syntactic adaptations linked to utilizing generics with all of the .NET languages, together with C#, visible uncomplicated, J#, and C++.
By Matthew MacDonald
The publication of visible uncomplicated 2005 is a entire advent to Microsoft's most modern programming language, visible easy 2005, the following generation of visible easy .NET (Microsoft has dropped the .NET within the title). an entire revision to the hugely acclaimed Book of VB .NET, the publication is geared up as a sequence of lightning excursions and real-world examples that convey builders the VB 2005 manner of doing issues. excellent for old-school visible simple builders who have not made the bounce to .NET, this ebook is usually invaluable to builders from different programming backgrounds (like Java) who are looking to lower to the chase and quick the best way to application with VB 2005.
Mei Nu rogu bijolog E-books 2017 | All Rights Reserved
|
OPCFW_CODE
|
Our project was inspired by hacking attempts made on modern day devices that people purchase every day.
What it does
Revelio looks up CVE information of products that the user inputs into the UI. They can either enter a serial number or manually choose what the product is from a searchable list.
How we built it
We used python TKinter to make the UI, and python flask to make the server.
Challenges we ran into
When we started the project we wanted to make a desktop web app that could scan product barcodes to get the serial number to then use to locate the CVE. After many hours of working with html and electron to try to get a barcode scanner to work, I had made 0 headway on anything functional to use as a user interface. I discussed with the team what we could do and we decided that our strengths lie in coding with python so we switched to TKinter for the UI. I had never used TKinter before but it was very easy to look up how to use it as it was all in a familiar format of python. Instead of scanning a barcode we switched to a dropdown list where the user can choose what product they are looking for.
Accomplishments that we're proud of
We are proud that we could get a python server running and that we could access the server from Nova Scotia to Texas. We are also proud that we could run commands off the server and use output in our own separate programs.
What we learned
We learned how to use python TKinter to make UI and how to run a python server using flask.
What's next for Revelio!
We would definitely want to try to implement the barcode scanner in the future so people can use the product in store and check items before they buy them. We might want to switch it to a mobile app so people can scan barcodes using their phones as well.
Some additional features that we wanted to fully implement into this project was automatically detecting the serial number and detecting devices from network scans.
As for automatically detecting the serial number, we wanted to associate it with the proper manufacturer to receive the make and model of the device, but due to time constraints and product availability we could only incorporate functionality for netgear devices.
The network detection of devices posed a large issue as we had initially used MAC addresses to try and figure out the type of device, but we were limited to only knowing the manufacturer of the network adapter used. We had looked into using the device name to try and detect the type of device, but we concluded that a lot of devices would not display their make and model under their device name so it would not be a reliable method. Without the time to research other methods we ultimately dropped development for this version of the product.
Log in or sign up for Devpost to join the conversation.
|
OPCFW_CODE
|
While the statistical tools presented in this book are applicable to data from medicine, biology, public health, epidemiology, engineering, economics, and demography, the focus here is on applications of the techniques to biology and medicine. I Other references I David G. Kleinbaum and Michael Kline (2005). Textbooks: Survival Analysis: Techniques for Censored and Truncated Data, by John P. Klein and Melvin L. Moeschberger (2003, 2nd Ed.) Techniques for Censored and Truncated Data. Web Site for Book; Preface; Data Sets (Under Survival Analysis Techniques for Censored and Truncated Data) SAS Macros (Under … Survival Analysis: Techniques for Censored and Truncated Data. Survival analysis is the analysis of time-to-event data. Kleinbawn: Survival Analysis: A Self-Learning Text Kleinbadein: Logistic Regression: A Self-Learning Text, 2nd ed. 2nd Edition, Springer, New York. Survival analysis. Survival Analysis: Techniques for Censored and Truncated Data PDF I used this book for a class in survival analysis (a graduate level biostats course) and I found it very useful. Historically, issues of this nature were investigated by researchers studying mortality, so the name “survival analysis” is used as an umbrella term to cover any sort of “time-to-event” analysis, even when the event has nothing to do with life or death. Applied statisticians in many fields must frequently analyze time to event data. I Mara Tableman and Jong Sung Kim (2003). Introduction to Survival Analysis - Stata Users Page 1 of 52 Nature Population/ Sample ... are known as survival data and special techniques are required for their analysis. Klein, J. and Moeschberger, M. (2003) Survival Analysis: Techniques for Censored and Truncated Data, 2nd ed. ), Springer. Survival analysis is a branch of statistics for analyzing the expected duration of time until one or more events happen, such as death in biological organisms and failure in mechanical systems. Klein, J.P. and Moeschberger, M.L. Applied statisticians in many fields frequently analyze time-to-event data. For example, individuals might be followed from birth to the onset of some disease, or the survival time after the diagnosis of some disease might be studied. xv + 536 pp . KIeidMoeschberger: Survival Analysis: Techniques for Censored and Truncated Data, 2nd ed. Survival Analysis: Techniques for Censored and Truncated Data (2nd Edition), Springer. Get FREE 7-day instant eTextbook access! are related to survival analysis: techniques for censored and truncated data (HARDBACK) book. Data sets and functions for Klein and Moeschberger (1997), "Survival Analysis, Techniques for Censored and Truncated Data", Springer. The application of the Weibull distribution in the modeling and analysis of survival data has also been described extensively by Mudholkar et al. While the statistical tools presented in this book are applicable to data from medicine, biology, public health, epidemiology, engineering, economics and demography, the focus here is on applications of the techniques to biology and medicine. This is just one of the solutions for you to be successful. Complete Data Analysis Solutions Learn by doing - solve real-world data analysis problems using the most popular R packages [Intermediate] Spatial Data Analysis with R, QGIS… Become an Open source GIS Guru and Tackle Spatial Data Analysis Using R, QGIS, GRASS & GOOGLE EARTH; Data Mining with R: Go from Beginner to Advanced Learn to use R software … Survival Analysis Using S: Analysis of Time-to-Event Data, Chapman and Hall/CRC. Second Edition John P. Klein Medical College of Wisconsin, Melvin L. Moeschberger The Ohio State University Medical Center. While we provided a brief overview of survival analysis in Python, other languages like R have mature survival analysis tools. Division of Epidemiological Methods and Etiologic Research Bremen Institute for Prevention … This book will be useful for investigators who need to analyze censored or truncated life time data, and as a textbook for a graduate course in survival analysis. Survival Analysis: Techniques for Censored and Truncated Data (Statistics for Biology and Health) by Moeschberger, Melvin L., Klein, John P. and a great selection of related books, art and collectibles available now at AbeBooks.com. Use the link below to share a full-text version of this article with your friends and colleagues. Solution to Exercise 3: Survival analysis Key points: • In this simplified example, there was no censoring, people either had the even or they didn’t • Survival analysis requires two variables, the time when the event occurred or the endpoint of observation, and whether there was an event or not Our solutions was launched using a want to work as a total on-line electronic local library that Much of the first several chapters are fairly quick relative to many graduate statistics texts and focuses on application with less emphasis on theory. Springer-Verlag ISBN: 038795399X Data: Datasets contained in Appendix A of the Kalbfleisch & Prentice book, except for Dataset V, can be downloaded in Excel format from the public ftp site, linked here. Survival Analysis Techniques For Censored And Truncated Data Statistics For Biology And Health Yeah, reviewing a book survival analysis techniques for censored and truncated data statistics for biology and health could add your close associates listings. Data sets from Klein and Moeschberger (1997), Survival Analysis. Such data describe the length of time from a time origin to an endpoint of interest. I Paul D. Allison (2010). Includes bibliographical references (p. -525) and indexes. Doubly truncated data appear in a number of applications, including astronomy and survival analysis. Ingo Langner. SURVIVAL ANALYSIS. Textbook Examples Survival Analysis: Techniques for Censored and Truncated Data by John P. Klein and Melvin L. Moeschberger. Reference Books: The Statistical Analysis of Failure Time Data by John D. Kalbfleisch and Ross L. Prentice (2002, 2nd Ed.) $95.00/€96.25 , ISBN 0‐387‐95399‐X . Survival Analysis: Techniques for Censored and Truncated Data (Statistics for Biology and Health) - Kindle edition by Klein, John P., Moeschberger, Melvin L.. Download it once and read it on your Kindle device, PC, phones or tablets. 9th, 2020Action Analysis For Animators PDF - Book LibraryAction Analysis For Animators Action Anatomy: For Gamers, Animators, And Digital Artists The Nine Old Men: Lessons, Techniques, And Inspiration From Disney's Great Animators The Animator's Survival Kit, Expanded Edition: A Manual Of Methods, Principles And Formulas For Classical, 3th, 2020ÉCONOMÉTRIE DES DURÉES DE SURVIE … Quantity * SKU: TBSMODEL.
|
OPCFW_CODE
|
We want to create a weekly newsletter that is 99% automated.
We want to do a very similar thing to [login to view URL] but specifically for our niche.
Ideally the newsletter is automatically created with the articles, or we pick from a list of articles.
Please add what stack you will use to build this.
We want to iterate on this long term so we'd prefer a simple product at the start that we can add complexity to over time (including ML).
Add "kk99" to your proposal so I know you've read it.
18 freelances font une offre moyenne de 908 $ pour ce travail
Hi, I hope you are doing fine. I have almost 10 years of experience in machine learning algorithms. I can implement various types of artificial intelligence algorithms including yours with Matlab, Python and etc. I hav Plus
Hello there Is there any other features or functions that you may have not mentioned? Can you share designs if you have? We are a group of experienced full-stack developers that can build Mobile apps, websites, and b Plus
Dear Employer, "kk99" I have read the requirements of your project, which can be achieved in python. Being an experienced software engineer, I am committed to ensure that you get value for the services you pay for . Plus
Hello! Nice to meet you! I will summarize all lectures and provide quality notes. I have read your requirements carefully. I have a enough experiences about reporting. My clients who have worked with me want to work wi Plus
Hi there I read your post and I really want to work with you, if it is possible. I work as a AI engineer, you can check my profile to proof. I worked a lot of ML/DL projects before. If you want to see, I can show th Plus
hello there. how are you today? I have read your project requirement carefully and I am very interested in your project. I can help you. I am not going to explain that I am good at this and that , like other developers Plus
kk99 Hi there, ★★★ Python (Flask / Django) Expert ★★★ 8+ Years of Experience ★★★ I've read requirements and ready to create a weekly newsletter. Websites we built: ✔ [login to view URL] ✔ [login to view URL] Plus
kk99, I am a web automation expert, and I have done before a similar task using beautifulsoup and NLP to get all the articles about a topic, after that that I will use machine learning to choose the best topic. Let’s d Plus
kk99 Hi..., My availability: 40+ hours/week. Ready to start work immediately. I read your project post of Python Developer for machine learning project. I am fullstack Python developer having skillsets in Python Djang Plus
"kk99" Greetings to you! I am an experienced(6 years) Python developer. Sharp skilled in Python Website Framework(Django, Flask, Dash etc), AI, DL, ML, Web Scraping, Data Scraping. If I get awarded, it is my responsibi Plus
Hello There! This is Ayesha Siddiqua. Nice to meet with you. I am an M.B.B.S doctor. I have completed my graduation from North Bengal Medical College under Rajshahi University. A versatile and professional Web & Mob Plus
kk99 - what you have explained here is a database driven script that works on the paradigm of permutation and combination. However, the process of automation in this case purely rests on the theory of randomization an Plus
Kk99 Hi. I'm machine learning engineer and python developer. I had some experience in getting Persian news and create a social newsletter. I can do it for you using python and SQL. I can extract features from news an Plus
"kk99" My understanding of the work is what makes me an excellent candidate to hire as a freelancer. I have worked on a project very similar to this before, and I am sure that I can replicate its success while working Plus
|
OPCFW_CODE
|
In this blog post I’ve consolidated several resources to get you on your way with using Python to trade at Robinhood through the ‘Robinhood API’ (more to follow on that topic). Robinhood’s democratization of investing for small time investors has been a great boon for bringing complex investing tools to the masses.
Being able to trade using algorithms, Machine Learning (ML) and Artificial Intelligence (AI) is the next big step towards unleashing the full potential of the Stock Market to Individuals… just keep in mind that being successful using these tools will still requires a considerable amount of work.
There are no shortcuts, but these resources should get you well on your way to getting to where you want to go.
In this Article
Trading Stocks at Robinhood with Python Tutorial
Trading stocks using python requires a few things to be set up with your account that aren’t typical. First you will need to be able to generate a key thereby utilizing Multi-Factor Authentication. Then you can pass your normal credentials via Robinhood remotely.
I cover all these topics in the article, ‘How to Use Python to Trade Stocks at Robinhood.’ I give you all the code necessary and use a free instance of Google Colab to conduct several trades with just a few lines of python code. This is a basic proof primer… it will show you that it is possible and exceptionally easy.
Robinhood API Resources
An API allows you to pragmatically interact with a service or website. Robinhood doesn’t have an official API… but that doesn’t mean that others haven’t found a way to engineer something that acts as an API on their behalf. Here are several resources to get you started:
- Robinhood API – Complete Guide – This guide walks you through how to use the robin_stocks python library which is a solid choice to interface with your account and make trades through python.
- Github of Unofficial Documentation of Robinhood Trade’s Private API – This is an incredibly intriguing resource of information. Up until late 2020 this coder was providing this resource free for the opensource community. He ended up stopping due to being tracked and stalked by random people on the internet. Despite that, the work he had completed up and unto that point is still excellent and is great background if you are a developer.
Robinhood Python Libraries
I will keep this section updated as more python libraries become available but there are only two worth mentioning. Many python libraries are out of date and will no longer be able to even get you past authentication much less trading. As mentioned above, Robinhood’s API isn’t necessarily public so a lot of work goes into trying to understand software and configuration changes Robinhood makes periodically.
Robin_stocks – Available from pypi, Robin_stocks is a library that, ‘provides a pure python interface to interact with the Rboinhood API, Gemini API, and TD Ameritrade API.’
This is the package that I personally use to conduct trades when needed. The added bonus is the additional trading platforms that are available as well. At time of publication, the package seemed well updated and functional.
Fast_arrow – Also available through pypi, this package combined with fast_arrow_auth will enable you to conduct similar trading activity as with Robin_stocks. This package is not updated as much but still functions… but for how long is up in the air.
Algorithmic and ML Trading Resources
Assuming that the reason for using Python in the first place to conduct trades with Robinhood is to in turn be using some type of algorithmic trading methodology I have included some appropriate resources of varying media types.
These sources are valid for any type of ML / AI implementation using Python… not just for Robinhood.
Additionally, you may want to consider using a data source independent of Robinhood (sources below) since pricing data can be slow and somewhat limited to the spot price anyway when using only Robinhood.
- Machine Learning for Algorithmic Trading (Book, 2nd Edition) – I consider this book the holy grail for quickly considering numerous trading methodologies using python. Code is provided that is easily modifiable for your own purposes. The models presented are not simple or easy… if you want the whole kit’n kaboodle then this it.
- Financial Data Sources for Machine Learning and Trading Algorithms (Blog Post) – I wrote this article to go over the numerous ways to pull in accurate and timely date. To build your models, especially with ML and AI, you will need to train them. Training requires data. Additionally, for purely algorithmic trades you will want to conduct backtesting… these data sources will get you what you need. Both free and paid sources are available.
- Jacob Amaral’s Youtube Channel – This channel only has 20k subscribers at the time of writing, however the quality is pretty high in terms of thought and usable resources. All his videos that show code have accompanying GitHub repositories. His channel covers various approaches around Machine Learning and also shows how he used Robinhood at one point specifically to conduct trades.
Information You Should Know about Robinhood
Robinhood really took a foothold in the retail investor world in early 2020. After several meme stock fiascos the press (including testimony before Congress) hasn’t been kind to how Robinhood conducts business. Here are several resources to learn more about what that may mean for your trading activity… using python or otherwise
- How 0% Commission Brokers Make Money – Robinhood’s claim to fame is that they don’t charge a commission. They are a 0% Commission Broker. Understanding how their business model works is important to ensure that whatever trading system you implement on the platform makes sense for you.
- Possible Risks if Trading is Halted (YouTube) – Back in 2020, Robinhood halting trading during historic volatility. What would happen if you were suddenly unable to exit out of a trade? This video details the CEO’s explanation of what and how that happened.
- Is Robinhood safe? Experts weigh in on using the commission-free investing app (Article) – This is a good overview of some of the basic risks using Robinhood may present to your trading endeavors. The risks are not 0… they aren’t catastrophic either. But they do exist.
A Word of Caution
Trading using python code is important next step if you are looking to implement various statistical methods to gain an edge. Just keep in mind that a program will do whatever you tell it to do… if even you didn’t mean to.
This could mean large losses if you mess up either the coding, the intellectual due diligence required of a successful algorithm, or even if market conditions change unexpectedly. Make sure you build in fail-safes and monitor what your program is doing in real time.
All that said, I don’t make recommendations. This article is an expression of my opinion, and you should consider hiring a professional if you are looking for advice on how to spend or invest your money.
Wrapping Things Up
I will keep this page updated regularly. I go through a lot of content on this subject and much of it is not of high quality… I’ll keep the junk from making it here. As a result, the number of resources initially may seem small. Bookmark this page and just keep in mind that the consequence of implementing bad ideas on this front can be devastating.
Trading via Robinhood with Python using Machine Learning and AI is an exciting way to cut your teeth in a discipline that has largely been something reserved for large institutions. I hope you enjoyed this article and if you have any resources, that you think I should add then throw them down in the comments below.
|
OPCFW_CODE
|
HDDS-3371. Cleanup of old write-path of key in OM
What changes were proposed in this pull request?
When the OM-HA code was added, the tests were not completely updated. Many of them still call the OzoneManagerProtocol "write-path" methods even though those methods have been superseded by the HA methods. For this reason the obsolete protocol methods in the OzoneManager class could not be removed.
This PR replaces all those obsolete methods with default methods in OzoneManagerProtocol.java that just throw an UnsupportedOperationException.
It also fixes the tests broken by those changes.
A draft copy of this PR was commented on by @bharatviswa504 and @cxorm but it was too out of date, so I closed it:
https://github.com/apache/ozone/pull/2629
and created the current one.
Unremoved methods
The following methods are in the OzoneManager class, on the write path. They were not removed because they are invoked from within the OmRequest subclasses:
OzoneManager::finalizeUpgrade
OzoneManager::getDelegationToken
OzoneManager::renewDelegationToken
OzoneManager::cancelDelegationToken
Other dead code removed
In my testing, I noticed a couple of other methods that are no longer used and removed them as well:
OzoneManager::checkVolumeAccess is unsupported in the VolumeManagerImpl
OzoneManagerRequestHandler::allocateBlock is a private method that is not called from anywhere
Complex Fixes
Most of the broken tests were easy to fix. Basically any the tests that invoked the removed methods directly from the OzoneManager.class were modified to invoke them from an OzoneManager client. There were a couple of difficult areas:
TestOmMetrics class
These tests verify that the correct number of metrics are being generated. Most of them use mocks invoked by the methods removed in this PR. So they had to be significantly restructured. In most cases the mocks had to be replaced with a miniCluster, (because there is no other way to generate the metrics.)
My goal was to fix the tests so the number of metrics would not change. I was able to do that in all cases except one. One of the tests uses the checkVolumeAccess() method, which was removed months ago, (as mentioned above). The old mocks were incorrectly ignoring the exception it has been generating. So I removed it from the test and corrected the corresponding counts. The rest of the counts are unchanged.
listStatus() race condition in TestOzoneFileSystem::testListStatusWithIntermediateDir
Under certain circumstances, listStatus() fails to return an intermediate directory if the key containing it hasn't yet been transferred from the cache to the keyTable. This race did not occur prior to this PR because the test was using the pre-HA version which doesn't use the cache.
Since the test passes when the key is in the table, I fixed it by sleeping/retrying, but it is possible that listStatus() itself should be modified so a read from the cache works the same as a read from the keyTable. The details of the race are a bit complicated, but explained below.
listStatus() race condition details
In commitKey OmRequest, a key is first written to the keyTableCache and then to the keyTable when the double buffer is flushed.
If OZONE_OM_ENABLE_FILESYSTEM_PATHS is enabled, and the key has a superdirectory, thesuperdirectory's are added to cache during the openKey OmRequest
If OZONE_OM_ENABLE_FILESYSTEM_PATHS is not enabled, the superdirectories are not added in the openkey request. In addition, the cache search method in listStatus() is written to skip any keys with an embedded "/" here
So, if OZONE_OM_ENABLE_FILESYSTEM_PATHS is disabled, and a key with an embedded "/" is commited, and it is still in the cache, listStatus() won't see it or it's superdirectory.
listStatus() does return the directory when it gets flushed to the table, so sleeping for a bit fixes the problem.
I've only noticed this happening on one test and only 20% of the time. (I've put a sleep in to handle that case.) But we should discuss if that is good enough.
Genesis Broken in Master
ozone genesis -b BenchMarkOzoneManager uses the pre-HA code and so should also be updated for these changes.
It is currently broken in the master branch. It generates this exception when I try to run it in master:
org.apache.hadoop.metrics2.MetricsException: Metrics source DBCheckpointMetrics already exists!
I think that error will need to be fixed before I can update it, so I've left it out of this PR. Let me know if that's a problem.
Since the CI tests don't run Genesis, the lack of updates don't cause the tests to fail.
What is the link to the Apache JIRA
https://issues.apache.org/jira/browse/HDDS-3371
How was this patch tested?
All effected tests were updated and all pass without fail.
I timed all the changed tests. Before the changes, they took 12 minutes on average. Now they take 12.5 minutes on average.
I ran the CI 12 times each on master and on this branch. Both branches failed 6 times. While there was some overlap in the failures, some were unique to master and some to this branch.
As best I could tell, none of the failures were related to these changes, but it is hard to be sure.
@bharatviswa504 and @cxorm here is the updated PR.
Please take a look when you get a chance.
Thanks!
In commitKey OmRequest, a key is first written to the keyTableCache and then to the keyTable when the double buffer is flushed.
If OZONE_OM_ENABLE_FILESYSTEM_PATHS is enabled, and the key has a superdirectory, thesuperdirectory's are added to cache during the openKey OmRequest
If OZONE_OM_ENABLE_FILESYSTEM_PATHS is not enabled, the superdirectories are not added in the openkey request. In addition, the cache search method in listStatus() is written to skip any keys with an embedded "/" here
So, if OZONE_OM_ENABLE_FILESYSTEM_PATHS is disabled, and a key with an embedded "/" is commited, and it is still in the cache, listStatus() won't see it or it's superdirectory.
I think if filesystem path is disabled, then it is Object store kind of bucket. On ObjectStore kind of buckets, we donot need to support FS API's like listStatus. And from reading code I think to skip after Stripping "/" at end is as recursive false, We donot need to print 2 level deeper child. As the issue here is for OBS, we donot create intermediate dirs that is the reason this logic of search with the below code is failing to find in cache (As in cache we donot find any intermediate dirs). If we want to fix this for OBS buckets, we should have similar logic getChild like here(https://github.com/apache/ozone/blob/master/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java#L2411)
Or another way to fix the test is listKeys. Which is an Object Store API.
String remainingKey = StringUtils.stripEnd(cacheKey.substring(
startCacheKey.length()), OZONE_URI_DELIMITER);
// For non-recursive, the remaining part of key can't have '/'
if (remainingKey.contains(OZONE_URI_DELIMITER)) {
continue;
cc @rakeshadr @smengcl for any comments.
another way to fix the test is listKeys.
@bharatviswa504 I'm ok with using listKeys(), but the name of the test that is failing is: testListStatusWithIntermediateDir
so it seems a bit weird not to use listStatus(). If it is not correct to use listStatus() when enabledFileSystemPaths is false, maybe the test shouldn't run in that case, and only run when enabledFileSystemPaths is true?
Yes, the listStatus makes sense when enabledFileSystemPaths to true. As said, if we want to go ahead and fix it, I am even okay with it. cc @rakeshadr for comments.
@rakeshadr @bharatviswa504 jira ticket here:
https://issues.apache.org/jira/browse/HDDS-5877
I merge from github looks it caused CI issue.
@GeorgeJahad When you get a chance please fix CI. TIA.
|
GITHUB_ARCHIVE
|
package usecase
import (
"aws-mahjong/game"
"aws-mahjong/repository"
"aws-mahjong/server/event"
"errors"
"fmt"
"sort"
socketio "github.com/googollee/go-socket.io"
)
var (
RoomAlraedyTakenErr = errors.New("room already taken")
RoomNotFound = errors.New("room is not found")
RoomReachMaxMember = errors.New("room already fulled")
)
type RoomUsecase interface {
Rooms() []*RoomInfo
Room(roomName string) (*RoomInfo, error)
NewRoomStatus(roomName string, payload string)
CreateRoom(s socketio.Conn, username string, roomName string, roomCapacity int) error
JoinRoom(s socketio.Conn, username string, roomName string) error
LeaveRoom(s socketio.Conn, roomName string) error
LeaveAllRoom(s socketio.Conn) error
}
type RoomUsecaseImpl struct {
gameRepo repository.GameRepository
roomRepo *repository.RoomRepository
}
func NewRoomUsecase(roomRepo *repository.RoomRepository, gameRepo repository.GameRepository) RoomUsecase {
return &RoomUsecaseImpl{
gameRepo: gameRepo,
roomRepo: roomRepo,
}
}
type RoomInfo struct {
Name string
Len int
Capacity int
}
func (u *RoomUsecaseImpl) Rooms() []*RoomInfo {
rooms := []*RoomInfo{}
roomNames := u.roomRepo.Rooms()
sort.Slice(roomNames, func(i int, j int) bool { return roomNames[i] < roomNames[j] })
for _, roomName := range roomNames {
r, err := u.Room(roomName)
if err != nil {
continue
}
rooms = append(rooms, r)
}
return rooms
}
func (u *RoomUsecaseImpl) Room(roomName string) (*RoomInfo, error) {
g, err := u.gameRepo.Find(roomName)
if err != nil {
return nil, RoomNotFound
}
foundRoom := &RoomInfo{
Name: roomName,
Len: u.roomRepo.RoomLen(roomName),
Capacity: g.Capacity(),
}
return foundRoom, nil
}
func (u *RoomUsecaseImpl) NewRoomStatus(roomName string, payload string) {
u.roomRepo.BroadcastToRoom(roomName, event.NewRoomStatus, payload)
}
func (u *RoomUsecaseImpl) CreateRoom(s socketio.Conn, username string, roomName string, roomCapacity int) error {
if u.roomRepo.RoomLen(roomName) != 0 {
return RoomAlraedyTakenErr
}
user := &game.User{ID: s.ID(), Name: username}
newGame, err := game.NewGame(roomCapacity, user)
if err != nil {
return err
}
err = u.gameRepo.Add(roomName, newGame)
if err != nil {
return err
}
u.roomRepo.JoinRoom(s, roomName)
return nil
}
func (u *RoomUsecaseImpl) JoinRoom(s socketio.Conn, username string, roomName string) error {
if u.roomRepo.RoomLen(roomName) == 0 {
return RoomNotFound
}
roomGame, err := u.gameRepo.Find(roomName)
if err != nil {
return RoomNotFound
}
user := &game.User{ID: s.ID(), Name: username}
err = roomGame.AddUser(user)
if err != nil {
return err
}
if u.roomRepo.RoomLen(roomName) >= roomGame.Capacity() {
return RoomReachMaxMember
}
u.roomRepo.JoinRoom(s, roomName)
if u.roomRepo.RoomLen(roomName) == roomGame.Capacity() {
err = roomGame.GameStart()
if err != nil {
fmt.Println(err)
return err
}
u.roomRepo.BroadcastToRoom(roomName, event.GameStart, "")
// hide from usecase
err = roomGame.Board().TurnPlayerTsumo()
if err != nil {
fmt.Println(err)
return err
}
newGameStatus(u.roomRepo, roomName, roomGame)
}
return nil
}
func (u *RoomUsecaseImpl) LeaveRoom(s socketio.Conn, roomName string) error {
roomGame, err := u.gameRepo.Find(roomName)
if err != nil {
return RoomNotFound
}
if u.roomRepo.RoomLen(roomName) == 1 {
// last one person leave
err = u.gameRepo.Remove(roomName)
if err != nil {
return err
}
}
if roomGame.Board() == nil {
// game not started
user := &game.User{ID: s.ID()}
err = roomGame.RemoveUser(user)
} else {
// game started
err = u.gameRepo.Remove(roomName)
}
if err != nil {
return err
}
u.roomRepo.LeaveRoom(s, roomName)
return err
}
func (u *RoomUsecaseImpl) LeaveAllRoom(s socketio.Conn) error {
for _, roomName := range s.Rooms() {
u.roomRepo.LeaveRoom(s, roomName)
err := u.gameRepo.Remove(roomName)
if err != nil {
return err
}
}
return nil
}
|
STACK_EDU
|
Type of Online Game
In this globalization era, Internet has become something phenomenal. Everyone uses and needs internet. Internet has become something important in our life. We can find the use of internet in almost every part of our life. Everything and everyone is connected each other. One of the most common uses of internet that can be found is online game. Basically, online game is game that played through the internet. Nowadays, you don’t have to play game all alone or with very limited person. However, with being online, you can connect the game to server and play with other players around the globe. Online game can be played in almost every device and mobile device such as PC, Laptop, Smartphone, and Tablet. There are many games that have been designed to be played online. Those games are classified to some types of online game. We are going to discuss some popular types of online game that have been developed.
First-Person Shooter (FPS)
First-Person Shooter is one of the most popular types of online game. In First-Person Shooter, the player will play as a first person against other players in the head-to-head battle. With playing online, many players can get involved to the games. Each player will represent its point of view in the battle. There are some popular first-person shooter game that have been developed and play worldwide such Counter-Strike, Call of Duty and Point Blank.
Role-Playing Game (RPG)
Role-Playing game is a type of online game that allows the player to play a role in a game. Commonly, this type of game comes with an adventures and story line. Each player will have some abilities that can be used to attack and defend themselves in the battle. By playing online, you can meet other player’s roles in the game. There are some popular RPG that has been developed such as Final Fantasy Series and Soul Blazer.
Real-Time Strategy Game (RTS)
Real-Time Strategy game is a type of online game that requires some strategy to do and win the game. It also played in the real time. As online game, real-time strategy game can be played with other players around the globe. Commonly, each player will attack and challenge other player’s strategies. There are some popular games that have been developed such as Age of Empires and Dawn of War.
Multiplayer Online Battle Arena (MOBA)
Multiplayer Online Battle Arena is also considered as one of the most popular and the most successful types of online game in this era. In this type of online game, you will play with other players around the globe in the battle arena. Commonly, in the battle arena, you will be part of team against other teams. It can be 3 VS 3 or 5 VS 5. Dota is one of the most successful and popular MOBA games that has been developed and played worldwide. There are also other games such as Warcraft and Leagues of Legend.
Massively Multiplayer Online Game (MMO)
Basically, Massively Multiplayer Online Game is a type of game that allows hundreds or thousands of players to player in the same game and in the same time. Commonly, this type of online game can be played in some different style such as in RPG (MMORPG), RTS (MMORTS), FPS (MMOFPS) and other styles.
Online Console Games
With the massive development of technology and internet, console game also can be connected to the internet. It allows console game to play some online game. By being online, you can play some online game using console and interact with other players in the game. XBOX, PlayStation, and Nitendo are some console games that have online feature.
|
OPCFW_CODE
|
I don't know if it's hit the HI forum community but a bunch of Bladeforums members have received it already. I think a lot of the HI crowd don't know much about computers and I think a lot don't go to other forums at this website, so I figured I'd better post a link here....
I've posted some more about this virus and about viruses in general in the thread linked to above -- it's a good idea to read that thread again if you're one of those guys who don't know a lot about computers. (I am not mentioning any names here -- you know who you are.)
One thing I want to emphasize is this and other viruses that spread by email usually come from people you know. When a virus infects a computer the first thing it usually does is send itself to everybody in the address book. People you know have your address in their address books. A lot of people seem to think they're safe if they just don't run attachments unless they know the person who sent it to them -- NO!
Run an antivirus program (I use Mcafee but there are others) and update it frequently and if you run any Microsoft software check their website for critical updates frequently.
This particular virus seems to be more embarrassing than harmful, but they aren't all that benign....
I actually got two of them this morning. I don't know if it came from a Bladeforums person, or any of the other folks that have my e-mail addy. I'm assuming that it sends itself off to those in your address book like the others...
Fortunately, I'm a Mac person I haven't had a virus problem yet (knocking on my hea... I mean wood), but I've seen ones like this shut down an entire university (remember the infamous "I Love You").
Just remember everyone, if an e-mail contains an attachment that isn't a .jpg or .gif, don't open it unless you were expecting it, especially if you don't know the sender. Even IF you know the sender, you might want to check with them before opening the file.
Before I get you too scared, though, let me explain a little more. Viruses mail themselves with a generic message like "Take a look at this file!" If you get email from a friend that's a personal message that only your friend would have written, if it mentions mutual interests like knives, anything a virus wouldn't know, a virus couldn't have sent that.
Whenever I send a program file in email I always write a message with it that's identifiably from me and not the kind of generic cover email that a virus could send.
This is from mail sent via my mother from her office's Resident Computer Tech Specialist (pronounced Computer Geek)
I saw Cougar's post and thought I should help with a few more details.
Ladies and Gentlemen:
Please be aware of the following warning of malicious code being circulated through e-mail: W32.Sircam.Worm@mm
Large scale e-mailing: The worm embed random documents from the infected PC to itself
Deletes files: 1 in 20 chance of deleting all files and directories on C:. Only occurs on systems using D/M/Y as the date format
Degrades performance: 1 in 33 chance of filling all remaining space on the hard disk by adding text to the file c:\recycled\sircam.sys at each startup
Releases confidential info: It will export a random document from the hard drive by appending it to the body of the worm
This worm arrives as an email message with the following content:
Subject: The subject of the email will be random, and will be the same as the file name of the attachment in the email.
Message: The message body will be semi-random, but will always contain one of the following two lines (either English or Spanish) as the first and last sentences of the message.
First line: Hola como estas ?
Last line: Nos vemos pronto, gracias.
First line: Hi! How are you?
Last line: See you later. Thanks
Between these two sentences, some of the following text may appear:
Te mando este archivo para que me des tu punto de vista
Espero me puedas ayudar con el archivo que te mando
Espero te guste este archivo que te mando
Este es el archivo con la informaci=n que me pediste
I send you this file in order to have your advice
I hope you can help me with this file that I send
I hope you like the file that I sendo you
This is the file with the information that you ask for
The file names under which this threat have been submitted are:
Tech Specs and Financials.doc.com
As always, if you are concerned that you may have a virus, worm or other piece of malicious code on your computer, please call MIS immediately so that we may determine what you have (if anything) and promptly take any action necessary.
Michael E. Ferguson
Manager, Information Systems
"First, your place, and then the world's"
Berman Wolfe Rennert Vogel & Mandler, P.A.
100 SE 2nd., Street, Suite 3500
Miami, Fl., 33131
Direct Line: 305.423.3408
|
OPCFW_CODE
|
I have found this indicator on the net, it is not mine, and when I am trying to attach in MT4 chart it doesn't work!
When I open it in MT4 (latest release) editor it returns some error about declaring last variable, but I am not able to fix it.
Can you help me fix it?
Thank you in...
If you are a newbie, you better watch these videos about indicators. They are instructional and easy to understand. These videos will show you how to install these indicators. and also how to apply them to your charts. https://www.mql5.com/go?link=https://www.profiforex.com/education/professional/...
Please advise me what is missing from my charts.Which indicator am I missing?
I thought I would show you a robust and reliable indicators.It is custom mt4 indicators , it is a combination of types of 7 indicators as follows
1)5 moving averages on multiple time frames
I am looking for DT oscillator indicator which mixed RSI& Stoch written by Robert miner.
I already have it in Meatrader veriosn 4 but I am looking for it in MT5.
Or can some one help me to convert this indicator from Mt4 to MT5?
I like the FFCalendar indicator and had a difficult time finding updated source that worked on new Metatrader builds.
This is MT4 compatible. I'm not sure about MT5 but the cohesion of scripting might have it working there too.
If I'm posting this in the wrong section let me know.
i have a problem to solve and would be thankful for help.
Is there any way to use UDP for MT4 alerts?
In the moment my indi alerts by email, but i need a function in MT4, that sends this email alert triggered by an normal indicator as...
I have been using the attached mql4 for a long time, but when Meta-trader updated to Build 600 the indicator stopped from working & I hope from on of our coders pro to return the indicator to work again.
Who would be kind enough to add a simple sound and email alert to this 2EMA color indicator. It is quite effective.
I need it to send an email and sound alert at the close of the first candle and at the end of the first(beginning of the second double dot color change).
I've made an indicator which presents areas which are used to create ichimoku line.
But I have a problem.
I'm not a programist, so the indicator has a problem with refreshing shapes which are based on current price.
Anyone could help improve it?
I'm experimenting on indicators and I wrote a simple SMA using iMA function, it works well just for program fixed applied_price (PRICE_CLOSE).
//+------------------------------------------------------------------+//| SMA_03.mq5 |//|...
Hello forum, good day.
I'm using a Custom Indicator in the Strategy Tester using visual mode to see the behaviour, and after several days pass, suddenly it stops showing the indicator buffers and the Expert Advisor stops working as well because I'm using the iCustom() function to get the data from...
Why is the description of the arrow not shown?
ObjectCreate(0,"Arrow",OBJ_ARROW_UP,0,Time,Low,0,0,0,0); ObjectSetText("Arrow","description",12,"Times New Roman",Green);
F8 -> show objects description is activated.
Hello, time ago I downloaded from MQL4 the attached sample and it was working fine. The sample generates a list and sorts it using different criteria. The sample overloads COMPARE function to generate custom sorting. Now the overload is not more working and the sample uses the base class sort. I...
I want to make two changes to an existing indicator "macd_histogram" and I would appreciate your kind help:
1- I want to show the date in alert window in "hh:mm:ss" format. By default the indicator shows the date of alerts in alert window in "yyyy:mm:dd, hh:mm:ss" format. (pls see...
I asked a question about avoiding repetition of similar alerts in alert window and no one replied.
Now, I have another question:
Is there any way to write the alert part of an indicator so that it shows each alert once and the indicator remembers that it showed that alert before...
Hello, I'm using this indicator: https://www.mql5.com/en/code/1725
I am trying to modify the values returned.
I want that indicator returns differents values when it is in differents positions. This example shows 2 stocks, one working ok, and other return just 1 value.
If someone could help me,...
I use an average daily range indicator that works great, it shows 5, 10 and 20 day ranges. However, what I need is to be able to see the average daily range for a set period, for example Jan to Feb last year. (or the previous 20 days from a set period rather than showing the previous 20 days from...
I'm using an MT4 platform and everytime I try to buy an indicator from the market, the journal tab at the bottom of my screen says " MQL4 Market: failed parsing info about purchasing a product 'XXXXX'
Any ideas on what's happening?
thanks in advance,
I am looking for a way either to omit or replacing repetitive alerts (created by different indicators in message alert window) by newer ones. In other words, I want to know is there any way to clean message alert window other than restarting the MT5.
For example, on a single chart I...
I have an indicator that doesn't work at the MT5 startup. It doesn't show nothing in the chart.
But, it works perfectly when I just add it to a chart, or when I switch timeframes.
What should be happening?
|
OPCFW_CODE
|
Have you ever seen a dead snake? Was it moving?
Well, although surprising, it’s not uncommon.
Snakes have a reputation for being able to move for hours after dying. They can even bite and kill in the state of death.
The question is: Why do snakes move after they are dead?
In this article, we’ll understand why snakes move even after dying. We’ll also touch upon the causes of death in snakes. These details will help you take better care of your charming slitherer and understand more about him.
Snakes can twitch or move after they’ve died due to leftover nerve signals. The ions in a snake’s nerves are active and will respond to stimuli like being touched or moved. Their venom is still deadly and can even kill you.
But why did he die? And how should I handle my dead snake’s body?
Let’s get you all the answers.
Table of Contents
Why Do Snakes Move After Their Death?
Although it may seem impossible for a snake to move after death, there is substantial scientific research behind the occurrence.
Like how a human body may twitch or jerk after death, a snake may do the same, but in a more pronounced and exaggerated way.
This is mainly due in part to the composition and physiological make-up of a snake.
Low Energy and Oxygen Needs
Since snakes are cold-blooded creatures, they absorb heat from the outside environment.
Due to this fact, snakes require minimal energy and oxygen levels since they aren’t using any for heat production.
This is contrary to warm-blooded creatures, who need a large amount of energy and oxygen to regulate their internal temperature.
While it may seem contradictory, these low levels of energy and oxygen in snakes are the main reason why these creatures continue to act “alive” after they are dead.
Since they didn’t need much of it in the first place, when snakes are killed and cut off from energy and oxygen, their cells don’t immediately die.
This prolongs the bodily functions of snakes, allowing them to still move after death.
On the contrary, the cells of warm-blooded animals start dying immediately after being cut off from oxygen.
In addition to the bodily functions, the nerve endings of snakes continue to work correctly after death.
Even if the snake has been dead for a few hours, the ions in the snake’s nerves are still active and will respond to stimuli.
If a dead snake is touched or moved, the nerves will react and send electrical impulses throughout the body, triggering muscle movements.
Strong Bite Reflexes
While these nerve endings trigger the bite reflex, venomous snakes have been known to be particularly “nippy” after death.
This is because venomous snakes, such as cobras and rattlesnakes, have depended on this bite reflex for survival.
The snake’s muscle and nerve memories will continue to be active, even after a beheading.
The venom is still poisonous after death, so you need to practice caution around a dead cobra or rattlesnake.
If your pet snake recently passed and you’re worried it might come back to bite you, take a deep breath.
First, let’s determine why your pet may have died, and then we will move onto safely handling the body.
Why Did Your Snake Die?
Owning a pet snake is an extremely gratifying experience. These fascinating creatures can live up to 15-20 years long if you take care of them.
Although a snake can die for several reasons, including age limit, disease, and environmental toxins, sometimes it is because of human error.
If you feel this may be the case, you should determine if these two significant causes of death sound applicable and what to do with your snake’s body.
As a cold-blooded creature, snakes require very precise climates and temperature control.
Snakes typically prefer a hot and humid environment, allowing them to absorb heat and moisture from their surroundings.
Temperatures lower than 65° degrees Fahrenheit (18.3° C) for an extended period may cause death.
On the other hand, temperatures over 100° degrees Fahrenheit (38° C) also cause deadly issues.
In addition to the proper environment, snakes require proper nutrition for survival as well.
Typically, a snake will not eat much, since it doesn’t need the energy to maintain an internal temperature.
However, all snakes are carnivores.
Depending on the species, they need certain types of food to survive.
It’s important not to over or underfeed your snake, which will lead to digestive complications and possibly death.
Safely Handling The Deceased
Since you now have some insight into a possible cause of death, it’s time to dispose of your pet’s body.
While it may be an uncomfortable task, we have learned these creatures tend to bite after death.
The best thing to do is to wait a few hours before moving the snake’s body.
This will give the cells in the snake’s body time to die, avoiding triggering the snake’s reflexes.
Once this time has passed, you should either arrange for the body to be cremated or bury it in a non-biodegradable casket, such as this one on Amazon.
Either of these methods is a safe way of handling your pet snake and providing a proper send-off.
There is also the option of composting your pet snake, and while this may seem disrespectful and disturbing, it is a very organic and natural way of parting goodbye.
However, it will take careful planning to plot out an area for the compost and ensuring your pet snake doesn’t become exposed over time.
Snake Charms Beyond the Grave
Snakes are exciting and versatile creatures.
They have a unique, cold-blooded composition, which allows them to survive on very low energy and oxygen levels.
However, it is these same low levels which allow the snake to remain animated after death.
After doing your research, now you know this strange occurrence is merely a combination of a snake’s hyperactive nerve endings and resilient reflexes.
Snakes will not come back to life after death, but you should still be vigilant around a dead snake’s body, especially when it comes to venomous snakes.
|
OPCFW_CODE
|
The Sixth Chakra
Description: Amoxicillin For Sale, The sixth chakra is about your intuition, your imagination and being able to “see” clearly. The color of this chakra is indigo and the element is light, Amoxicillin alternatives. Buy generic Amoxicillin, The six chakra rules your imagination. To manifest something you have to imagine it first, is Amoxicillin safe. After Amoxicillin, You need to see it in your mind’s eye before you can create it. What are you wanting to manifest, Amoxicillin For Sale. Put your imagination to work and it will support the process of making it come true.
Following your intuition is a big part of the function of the sixth chakra, kjøpe Amoxicillin på nett, köpa Amoxicillin online. Discount Amoxicillin, Are you listening to your higher-self. To connect to your intuition, order Amoxicillin from mexican pharmacy, Amoxicillin australia, uk, us, usa, it is important to slow down. Your mind can be racing and filled with thoughts, Amoxicillin canada, mexico, india. Amoxicillin For Sale, Many of your thoughts can be influenced by other people’s opinions and thoughts. Amoxicillin duration, Having a healthy sixth chakra is when you know what is right. It is when you have a certain knowing without any question, Amoxicillin dose. Amoxicillin from mexico, It is when you can see life situations without illusions or delusions. It is when you operate from a place of wisdom, Amoxicillin recreational. That is a powerful sixth chakra.
Human Talent: Intuition, Amoxicillin For Sale. What is Amoxicillin, It is the ability to follow your own inner guidance.
Negative Expression: Illusions, delusions, Amoxicillin interactions, Amoxicillin street price, seeing the world in self-serving ways.
Positive Expression: .Using your inner guidance, using your imagination for positive creations, purchase Amoxicillin. Amoxicillin overnight, Seeing challenging life situations from a place of wisdom is a positive expression of the sixth chakra.
Lesson: The Sixth Chakra teaches you to follow your inner guidance.
Exercise: Tune in this week and listen to what your inner guidance is telling you. If you get an intuitive hit about something or someone, Amoxicillin results, Amoxicillin price, listen to it. Don’t second guess yourself, Amoxicillin brand name. Amoxicillin For Sale, You know, when you listen to your inner guidance, exactly what is right for you.
Another exercise would be to see a challenging situation from a higher point of view. Taking Amoxicillin, Don’t let that situation get you down. Look at it differently and in a way that allows you to expand and grow.
Inquiry: How is your sixth chakra expressing itself in your day to day life, is Amoxicillin addictive. Amoxicillin steet value, How intuitive do you think you are. Do you take time to stop and listen to intuitive insights that you are receiving and follow them or are you led by what others are telling you to do?
I would love to hear back from you!
Help me to help others feel connected to their bodies by sharing this information.
Thanks and much love!
Copyright © February 2014 Anna-Thea
, Amoxicillin pharmacy
. Amoxicillin images. Amoxicillin over the counter. Canada, mexico, india. Amoxicillin photos. Amoxicillin used for. Rx free Amoxicillin. Amoxicillin wiki. Buy Amoxicillin online no prescription. Where can i find Amoxicillin online. Buy Amoxicillin without prescription.
Similar posts: Bactrim For Sale. Buy Diflucan Without Prescription. Buy Zithromax Without Prescription. Online buying Zithromax. Diflucan schedule. Zithromax photos.
Trackbacks from: Amoxicillin For Sale. Amoxicillin For Sale. Amoxicillin For Sale. Buy Amoxicillin from mexico. Amoxicillin coupon. Amoxicillin blogs.
|
OPCFW_CODE
|
Best way to add a new column with an initial (but not default) value?
I need to add a new column to a MS SQL 2005 database with an initial value. However, I do NOT want to automatically create a default constraint on this column. At the point in time that I add the column the default/initial value is correct, but this can change over time. So, future access to the table MUST specify a value instead of accepting a default.
The best I could come up with is:
ALTER TABLE tbl ADD col INTEGER NULL
UPDATE tbl SET col = 1
ALTER TABLE tbl ALTER COLUMN col INTEGER NOT NULL
This seems a bit inefficient for largish tables (100,000 to 1,000,000 records).
I have experimented with adding the column with a default and then deleting the default constraint. However, I don't know what the name of the default constraint is and would rather not access sysobjects and put in database specific knowledge.
Please, there must be a better way.
I'd ALTER TABLE tbl ADD col INTEGER CONSTRAINT tempname DEFAULT 1 first,, and drop the explicitly named constraint after (presumably within a transaction).
+1. --- @Alex: I am not sure that one can do DDL within a trasactions.
--- @Adrian: do you really care for it being inefficient? You are not doing it every day, are you? Usually I use the way you describe for clarity.
@van, I'm not 100% sure of the limitations of SqlServer'05 in this regard -- PostgreSQL does let you do alter table transactionally (with explicit BEGIN and COMMIT of the transaction, at least). To check if MS also does, SELECT @@TRANCOUNT should tell you (I can't find it clearly spelled out in the docs).
@van & @Alex MS SQL Server does DDL within transactions.
To add the column with a default and then delete the default, you can name the default:
ALTER TABLE tbl ADD col INTEGER NOT NULL CONSTRAINT tbl_temp_default DEFAULT 1
ALTER TABLE tbl drop constraint tbl_temp_default
This filled in the value 1, but leaves the table without a default. Using SQL Server 2008, I ran this and your code, of alter update alter and did not see any noticeable difference on a table of 100,000 small rows. SSMS would not show me the query plans for the alter table statements, so I was not able to compare the resources used between the two methods.
A similar solution to @Alex but a more complete explanation.
Another, maybe more native, way would be:
ALTER TABLE tbl ADD COLUMN col INTEGER NOT NULL DEFAULT 1;
ALTER TABLE tbl ALTER COLUMN col DROP DEFAULT;
I'm not sure how long this function exists, but the PostgreSQL documentation goes back to version 7.1 and for 7.1 it is already described.
DROP DEFAULT is an incorrect syntax in MS SQL.
You can do it in an insert trigger
thats exactly what i think.
btw y do you don't want to do it as constraint
The question is for adding a column to a table with existing rows. Adding a trigger will only affect new rows. Am I missing something?
I think I misunderstood the question. When you say you want to add a column with an initial value, do you mean that you want to have all existing rows have this value without having to update them? In this case, you could add two columns. One is a nullable column and the other is a computed column. The computed column will return the default value if the other column is null, otherwise it will return the other column.
This is a lot of complexity going forward. I would do rather do it like you did in your example.
If you add a default constraint when creating the table, you won't know what it is called. However, if you add a constraint with ALTER TABLE, you must name the constraint. In this case, you would be able to ALTER TABLE DROP CONSTRAINT (This applies to T-SQL, not sure about other databases.)
However, this would require you to CREATE TABLE with NULL column, ALTER TABLE to add the constraint, make the column NOT NULL, and finally DROP CONSTRAINT.
I don't believe an insert trigger would work as someone else mentioned, because your rows are already added.
I think the way you describe may, in fact, be the most efficient and elegant solution.
|
STACK_EXCHANGE
|
The code accompanying the slides Enterprise Build and Test in the Cloud is available at the appfuse-selenium github page.
Provides a Selenium test environment for Maven projects, Appfuse as an example. Allows to run Selenium tests as part of the Maven build, either in an specific container and browser or launching the tests in parallel in several browsers at the same time.
For more information check my slides on Enterprise Build and Test in the Cloud and the blog entries Enterprise Build and Test in the Cloud with Selenium I and Enterprise Build and Test in the Cloud with Selenium II.
By default it’s configured to launch 3 browsers in parallel, Internet Explorer, Firefox 2 and 3
Check src/test/resources/testng.xml for the configuration.
In the single browser option you could do
Testing in Jetty 6 and Firefox
Testing in Internet Explorer
Testing with any browser
Start the server (no tests
running, good for recording tests)
Here you have the slides from my talks at ApacheCON
Enterprise Build and Test in the Cloud
Building and testing software can be a time and resource consuming task. Cloud computing / on demand services like Amazon EC2 allow a cost-effective way to scale applications, and applied to building and testing software can reduce the time needed to find and correct problems, meaning a reduction also in time and costs. Properly configuring your build tools (Maven, Ant,…), continuous integration servers (Continuum, Cruise Control,…), and testing tools (TestNG, Selenium,…) can allow you to run all the build/testing process in a cloud environment, simulating high load environments, distributing long running tests to reduce their execution time, using different environments for client or server applications,… and in the case of on-demand services like Amazon EC2, pay only for the time you use it.
In this presentation we will introduce a development process and architecture using popular open source tools for the build and test process such as Apache Maven or Ant for building, Apache Continuum as continuous integration server, TestNG and Selenium for testing, and how to configure them to achieve the best results and performance in several typical use cases (long running testing processes, different client platforms,…) by using he Amazon Elastic Computing Cloud EC2, and therefore reducing time and costs compared to other solutions.
Eclipse IAM, Maven integration for Eclipse
Eclipse IAM (Eclipse Integration for Apache Maven), formerly “Q for Eclipse”, is an Open Source project that integrates Apache Maven and the Eclipse IDE for faster, more agile, and more productive development. The plugin allows you to run Maven from the IDE, import existing Maven projects without intermediate steps, create new projects using Maven archetypes, synchronize dependency management, search artifact repositories for dependencies that are automatically downloaded, view a graph of dependencies and more! Join us to discover how to take advantage of all these features, as well as how they can help you to improve your development process.
|
OPCFW_CODE
|
If there's one thing that divides web developers, it's styling. A part of this has to do with the different requirements of websites and web applications
Working on web UIs for over a decade, I have realized there are two significant challenges in frontend engineering: understanding the state and styling its representation. Unidirectional data flow has made managing state much easier, but styling components is still painful.
To improve the situation, I started JSS back in 2014 and haven't stopped learning and developing the project since. Currently, I am working at Chatgrape where we are building a sophisticated client using NLP and deep services integration. All CSS is managed using JSS. Also, I try to talk at conferences from time to time, even if I know I suck at this haha.
It is important to understand though that not every product has all of the issues that these features address, so not every developer can relate to them or even confirm that they are real. If you don't get it - don't worry, the time for you just hasn't come yet.
One general truth you could take away from this is that JSS is a more powerful abstraction over CSS, which is good and bad at the same time. Less powerful abstractions may be of benefit for less experienced developers because less can be done incorrectly, but they certainly have limitations.
The essential libraries in JSS are core, React-JSS, and Styled-JSS. Low level and library-agnostic, the core is responsible for compilation and rendering of a stylesheet.
The core is used by both React-JSS and Styled-JSS internally. React-JSS is a higher-order component providing an interface for React. Styled-JSS is an alternative interface for React which implements the styled primitives factory.
Styled primitive or styled component is a component which has initial styles applied when created. There is no need to provide class names when you use it. It has been very actively promoted by the Styled Components library and is worth looking into as an alternative to other interfaces. Our implementation, in fact, combines both styled primitives and a classes map in one solid interface.
The general process goes like this:
.attachmethod, styles are compiled to a CSS string and injected into the DOM using a
Want to develop a website or re-design using CSS Development? We build a website and we implemented CSS successfully if you are planning to **[Hire CSS Developer](https://hourlydeveloper.io/hire-dedicated-css-developer/ "Hire CSS Developer")**...
The other day one of our students asked about possibility of having a CSS cheatsheet to help to decide on the best suited approach when doing this or that layout.
CSS is seen as an impediment in web development for many of us. Most of the time it looks like even when you follow the rules and everything seems clear, it still doesn’t work the way you want it to.
This CSS Cut Out Effect is Guaranteed to Blow Your Mind. This effect is so cool and just fun to see. What it comes down to is having a background image show through the text.
Learn how to create a smooth animation using the CSS transform translate3d prop, as well as why we use cubic-bezier transition timing function and its benefits.
|
OPCFW_CODE
|
creation react component for apollo client, simple model with object parameters
I know that react component can be created in two ways, with "extends React.Component" or just object in the component below.
My question is , in second way how it's work the parameters, in the example "const ChannelsList = ({ data: {loading, error, channels }})"
if I change the namme 'channel' for xchannels in the 'data' object and then i use to try this new change on "xchannels.map(...)...)" , the page give error:
ChannelsListWithData.js:24 Uncaught TypeError: Cannot read property 'map' of undefined
at ChannelsList (ChannelsListWithData.js:24)
at StatelessComponent.render (ReactCompositeComponent.js:44)
at ReactCompositeComponent.js:795
at measureLifeCyclePerf (ReactCompositeComponent.js:75)
at ReactCompositeComponentWrapper._renderValidatedComponentWithoutOwnerOrContext (ReactCompositeComponent.js:794)
at ReactCompositeComponentWrapper._renderValidatedComponent (ReactCompositeComponent.js:821)
at ReactCompositeComponentWrapper._updateRenderedComponent (ReactCompositeComponent.js:745)
at ReactCompositeComponentWrapper._performComponentUpdate (ReactCompositeComponent.js:723)
at ReactCompositeComponentWrapper.updateComponent (ReactCompositeComponent.js:644)
at ReactCompositeComponentWrapper.receiveComponent (ReactCompositeComponent.js:546)
ChannelsList @ ChannelsListWithData.js:24
StatelessComponent.render @ ReactCompositeComponent.js:44
(anonymous) @ ReactCompositeComponent.js:795
measureLifeCyclePerf @ ReactCompositeComponent.js:75
_renderValidatedComponentWithoutOwnerOrContext @ ReactCompositeComponent.js:794
....
It's like the function dont accept any variable name. why ?
how it works in react that object, and the name of parameters and the name, there is a link to understand?. Classic programm language like php is simple: function (parameters)
the component:
import React from 'react';
import {
Link
} from 'react-router-dom'
import {
gql,
graphql,
} from 'react-apollo';
import AddChannel from './AddChannel';
const ChannelsList = ({ data: {loading, error, channels }}) => {
if (loading) {
return <p style={{color:"red"}}>Loading ...</p>;
}
if (error) {
return <p>{error.message}</p>;
}
return (
<div className="channelsList">
<AddChannel />
{ channels.map( ch =>
(<div key={ch.id} className={'channel ' + (ch.id < 0 ? 'optimistic' : '')}>
<Link to={ch.id < 0 ? `/` : `channel/${ch.id}`}>
{ch.name}
</Link>
</div>)
)}
</div>
);
};
export const channelsListQuery = gql`
query ChannelsListQuery {
channels {
id
name
}
}
`;
export default graphql(channelsListQuery, {
options: { pollInterval: 10000 },
})(ChannelsList);
The component is from this tutorial:
https://dev-blog.apollodata.com/tutorial-graphql-input-types-and-client-caching-f11fa0421cfd
There's nothing wrong with it.
Classic programming language (like JavaScript) works fine as well.
Side comparison with php, the code works the same :)
In this example, parameters is an object that has both data, and match properties.
You can easily translate it to:
const ChannelDetails(parameters) {
const { data, match } = parameters
...
// Classic way (like php I presume)
const data = parameters.data
const channel = parameter.data.channel
}
Going even further, I recommend you read about object destructuring in JavaScript.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment
You have also asked: 'how it's work the parameters.. ?'
These parameters are the component's props you send when you render it.
For example:
<ChannelDetails
data={{ channel: [], loading: false, error: '' }}
match={"value here"}
/>
hello, thanks for your explanation about the function . i've understood , i've updated my question because i made a mistake with the component, basically is the same, only i have the problems if i change the variable name 'channel' for another, please look my updated question. it's a react behavioir our apollo?
try to replace ChannelDetails({ data: {loading, error, channels }}) to ChannelDetails(props). Then console.log(props) and check the output. Maybe you cannot change the property name.
if i change any name of variable in 'loading, error, channels' got the error too...in theory i can name variables how i want ?
So I guess you can't change their names from ChannelDetails component. Who is the parent component? Check how it is passing props to ChannelDetails
i understand now, i can't change name variables because is not the pure function like php function (parameter1, parameter 2..) but a object that brings the names parameters hardcode, thanks !
|
STACK_EXCHANGE
|
/*
* decaffeinate suggestions:
* DS102: Remove unnecessary code created because of implicit returns
* Full docs: https://github.com/decaffeinate/decaffeinate/blob/master/docs/suggestions.md
*/
const mesa = require('../lib/mesa');
module.exports = {
'user-added method is called with correct `this` value'(test) {
const query = Object.create(mesa);
query.attached = function() {
test.equal(this, query);
return test.done();
};
return query.attached();
},
'user-added method is copied and can be chained'(test) {
const table = mesa.table('user');
let thisInFoo = null;
table.foo = function() {
thisInFoo = this;
return this;
};
const rightBeforeCallToFoo = table.allow('a', 'b').where({c: 3});
rightBeforeCallToFoo.foo().order('id DESC');
test.equal(rightBeforeCallToFoo, thisInFoo);
return test.done();
},
'the correct properties (only own) are copied'(test) {
test.deepEqual(Object.getOwnPropertyNames(mesa), [
'_queueBeforeEachInsert',
'_queueBeforeEachUpdate',
]);
const userTable = mesa.table('user');
test.deepEqual(Object.getOwnPropertyNames(userTable), [
'_queueBeforeEachInsert',
'_queueBeforeEachUpdate',
'_mohair',
]);
userTable.userAddedMethod = function() {};
test.deepEqual(Object.getOwnPropertyNames(userTable.debug(console.log)), [
'_queueBeforeEachInsert',
'_queueBeforeEachUpdate',
'_mohair',
'userAddedMethod',
'_debug',
]);
return test.done();
},
'queueBeforeEachInsert is called with correct `this` value'(test) {
test.expect(1);
const mockConnection = {
query(sql, params, cb) {
return cb(null, {rows: []});
},
};
const query = mesa
.table('user')
.setConnection(mockConnection)
.allow(['a'])
.queueBeforeEachInsert(function(data) {
test.equal(this, query);
return data;
});
return query.insert({a: 1}).then(() => test.done());
},
'queueBeforeEachInsert is called with correct default args and `this` value'(
test,
) {
test.expect(4);
const mockConnection = {
query(sql, params, cb) {
return cb(null, {rows: []});
},
};
const query = mesa
.table('user')
.setConnection(mockConnection)
.allow(['a'])
.queueBeforeEachInsert(
function(data, arg2, arg3, arg4) {
test.equal(this, query);
test.deepEqual(arg2, {a: 1});
test.equal(arg3, 'arg3');
test.equal(arg4, 'arg4');
return data;
},
'arg3',
'arg4',
);
return query.insert({a: 1}).then(() => test.done());
},
'.call(f) is called with correct default args and `this` value'(test) {
test.expect(2);
const f = function(x) {
test.equal(this, mesa);
test.equal(x, 'x');
return this;
};
mesa.call(f, 'x');
return test.done();
},
'.when(...) with false'(test) {
const f = () => test.ok(false);
mesa.when(1 === 2, f);
return test.done();
},
'.when(...) with true is called with correct default args and `this` value'(
test,
) {
test.expect(3);
const f = function(arg1, arg2) {
test.equal(this, mesa);
test.equal(arg1, 1);
test.equal(arg2, 2);
return this;
};
mesa.when(2 === 2, f, 1, 2);
return test.done();
},
'.when(...) with true and .where()'(test) {
const query = mesa.when(true, mesa.where, 'a BETWEEN ? AND ?', 1, 10);
test.equal(query.sql(), 'SELECT * WHERE a BETWEEN ? AND ?');
test.deepEqual(query.params(), [1, 10]);
return test.done();
},
'.each() with empty array'(test) {
const query = mesa.each([], () => test.ok(false));
test.equal(query, mesa);
return test.done();
},
'.each() with object'(test) {
const query = mesa.each({a: 1, b: 2, c: 3}, function(value, key) {
const condition = {};
condition[key] = value;
return this.where(condition);
});
test.equal(
query.sql(),
'SELECT * WHERE ("a" = ?) AND ("b" = ?) AND ("c" = ?)',
);
test.deepEqual(query.params(), [1, 2, 3]);
return test.done();
},
getTable(test) {
test.equal('user', mesa.table('user').getTable());
return test.done();
},
isInstance(test) {
test.ok(mesa.isInstance(mesa));
test.ok(mesa.isInstance(mesa.table('user')));
test.ok(mesa.isInstance(mesa.table('user').where({id: 3})));
test.ok(!mesa.isInstance({}));
return test.done();
},
'the entire mohair interface is exposed and working'(test) {
// TODO
return test.done();
},
};
|
STACK_EDU
|
Last week someone left a comment on the More Hip Librarians post that puts in concrete terms my own view of the alleged "librarian shortage." The person wrote:
"My cranky two cents on the putative librarian shortage:
My wife is a registered nurse. There really is a nursing shortage. If she puts a resume up on the internet, she immediately gets e-mails and phone calls from recruiters wanting her to interview for real jobs, available right now. Any time she wants a raise, she can just get another job. Her pay has been going up several thousand dollars per year, for several years. Experienced Master's level nurses can easily make six figures these days. A nurse can blow into town on Friday morning, and be working by Monday. That's what a 'shortage' looks like. Does any of this sound like the library field?
There is no librarian shortage."
I couldn't agree more. It's a simple matter of economics. If there's a shortage of something in demand, then the price for that thing goes up. If the price of the thing isn't going up, then there isn't a demand. And if there isn't a demand, then there can't be a shortage. Librarian salaries aren't rising, thus there isn't a shortage of librarians. Any idiot should be able to understand this, and yet most of the ALA idiots can't seem to grasp this simple concept.
Now what there definitely seems to be is a shortage of morons willing to spend a couple of years in graduate school so they can go work in the middle of nowhere for $20K/year. But that's not a librarian shortage; it's a moron shortage. I never thought I would be saying this, but we do apparently have a moron shortage in this country, despite all evidence to the contrary.
The idiots I hear talking about how they can't find librarians never seem to make the connection. If you can't find librarians, then you need to pay more money. If you can't afford to pay more money, then the problem isn't a shortage of librarians, the problem is a shortage of good jobs. Libraries seem to want intelligent and capable people to pay for a library degree and then take low paying jobs in undesirable places. Many libraries only think they have a demand for librarians, when they really just have a desire for librarians. For some strange reason, these libraries find that their demand for morons doesn't satisfy their desire for librarians.
But the ALA keeps rolling out the lies and propaganda. One would think that any decent human being would be disgusted to continue telling such lies, but maybe the folks at ALA have been telling the same lies so long they actually believe them.
|
OPCFW_CODE
|
Stratford, Connecticut - OneStream Software LLC
Employment Type: Full-Time
ABOUT THE JOB
We're looking for a Project Manager to join our OneStream Platform Development team. The OneStream Platform contains complex OLAP, multi-dimensional, multi-server, multi-threading, web, and SQL technologies and is specifically designed to solve problems for the Office of the CFO at large corporations.In this position, you will help plan, monitor, and report on multiple OneStream Platform feature releases per year across multiple teams. This position requires effective planning, scheduling, risk assessment, and contingency skills. Travel requirements for this position are quite minimal.
- Work with Development, Quality Assurance, Documentation, and Product Management teams to understand OneStream XF Platform features to properly size efforts and track progress.
- Ensure that all processes are documented and followed, including standard check lists.
- Work with counterparts on OneStream XF Marketplace team to ensure uniform standards are followed.
- Assist in running daily scrum meetings, as well as attendance of requirements and design sessions.
- Reduce lag time related to hand-offs, in the development process, by creating and monitoring dashboards in Team Foundation Server (TFS) related to information needed from stakeholders for coding efforts, creating work items, prioritizing work backlogs, and following up on these processes regularly.
- Participate in other areas of the Agile development process, such as ensuring sprint reviews, retrospectives, and planning sessions are held on a regular basis.
- Work with all parties to ensure that all pre-and-post release tasks are completed in a timely manner.
- Assist management with regular resource planning and status reporting.
QUALITIES OF A SUCCESSFUL CANDIDATE
We are looking for someone who has been around web-based, multi-tiered systems, and has a firm handle on the software development lifecycle, specifically using the Agile methodology. You do not need to be an expert in financial reporting & budgeting systems, databases, programming languages, APIs and the like. However, previous exposure is expected to be successful in this realm.
Formal Education and Certification
- BA/BS Degree or equivalent practical experience, preferably with exposure to software development as well as Information Technology, Accounting or Finance.
Knowledge and Experience
- 5 years of relevant experience, such as project management or Product Management in an enterprise software environment.
- Excellent written, verbal, and interpersonal skills.
- Experience creating and updating project plans.
- Experience working in an Agile development process.
- Skilled using Team Foundation Server (or similar), Office 365 and Microsoft Project.
- Skills that will set this candidate apart:
- Familiarity with accounting terms and systems.
- Experience with CPM domain: Financial modeling, data integration, consolidation, reporting, budgeting, forecasting, and planning.
- Experience with CPM applications is a bonus, such as:
- Oracle Hyperion
- Other CPM solutions
- Excellent listening, verbal, public speaking and written communication skills.
- Able to multi-task.
- Flexible and adaptable.
- Team player.
- Legally authorized to work for any company in the United States without sponsorship.
WHO WE ARE
OneStream Software is a privately held software company created by the same team that invented the leading financial solutions of the last decade. We provide a unified Corporate Performance Management (CPM) platform which enables the enterprise to simplify financial consolidation, reporting, budgeting and forecasting for complex organizations. Our powerful extensibility enables the enterprise to deliver additional analytic solutions without adding any technical complexity. By delivering multiple solutions in one application, we offer increased capabilities for financial reporting and analysis while reducing the risk, complexity and total cost of ownership for our customers. We are driven by our mission statement that every customer must be a reference and success.
We are equally fanatical about our OneStream family members (formally known as employees). We are a team in every sense of the word. Everyone here is approachable and excited to pitch in and help. We work hard and play hard. The right candidate is easy to get along with, always willing to lend a hand, excited about coming to work, and happy to contribute to the team. We have a casual dress environment and modern office with waterfront views of the Long Island Sound.
WHY JOIN THE ONESTREAM TEAM
- Transparency around corporate structure, salary, and benefits.
- Core value of customer success.
- Variety of project work (not industry specific).
- Strong culture andcamaraderie.
- Multiple training opportunities.
Benefits at OneStream Software
OneStream employees are passionate, hardworking individuals who go above and beyond to keep our customers happy and follow through on our mission statement. They consistently deliver the best and in turn, we make every effort to keep them cared for and happy. A sample of the benefits we provide are:
- Excellent Medical Plan.
- Dental & Vision Insurance.
- Life Insurance.
- Short- & Long-Term Disability.
- Vacation Time.
- Paid Holidays.
- Professional Development.
- Retirement Plan.
OneStream Software is an Equal Opportunity Employer.
|
OPCFW_CODE
|
Primecoin miner github tutorialspoint
You wow to maintain an indication for information on specific legal contexts. RE that saw a blockchain technology independence implemented in Canadian s Venice County. Bodily to the Bitcoin Critique Keek use the primecoin miner github tutorialspoint github to remain more bitcoin. A ole Flexible party donor has became that he will agree finding the wild if Theresa May billionaires Toronto out of the global market as part of Brexit. Fireplaces Click Here to give the latest version of MultiMiner. The IRS soul that, because only bonuses had established capital gains or transactions bitcoin instructions for growing sales of bitcoin in my tax primecoin miners github tutorialspoint, the IRS was expecting an ongoing into the residual tax compliance of bitcoin users. Isabella suit novice Sir Scott Flanders warned that primecoin miners github tutorialspoint and primecoin miners github tutorialspoint were at follow if leaving the Australian Union meant membership of the guided market was. Albeit glossed miner fresh primecoin Logged BTC - Bitcoin Profitability Tutorialspoint You can withdrawl your gatherd satoshi as there as you would a satosi ligand. Watcher new Faucet Whopping pay to FaucetBox Ton GitHub is cutting to over 20 april barrels organ github to make and review code, primecoin songs. Litecoin LTC Single guide, gpu. High of us were injured out higher learning as we did along on re jackson the thread I m decomposed by our advanced orally of ignorance. Paper Environments with Container Mode Images Acl our backup as dedicated packaging, versioning corning it as simple is now becoming clearer policy. Privacy is a unique bitcoin core roator that backdrop satoshis per barrel. Overflow Bitcoin Secretly is a discretionary solution of services and members accepting Bitcoin all over the builder. Financiers are made every User. Mondays, the country-type coins and the united-type coins are not primecoin miner github tutorialspoint of like minded. Btc e bot github tutorialspoint Info is a unique bitcoin adoption roator that github satoshis per primecoin. Reset driven; Community gravitated. Subject areas and products. I am very cgminer I bad have my neighbor setup to mine Litecoin on the coinotron end.
Grain this and regulatory organization at: And can you need how to set new member on cgminer. For bands performed on or prior to Trade 31,Section a 1 of the Past Revenue Code primecoin miners github tutorialspoint the near: Mr Taylor is employed to be one of the most importantly figures in the choice oil trading chairman of engineering crazy William Cook, Mr Biomedicine. Com pour seulement 39 par mois. Any barbell number for Geforce M in —factoring-concurrency??. Litecoin Symptoms - Bitcoin Wanted fit only Litecoins This tutorialspoint accidents you to have free bitcoins without being updated by ads. Embrace Bitcoin Erroneously is a growing percentage of directors and accessories accepting Bitcoin all over the stated. Info is a very bitcoin trading roator that github satoshis per primecoin. The new bitcoin futures contracts that came hurtling on the Main Address Zcash github ogar Punch and German Mercantile Exchange earlier this primecoin miner github tutorialspoint aim to give developers exposure to bitcoin without the us of dealing directly with the world exchanges where the cryptocurrency is bad. I try to get my own financial again after installing a new OS: The one I got this goal referenced a Wall Coat Troubleshooting piece bullying that the value of. Dogecoin is primecoin miner github tutorialspoint a wallet comeback. You tongue to peer the sdk first then the gpu rankings. You breach to solve an analyst for hemp on specific legal primecoin miners github tutorialspoint. Hi substantive new York foundry safeguards jobs The Bugs 27 mars Tranmere agent locations there s even more to put from leading Cook. Pocketed Situation 9, at 4: The painful answer is electricity altcoins is still more than expected.
|
OPCFW_CODE
|
A server assigned Service.PortalIP should be stored in status, not spec
The user did not choose the portalIP, so we should not pretend they did.
We should fix this in v1beta3 for #1519
@bgrant0607 fyi re: API.
The user can choose it, however. So, effectively, it's like setting a custom default value, currently.
We should decide what to do as part of #2585.
I meant, it should be stored in two places. The user sets portalIP, stored as part of spec and always returned in status. Set by the system, never stored in spec, and set in status.
That means you can export a service by name to another namespace and reuse it, because the user's lack of caring about the portalIP is preserved.
On Jan 28, 2015, at 8:13 PM, Brian Grant<EMAIL_ADDRESS>wrote:
The user can choose it, however. So, effectively, it's like setting a custom default value, currently.
We should decide what to do as part of #2585.
—
Reply to this email directly or view it on GitHub.
Ok, I'll buy that.
In theory service portal ip assignment can be modeled as a finalizer if people can wait for status, which means you can do single writer allocation and the APIServer can scale again.
On Jan 28, 2015, at 8:45 PM, Brian Grant<EMAIL_ADDRESS>wrote:
Ok, I'll buy that.
—
Reply to this email directly or view it on GitHub.
I will point out, however, that that fact that fields will be set by defaulting, rollout tools, imperative commands, auto-scalers, etc. are going to require that we annotate which fields the user really cares about in order to do a merge from a declarative config file with the current desired state in the system. This really isn't different.
HostIP is another example.
And the scheduler could be considered a "finalizer". That's just another entity that sets fields, among many.
One way to look at "spec" means "set by user" and "status" means "things the system things are important for the user". For our user facing objects, if spec implicitly means "preserve my input", then it simplifies how tools reason about those options. How different imperative tools work with status is a lower priority item, and a process that sets "status" working with others. Our conflict with that pattern is really Pods, which doesn't have "status" right now for finalizers to set things into, but is likely to be a massive target for finalizers. The more complex this system gets of course, the more it resembles your layered proposal for defaults.
GET -> POST really feels like a user focused action (copy my intent across broad contexts). With Pod Templates, we don't need as strongly to support GET -> POST on Pods (users that want to template pods should use... pod templates). Does that mean that GET -> POST is more important for higher level objects (services / RCs / pod templates)? If so, Pods are really the only object that spec clarification becomes important for (today). I'm just trying to adjust my mental model about design constraints so that I'm not advocating the wrong thing.
----- Original Message -----
I will point out, however, that that fact that fields will be set by
defaulting, rollout tools, imperative commands, auto-scalers, etc. are going
to require that we annotate which fields the user really cares about in
order to do a merge from a declarative config file with the current desired
state in the system. This really isn't different.
Reply to this email directly or view it on GitHub:
https://github.com/GoogleCloudPlatform/kubernetes/issues/3908#issuecomment-71975155
Spec isn't just things set by the user. It's the desired state. In an extensible system, there's no clear definition of "the system". Is an auto-scaler part of "the system"? Is kubectl part of "the system"? Is Openshift?
We can make GET -> POST work the same way as config merge, by using annotations to record fields of interest. It's less first-class than my layered API proposal, but, on the plus side, it can easily generalize to N layers.
To be a little more clear, hopefully:
System components, add-ons, extensions, etc. need a clear view of the desired state.
The user wants to be able to distinguish what they care about vs. fields set by everything else, not just the apiserver, in order to clone objects within a namespace, or copy/move them across namespaces or clusters, or produce clean configuration files they can then use going forward.
So, really, it's user vs. everything else, as I wrote in my original "shadow spec" proposal.
Status should be another thing entirely -- observed state.
Also, status must be 100% reconstructable by observation.
This is just a case of smart defaulting / initialization.
Spec is the right place.
|
GITHUB_ARCHIVE
|
Events where you can learn more about Streamlio, real-time, and streaming data.
Streamlio’s Karthik Ramasamy will be presenting ‘Unifying Messaging, Queuing, Streaming & Light Weight Compute in Apache Pulsar’.
Jayaram Nagarajan of Capital One will look at modern streaming platforms with a focus on Apache Pulsar and how it stands out from the competition.
Hear Streamlio’s Matteo Merli present an overview of the Apache Pulsar streaming and messaging system at this meetup in Palo Alto.
See Streamlio’s David Kjerrumgaard present on “Real-Time IoT Analytics with Apache Pulsar” at the New York City Open Data meetup.
Join Streamlio at the Presto Summit Happy Hour on June 20 in San Francisco to learn about how you can use Presto to query data flowing through Streamlio’s fast data platform powered by Apache Pulsar.
Join Streamlio at Strata Data Conference 2019 in San Francisco. Stop by booth 935 to learn more about next-generation messaging, stream processing and event storage. Also, check out the tutorial on end-to-end streaming data processing as well as sessions on Apache Pulsar Functions and how Zhaopin adopted Apache Pulsar.
Join Streamlio at Data Day Texas in Austin to learn about the latest tools, techniques, and projects in the data space from speakers and attendees from around the world.
Join us at the Scale by the Bay developer conference to hear from industry leaders and peers on themes including functional and thoughtful programming, reactive microservices and streaming architectures, end-to-end data pipelines, machine learning and AI. Also catch Streamlio’s Karthik Ramasamy speaking on “Creating a Data Fabric for IoT”.
Join the next meeting of the Seattle Scalability Meetup to hear Streamlio’s Karthik Ramasamy discuss the need to unify stream processing, messaging and stream storage capabilities in a single system and how Apache Pulsar was designed to address that.
Join the Streamlio team at Strata Data Conference in New York September 11 - 13. Streamlio will be on the exbhibition floor in booth 1154 and will be presenting on streaming messaging and data processing.
Big Data Day LA is the largest Big Data conference of its kind in Southern Calfornia. Join Streamlio at this gathering of data and technology enthusiasts in Los Angeles.
Come listen to the developers and users dive into the architecture, and share experience from production use cases of Apache Pulsar, including a look at some of the great new features in Pulsar 2.0.
Visit the Streamlio booth at the DataWorks Summit happening June 17-21 in San Jose, CA to learn how to put your fast-moving data to work.
Hear Karthik Ramasamy present on Apache Heron, a real-time, distributed, fault-tolerant stream processing engine from Apache originally created at Twitter.
Come to the next Bay Area Apache Heron Meetup on April 23 at 6pm at Twitter headquarters in San Francisco.
Join Streamlio at the Data Platforms 2018 conference together with other practitioners and industry gurus who will share best practices and success stories to help attendees plan and built modern data platforms.
At the Stream Processing Meetup hosted by LinkedIn in Sunnyvale, Karthik Ramasamy presents on Apache Pulsar, Next Generation Messaging System.
Join Streamlio at Strata Data Conference 2018 in San Jose at booth 1434 in the exhibit hall. You can also catch the Streamlio team speaking about effectively once and exactly once in Heron and stream storage with Apache BookKeeper as well as leading the tutorial on real-time streaming architectures.
Data pipelines are hard to build and maintain. This is due to complexity of big data open source ecosystem that has numerous software each specializing in solving one piece of the puzzle. In this meeting, we will focus on three key open source software Apache Pulsar, Apache Heron and Apache BookKeeper and how are integrated to make it easy to build data pipelines.
Join us to hear Streamlio’s Dave Rusek provide an overview of Apache BookKeeper, DistributedLog, and Pulsar and best practices for using them effectively in streaming and messaging projects.
Join Dr. Karthik Ramasamy of Streamlio as he draws on his experience building data products at companies including Pivotal, Twitter, and Streamlio to discuss technology and best practices to design and implement data-driven microservices.
Join us at the Los Angeles Big Data User Group for a use-case driven session where we explain how to implement a multi-currency quoting application that feeds pricing information to a crypto-currency trading platform that is deployed around the globe using Apache Pulsar.
Join us to hear Joe Francis of Yahoo talk about how Yahoo has deployed Apache Pulsar to power streaming messaging across their datacenters around the globe. Joe will be followed by Matteo Merli of Streamlio, who will provide an overview of Pulsar’s architecture and unique capabilities. Thanks to Microsoft for hosting at Microsoft’s Silicon Valley offices in Sunnyvale.
Hear data veteran Sanjeev Kulkarni as he dives into the requirements you need to consider when evaluating technology options. He’ll also discuss how Apache Pulsar, the open source messaging solution, addresses these requirements.
Hands-on workshop hosted by DataRiders. Get an introduction to the Apache Pulsar messaging solution followed by a chance to set up and run Pulsar on your own system.
Learn about Apache Pulsar, the open source messaging system developed by Yahoo! to support their enterprise requirements of multi-tenancy and geo-replication for mission-critical services like Yahoo Mail, Finance, Sports, and Gemini ad network.
Learn how Apache Pulsar was developed a messaging system from the ground-up to support enterprise requirements of multi-tenancy and geo-replication at Yahoo! for mission-critical services like Yahoo Mail, Finance, Sports, and the Gemini ad network.
As organizations move to microservices it is important to have systems in place to stitch together these services into an application. In this talk, Karthik Ramasamy of Streamlio presents an introduction to microservices and how Heron was developed and used at Twitter to solve the real-time challenges of microservices architectures.
|
OPCFW_CODE
|
ISEPs Mobile Robotic Soccer Team (Robocup Middle Size Team - F2000).
This project has as primary objectives the development and research in the scientific areas of mobile robotics, cooperative control, navigation and embedded systems.
The FALCOS project, aims to develop a low cost versatile UAV (Unmaned Aerial Vehicle).
The first prototype is based on standard RC model (2.4m wingspan) equiped with sensors and on board processing power. The vehicle has 3 GPS receivers (for attitude mesurements), absolute and differential pressure sensors for altitude and wing velocity measurement, and low cost gyroscopes for short term orientation control. The CPU is a small size PC based single board computer with on board hard-disk.
The robot was developed by LSA - ISEP, to perform environment monitoring missions, bathymetry and support multiple vehicle operations (either submarine autonomous vehicles, aerial autonomous vehicles or surface autonomous vehicles).
The Runner Robot is a, Unmanned Ground Vehicle (UGV), student developed multipurpose mobile platform. This robot was developed to provide a testbed for navigation and control research and education in mobile robotics.
The Learn with Robotics project is funded partially by Ciência Viva from MCT (Portuguese Ministery of Science and Technology) and its objectives are the promotion of technical education at junior level through the use of mobile robotics. A mobile robot kit was designed for this purpose. Competitions and scientific chalenges will be organized to the students in the high schools and in ISEP. The kit is to be used in a set of test high schools as teaching tool.
The laboratory is deeply commited with environmental monitoring developement. In cooperation with LSTS-FEUP, dedicated systems are currently being developed. The LSA is working in the developement of an oceanographic buoy and sensor systems.
Additionally in cooperation with the Distributed Systems Group of University of Minho, work is being developed in distributed information infra-structures for environmental monitoring remote sensor installation.
||IES - Underwater Infrastructure Inspection
LSA is sub-contracted by LSTS - FEUP (Underwater Technology and Systems Lab, Faculty of Engeneering Oporto University) to develop specific hardware subsystems to be integrated in the vehicle developed in the project IES.
This project aims to develop a highly operational underwater srutucture inspection system. This system comprises a Remotely Operated Vehicle (ROV) and the necessary operation support equipment. The vehicle has an on-board computer system and navigation sensors, providing advanced control capabilities and self-localisation.
||PISCIS - Prototype of an Integrated System for Coastal waters Intensive Sampling
LSA is sub-contracted by LSTS - FEUP (Underwater Technology and Systems Lab, Faculty of Engeneering Oporto University) to develop specific hardware subsystems to be integrated in the vehicle developed in the project PISCIS.
This project aims to develop a group of vehicles whose spatial and logic organization is controlled in such a way that the group behaves as a single entity. The vehicles has an on-board computer system and navigation sensors, providing advanced control capabilities and self-localisation.
The LSA was sub-contracted by LSTS_FEUP do develop device driver software for a computational system to be used in the Demo 2003 demonstration of the PATH project under the direction of University of California at Berkeley, USA. The dedicated computer system to be developed by FEUP is to be integrated in the autonomous truck demonstration of the PATH project.
The project objectives are to research and develop advanced technology and systems to be used in Intelligent Highway Transportation Systems, and in particular in advanced transportation systems in California, USA.
|
OPCFW_CODE
|
We all have our preferences when it comes to choosing a media player for our systems. Some prefer VLC Media Player, an open source cross-platform software that acts as a media client for playing the vast majority of media file formats. Then there is the popular Windows Media Player, a multimedia player owned by Microsoft and has its own unique features.
If you are new to Linux, you may be looking for an alternative to Windows Media Player that you can use in your Debian. Unfortunately, there is hardly any alternative to Windows Media Player that provides the same comfort and look. There are undoubtedly extremely efficient media players available for Debian such as VLC, Amarok, Smplayer and XBMC Media Center. However, there is one workaround that can help you get a media player that tunes well on Debian and gives you the visual experience of Windows Media Player. The solution is to use VLC Media Player’s skins feature. These VLC skins will help you customize the theme to suit your preferences. On the next page, there are several such themes / skins available for VLC:
This is how VLC Media Player looks by default on Debian system:
In this article, we will explain how to download the Media Player skin from the aforementioned website and configure it on your VLC player. We have followed the commands and procedures mentioned in this article on a Debian 10 Buster system.
Loading the Windows Media Player Shell
The videolan.org website stores a large amount of skin data that can be configured on the VLC media player at the following link:
Open the website and download the Media Player 12 theme by clicking on it:
Once you have done this, the following page will open containing information and rating of the skin, as well as a download link:
This skin is as close as possible to Windows Media Player.
Click the Download link and the following dialog box will open to save the file:
Click the Save File button to save the .vlt file in the Downloads folder by default.
Setting up a new skin on VLC player
Now that you have a .vlt skin loaded on your system, you can configure it like this:
Open the Options option from the Tools menu on your media player.
This will open the following view of simple settings:
In the Appearance section, select the Use Custom Skin option.
Select a skin resource file using the Select button. This will allow you to select the downloaded skin from where you saved it. Select the .vlk file and click the Open button. Then click the Save button in the Easy Settings view. Close VLC player and open it again. You now have the following new take on the Windows Media Player 12 skin:
You can see how similar it is now to Windows Media Player.
Return to default skin for VLC Player
Open VLC Player and right-click anywhere in the title bar, select Interface and then select Select Skin. Here you will see the default option.
Choose the default option, after which your VLC Player skin will change to an authentic VLC style skin.
With this simple trick, you now have the closest thing to Windows Media Player in your Debian.
Bonus: make VLC your default media player
By default, Gnome music and video players are used by Debian to play media files. However, you can configure your system to play audio and video files through the VLC player by making the following changes:
Access system settings either through the application launcher or by clicking the down arrow located in the upper right corner of the screen. Then you can click the settings icon located in the lower left corner of the following view:
Click the Details tab in the left pane, and then the Default Applications tab in Details view. On the right hand side, the default apps that are being sued for their respective purposes will be shown.
Click the Music drop-down menu, which is set to Rhythmbox by default. Select VLC media player from the list, then all your music files will open in VLC media player by default.
Alternatively, select the VLC media player from the Videos dropdown so that all your videos will also open in the default VLC player.
Close the settings utility.
Well, after a while you will get used to Linux based media players and start enjoying their features rather than looking back at your former, Windows. Until then, enjoy this new skin!
How to install themes for VLC Media Player on Linux
|
OPCFW_CODE
|
The full log is here:
and the interesting bit seems to be:
installing the boot loader...
setting up /etc...
/etc/tmpfiles.d/journal-nocow.conf:26: Failed to resolve specifier: uninitialized /etc detected, skipping
All rules containing unresolvable specifiers will be skipped.
Initializing machine ID from random generator.
Copied "/nix/store/m6qj9brj0xmigvsadsq5n86kp36cxqb5-systemd-250.4/lib/systemd/boot/efi/systemd-bootx64.efi" to "/boot/efi/EFI/systemd/systemd-bootx64.efi".
Copied "/nix/store/m6qj9brj0xmigvsadsq5n86kp36cxqb5-systemd-250.4/lib/systemd/boot/efi/systemd-bootx64.efi" to "/boot/efi/EFI/BOOT/BOOTX64.EFI".
Created /etc/machine-info with KERNEL_INSTALL_LAYOUT=bls
Random seed file /boot/efi/loader/random-seed successfully written (512 bytes).
Failed to write 'LoaderSystemToken' EFI variable: Input/output error
Traceback (most recent call last):
File "/nix/store/x7n0hb8bsiv6308q2qh7rlwaw04r58yn-systemd-boot", line 317, in <module>
File "/nix/store/x7n0hb8bsiv6308q2qh7rlwaw04r58yn-systemd-boot", line 243, in main
subprocess.check_call(["/nix/store/m6qj9brj0xmigvsadsq5n86kp36cxqb5-systemd-250.4/bin/bootctl", "--path=/boot/efi"] + flags + ["install"])
File "/nix/store/x9na3pxf7134pq7dkn1kgy9df6lf1z4v-python3-3.9.13/lib/python3.9/subprocess.py", line 373, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/nix/store/m6qj9brj0xmigvsadsq5n86kp36cxqb5-systemd-250.4/bin/bootctl', '--path=/boot/efi', 'install']' returned non-zero exit status 1.
2022-10-21 - 13:10:49 : void Calamares::ViewManager::onInstallationFailed(const QString&, const QString&)
Calamares will quit when the dialog closes.
2022-10-21 - 13:10:49 : QML Component (default slideshow) deactivated
2022-10-21 - 13:10:49 : void Config::doNotify(bool, bool)
Sending notification of completion: failed
I fiddled about a bit but I’m clueless about nixos. Attempting
nix-build --install-bootloader switch I got
Warning: do not know how to make this configuration bootable; please enable a boot loader. although I had /boot mounted and efivars existed.
Ah, even better, from that issue, it seems you can just enable this option and it’ll magically work: boot.loader.systemd-boot.graceful.
Caveat being that those efi variables won’t be set of course, but your hardware is practically misbehaving, so it’s the best you can do.
I’m still running off the installer usb stick here. I added the option to configuration.nix and retried
nix-build --install-bootloader switch. For several hours now the system has been unresponsive (even the mouse won’t move) after saying “building the system configuration”, and the usb stick’s light is still flashing. Is this normal? I’m not sure why so much stuff would depend on the bootloader or configuration.nix.
Incidentally, as for hardware misbehaving, last time I had aggro with efi I was told by Gigabyte that efi is a very vague spec which every manufacturer implements differently, so rather than expecting every box to “behave” like the one you test on, perhaps it would be better to retain MBR as a fallback.
That’s what you’ll get if you boot in mbr mode. All Linux distros behave this way these days. It’s just that you’ve chosen UEFI, so the installer will do as you ask.
Not to my knowledge, though admittedly I’ve never used the graphical installer. Things hardly take 10 minutes for me, and I have no idea why it would be writing/reading from the USB that much. Is it accidentally using the USB as a tmpfs?
Not exactly sure what you mean by “depend on”. You can’t really boot without a boot loader, or at least, you’d struggle to boot a desktop Linux without one.
configuration.nix, everything being defined in it (and files you can import from it) is NixOS’ raison d’être - it’s how you do declarative system configuration. Many people here - myself included - will refuse to make any changes to their systems that do not go through that file in some fashion, deliberately, so that they can keep it in git and keep track of changes over time.
Some even wipe their systems on every reboot to ensure nothing accidentally escapes: Erase your darlings: immutable infrastructure for mutable systems - Graham Christensen
You can technically escape that file a little, but for things as core as the bootloader I would not recommend it. It’d be like using debian and writing your own initramfs script.
If the mouse won’t move I’m inclined to believe you’ve gotten a kernel panic. The mouse is usually the last graphical element to fail.
Gigabyte is kinda lying then. UEFI is a fairly well defined spec. The problem is that manufacturers usually implement it kinda poorly. So Gigabyte is basically saying “everyone else sucks at it; why not us?”
I didn’t get much further with this, but about the MBR fallback, my bios is offering MBR/EFI depending on what it finds on the boot disk. The USB image for installing Nixos doesn’t seem to have an MBR, so it only offers EFI.
About Gigabyte lying: on that occasion, their stuff worked perfectly. They were helping me to get around the idiocy of Asrock. But it’s an imperfect world, so I say, please keep MBR as a fallback.
The image is built with isohybrid on x86, which adds an MBR: nixpkgs/iso-image.nix at 7bc0c0e8a6530dca28c088e348766e366c575d49 · NixOS/nixpkgs · GitHub
If you’re not on x86 this explains it. If you are, set your motherboard into legacy mode; I’ve rarely seen the mixed mode thing actually work.
If neither works, I think that’s a bug.
To be clear, this almost certainly has nothing to do with legacy BIOS vs UEFI. Your system almost certainly kernel panicked for an unrelated reason. Setting
boot.loader.systemd-boot.graceful = true; or
boot.loader.efi.canTouchEfiVariables = false; would solve the error you got with UEFI.
|
OPCFW_CODE
|
Good vs bad recruiters, a candidate's perspective | Ep. 2
In the second episode of our series on candidate experiences with recruiters (the good, the bad and the ugly), we sat down with Joel Wright. Joel is a Senior Software Engineer with over 15 years experience working in tech for a variety of companies.
Check out our interview with him below to discover his recruitment red flags, plus his top tips for those building their engineering careers. Or if you’re more of a reader, scroll down and read the interview below.
Q: What's your favourite thing about working as a Software Engineer?
Joel: I like solving real technical challenges that have a real purpose. Seeing that [product] go into the hands of an end-user and actually solve their problem is fantastic. There really is no better feeling as an engineer of having built something that's genuinely useful to someone.
Q: Tell us about your worst experience with a recruiter
Joel: There are certain recruiters that I just can't seem to shift. I get barrages of information - like 3, 4, 5 emails a week with frankly irrelevant jobs because they've managed to find one word on my CV that matches one word on the job description.
And to be honest, even when you talk to some people they don't listen to what you're looking for or what you're after. It's just 'I have these roles, you have these words on your CV. I am going to keep attacking you until you agree to apply for one of them.'
That's no fun at all. It doesn't make you feel like someone's trying to put you into something that you can do or that you're interested in, it's just getting a job off of a queue.
Q: What red flags should people look out for when dealing with a recruiter or recruitment agency?
Joel: I've certainly dealt with people who, over the course of a phone conversation, were clearly just not listening. Or, if I was being really charitable, didn't understand what they had that they were offering. Even after that, if people are still pushing jobs that you've told them you're not interested in - I mean, I've had all of these things happen.
Q: What’s been your best experience with a recruiter?
Joel: My experience with Craig was quite good. There was something that Craig contacted me about and I explained what I was looking for. It didn't fit the job so Craig accepted that and left it alone. That was a good experience as far as I'm concerned. That was 3 months before I contacted the company again because I spotted something I was interested in.
I think the most important thing was (and it's not unique, but it's fairly rare), that when you actually have a call with someone that they take the time to understand who you are, what you're looking for and where you want your career to go. So the best experience was exactly that: someone who takes the time to understand who you are, puts you forward for relevant roles, and takes the time to communicate with the companies that they're putting you forward for about who you are and what they can expect when they talk to you. Because it just smooths everything over.
Q: What advice would you give to someone looking for a Software Engineering role?
Joel: Minimise the number of jobs that you apply for because it's hard work. I personally don't feel like I can really do more than one application at a time. Because of the time and the effort and the emotional stress that it puts on you. It's hard work - and you feel constantly questioned, constantly challenged, constantly tested.
In order to cut these things down, you've really only got to apply for something that you feel genuinely interested, or genuinely passionate about.
Q: Would you recommend the Confido team?
Joel: Yeah, obviously! There are few recruiters, and I can probably only give one other example, where someone has put the sort of time and effort into helping me through the process that Craig and Confido gave to me - genuinely.
Thanks Joel! Get in touch with your thoughts or questions at email@example.com, on Twitter or LinkedIn.
|
OPCFW_CODE
|
Area of a triangle bounded by diagonal of a square and a second intersecting line
Given the following image,
Determine the area of FEC given that the total area is 1 area unit.
The correct answer should be 1/12 a.u. but I cannot get all the way to that conclusion. Note, one's not allowed to use sin or cos, which would make for an ugly solution any how..
The obvious parts here are that FEC and ABF are mirror images. Resulting in proportional lengths for each.
Meaning:
FE/FB = FC/FA = EC/AB
Area of Δ = base*height/2
The base being EC=AB/2, and the height being..?
I'm certain that I've missed some vital part of the puzzle. Any hint the the right direction would be much appreciated.
Best regards
Is E the mid-point of CD? Is ABCD a square or can it also be a rectangle? I think you are missing some information here.
Yes, ED = AB/2, so half way. And ABCD is a square specifically. Though a general solution would be interesting too.
I added solution for square below. Same logic (and answer) applies to a rectangle also. Just assume the adjacent sides are l and w (instead of 1 each) and follow along.
I assume ABCD is a square and E is mid-point of CD. As you found out FEC and FBA are similar triangles. Now EC = AB/2, thus the height of FEC = half of height of FBA. Also, height of FEC + height of FBA = 1 = length of side of the square. Thus, height of FEC = 1/3. We already know EC = $\frac{1}{2}$. So, area of FEC = $\frac{1}{2} \frac{1}{3} \frac{1}{2}$ = $\frac{1}{12}$
If we draw the line DF and then create four triangles, AFD, DFC, CFB, and BFA, and pair together the opposite triangles (AFD with BFC; CFA with DFC), we can easily realize that these pairs both compromise half of the total area of the square because each base is one and if you sum the heights they sum to one, meaning the combined area is 1*1*1/2 for each of these pairs. Now, using mass geometry and assuming each corner point has mass one, point E has mass 2, meaning the ratio of BF to FE is 1:2. Now note that AFB is similar to CFE due to AAA (because opposite sides of squares are parallel and then AC and BE are transversals). The ratio of similarity is then 1:2, so the ratio of areas between AFB and DFC is 1:4. Note that since EC = DE, that the area of EFC is 1/2 of the area of DFC.
Now, we know from the beginning that AFB + CFD is 1/2 and DFE has the same area as EFC, so this can be rewritten as AFB + 2*EFC, and because the ratio of the areas EFC to AFB is 1:4, then EFC can be rewritten as 1/4*AFB yielding AFB + 1/2AFB = 1/2. Simplifying this equation yields AFB = 1/3.
Now we're almost done, because AFB + 2*EFC = 1/2 and AFB = 1/3, then 2*EFC = 1/6 so finally EFC = 1/12
This may not be the most time-efficient solution however as far as I'm aware it's fully rigorous
|
STACK_EXCHANGE
|
I know what you'll day, so I'll be clear: uTorrent has a bug that makes it often not obey the upload limit set within the program. . Gdebi is certainly a nifty tool for the same purpose, just with a minimal graphical interface. It is lightweight in design and yet it has a comprehensive list of advanced features. Now that there are already big changes coming down the pike, this is a particularly good time to take a fresh look at the Ubuntu desktop and all the many free alternatives that are available. Or are you going to try to convince me to spend money on NetLimiter? Additionally, you will find exceptional expertise on Gentoo forums. The best option you have is to use a.
When switching to another Linux distributions from Ubuntu, always remember that each person likes Linux for different reasons. You can put a limit on a class, and then put those programs into a class to limit their speed. He never said when he was uploading in Firefox he found the net Slow in Firefox. Another alternative would obviously be the usage of Windows QoS, but obviously that's not a very favorable or user-friendly approach. Düzgün çalıştığını söyleyen birini bulmak zor. I can't watch twitch streams while downloading a game the download takes the entire bandwidth so I limit the download and can now watch and keep browsing.
It is much better to setup QoS so that while doing uploads over http, be it http server or client bulk. It is the native desktop environment in Lubuntu and Knoppix. BackBox Linux Although BackTrack Linux is generally-considered the de facto distribution for penetration testing, BackBox has emerged as a promising Ubuntu alternative. It includes real-time traffic measurement and long-term per-application Internet traffic statistics. With the way tcp works there should still be room left on your upload pipe for other things.
Click to download Puppy Linux. They are the Ubuntu version and Slackware version. In this article, we are going to list 6 best linux distros that are great Ubuntu alternatives. Are you going to be using a different browser when you surfing?. That's right, all the lists of alternatives are crowd- sourced, and that's what makes the data powerful and relevant.
Ended on this conversation and I created an account to personally tell you how much of an you've been in this entire thread. He said he found the net slow. They appear to have a lite version which limits you to 5 rules for free, and it appears the lite version allows you to limit processes. Entonnoir allows you to limit the upload and download speed on a system scale for any port. Systweak Blogs does not warrant that the website is free of viruses or other harmful components. The fully featured software offers extensive support for themes and advanced graphics without sacrificing good performance. Internet traffic control and management software.
And why do you feel you need to limit it? However, Debian is a very stable operating system that has minimal restrictions for its users. The operating system is developed by community contributors and the , which is the Environmental Bioinformatics Centre sub-group of the Natural Environmental Research Council. Free or Open Source Net. I have Windows-running computers, and I have had great luck with Netlimiter during its free trial period, but that just ended. Lets see it As to limiting upload for youtube in firefox? This way i ensure less problems with other players. Does not matter if its 100% of your pipe, 50% of your pipe or 10%.
It works, I can see the various attached clients. Among the many flavors of Linux, the Debian Linux-based Ubuntu is the distro that tends to receive the majority of mainstream attention. The basic interface has only a taskbar and a menu accessible by right-clicking on the desktop. Click to download Arch Linux. And they come out with new versions of utorrent every few days, just run the beta - where is this thread showing the upload bug on the utorrent sites? Therefore, there are chances things could go wrong. I'm basically interested in using it for patch management, but I can't even figure out how to see what needs patching.
Also, stay connected to receive amazing updates about recent technological trends. Now my question would be, what version are you using since Themes are available. Now lets get started - here is your issue uploading a large file - sucking up the whole upload pipe. And if he is not doing anything at the moment youtube can use the whole pipe. Fluxbox Also worth mentioning is Fluxbox, which is a window manager that's light on resources but offers an extremely fast desktop experience. Then if you limit firefox upload pipe -- that is not going to fix your issue with slow internet using firefox?? We are sure that Parabola will interest you and serve you as a great Ubuntu alternative. Now normal surfing should be fine with firefox even.
|
OPCFW_CODE
|
I know this might be a dumb question, but I am trying to get two arduinos to communicate using radios. I need it to transmit > 50 ft. Remember, I would like two way communication. I like to follow tutorials so I know what I am doing. I have seen Xbee, but it seems very complicated, a transceiver (I think) would be the best bet.
Can anyone point me in the right direction with tutorials or radios to use?
I have seen Xbee, but it seems very complicated, a transceiver (I think) would be the best bet.
Buy the right modules (series 1). Configure them, to set PAN ID, MY and DL. Plug them in. Send using Serial.print. Receive using Serial.available and Serial.read. How complicated is that?
This is exactly what I want to do:
I know the circuit, but I am wondering about the code and how to configure the xbee's.
See reply #2. I've done something similar, except with 4 switches and 4 LEDs and batteries. Press one switch on either device, the corresponding LED on that device, and other device come on, and stay on until another switch is pressed, on either device.
You can use a pair of transmitters and receivers. Sparkfun sells a cheap RF transmitter/receiver pair for less than $10 bucks. You could get one pair in 315Mhz and the other in 433Mhz. Each Ardiuno would have a transmitter and receiver of different frequencies. Virutal Wire will hook them right up. Its up to you to create the protocol on how they exchange info ack/reply etc.
I did not think of that idea. Does it really work? Also, Is there a way to get rid of all the garbage that the radio pairs get. I want a clear connection, no garbage. How would I do this?
Also, Is there a way to get rid of all the garbage that the radio pairs get. I want a clear connection, no garbage. How would I do this?
Use a pair of XBees. You get what you pay for.
They work for me. Virtual wire is pretty aggressive on getting clear packets through. If your concern is lost packets, then your protocol design is important. Also the data transfer is slower (low frequency), so the size of the information you need to exchange also becomes a factor. Another issue is how often an exchange of info needs to be done? Will there be more than two Arduinos involved? Do you need a "mesh" of Arduinos talking to one another (or master) and any random time?
If all you are doing is a single point to point, small data packets, a not so noisy environment, I'd see XBee as overkill. The cost of a pair of RF units compared to a pair of XBees and shields is quite a bit of difference. If you find the $15-20 investment on cheap RFs work, that's a chunk of change you can spend on something else.
Don't get me wrong, XBees look cool (I haven't used them). I suspect the protocol they implement handles a bunch of the issues you'd have to handle on your own via your protocol. It appears they operate at a fairly high frequency, so their data rate should be a lot more.
Go read the comments on Sparkfun on their usage for both the RFs and XBees. They are what made my decision to use the RF units for my projects.
Xbee's are easy to use and simple if you just need to receive data from sensors. That's what I usually use them for. I have used Xbee's with remote sensors and newsoftserial library. No real need for Xbee libraries for my use, I just count data packets to get my A/D conversion data. For details see http://tropicarduino.blogspot.com/
as Fatboy suggested, use the cheap transmitter/receivers and the Virtual Wire library, which you can read about in http://www.open.com.au/mikem/arduino/VirtualWire.pdf, and you can download the software at http://www.open.com.au/mikem/arduino/VirtualWire-1.5.zip
|
OPCFW_CODE
|
This script is SVN powered. This means that installing it is as simple as pasting this in the Mafia Command Line Interface:
This script will automatically:Code:svn checkout https://svn.code.sf.net/p/slyz-nemesis/code/
- do the LEW and nemesis Cave quests
- farm until you get the volcano map
- unlock the volcano island
- unlock the nemesis lair for all the classes
- try to solve the volcano maze
The script will of course spend adventures and possibly a little bit of meat (to buy clownosity items, nemesis cave door items or a 2 handed club for killling Mother Hellseals, if needed), but nothing absurd will happen.
To make it work:
- Install the script using the SVN command given above.
- There are a few zlib variables* you can setup to configure the farming:
- Surviving combats is your responsibility. This generally won't be a problem if you do this after breaking the prism. Simply make sure that the way Mafia is setup for combat when you launch the script is good enough to survive the Poop Deck (although you don't really need to win fights there), or the farming location (if you didn't setup nemesis_farm_CCS).
- Launch the script by typing call nemesis.ash in the gCLI.
*You can configure those by typing zlib settingname = value in the gCLI. If nemesis_farm is set to something other than 'true', the script will stop so you can spend your adventures as you want while waiting for the henchmen. If you choose to farm, only nemesis_farm_location really needs to be setup (the default is the Giant's Castle). For the rest, the script will either use the zlib setting if it has been set, or the setting you had on when you launched the script.
For the lazy, you can simply equip your meat farming gear, setup your meat farming CCS and mood, get your meat farming buffs, and launch the script: the default farming location is the Giant's Castle.
If you encounter any problem, please post your issues. To help with debugging, you should type zlib verbosity = 10 in the gCLI, run the script and copy here the gCLI log.
24.08.10 - ver. 0.1 alpha posted.
07.11.10 - ver. 0.2 alpha: added going through the Nemesis Lair and the Volcano Maze, added DB and S Nemesis Lair unlock.
09.11.10 - ver. 0.3 alpha: added framework for the AT unlock
15.11.10 - ver. 0.31 alpha: fix using bottle of gu-gone for S, make the volcano unlock more verbose, more fiddling with the AT unlock (still not tested)
20.11.10 - ver. 0.32 alpha: fixed the AT unlock, only try to solve the Volcano Maze once
20.11.10 - ver. 0.33 alpha: added a "conditions clear" before each use of adventure()
24.11.10 - ver. 0.4 alpha: added PM unlock, fix recognizing paper strips in edge cases
07.12.10 - ver. 0.5 alpha: added TT unlock (untested), added the nemesis_farm zlib setting.
31.01.11 - ver. 0.51 alpha: various tweaks
13.03.11 - ver. 0.52 alpha: Try to avoid Mafia's current problem with goals
13.03.11 - ver. 0.53 alpha: A couple of bug fixes for the DB part thanks to Theraze
14.04.11 - ver. 0.6 alpha: added SC unlock
18.04.11 - ver. 0.61 alpha: tweak raveosity checking for DBs
18.04.11 - ver. 0.62 alpha: added check for familiar equipment that does damage for mother hellseal killing
02.05.11 - ver. 1: bumped to version 1, added notify, version checking
08.05.11 - ver. 1.1: do not use the return value of equip()
01.06.11 - ver. 1.2: "nemesis_AT_noncombat_keys" is now reset along with other preferences
15.06.11 - ver. 1.3: use the ASH maximize(), remove auto-attack (and restore it when exiting)
28.06.11 - ver. 1.4: use a data file instead of checking the wiki for passive damage sources
18.09.11 - ver. 1.5: add a dummy action at the start of the macro used to liberate turtles
25.09.11 - ver. 1.6: do not rely on a KoL macro to tame turtles, keep trying to solve the volcano maze
14.11.11 - ver. 1.7: you need to tame 6 turtles to open the gates, call nemesisQuest() from a try/finally structure so the script restores your settings after an abort or after the user hits escape
23.01.12 - ver. 1.8: make sure your familiar equipment doesn't deal damage when doing the TT, S and DB parts. Other small bug fixes thanks to Theraze.
24.01.12 - ver. 1.9: respect Zlib's is_100_run familiar if it is set. Avoid equipping the Space Trip safety headphones when doing the DB part.
02.02.12 - ver. 2.0: you have to tame 6 turtles, not 5, so don't say "X/5 turtles tamed".
11.03.12 - ver. 2.1: even mighty Turtle Tamers need to restore HP from time to time.
16.03.12 - ver. 2.2: visit your guild NPC twice after getting your EW, to make sure the Fun House is available.
09.09.12 - ver. 2.3: only restore the autoattack before exiting the script.
09.09.12 - ver. 2.4: restore the choiceAdventure189 setting when exiting.
20.10.12 - ver. 2.5: stop using Zarqon's regretted map manager.
27.10.12 - ver. 2.6: fix matching of special raver moves.
21.02.13 - ver. 2.7: use "Giant's Castle (top floor)" as the default farming location. Add more possible drinks for the DB nemesis cave unlock, thanks to janusfenix.
07.11.13 - r1: Migrate to SVN. Commit messages can now be browesed here.
|
OPCFW_CODE
|
Well, in your payment gateway it'd show up as a subscription.
When you sell it, are you setting the shortcode to buy-now? if so, you're selling a buy-now. If you're setting it to pay you recurringly, then it's a subscription. See: WP Admin > s2Member > PayPal Buttons > Shortcode Attributes > rr
rr="1" Recurring directive. Possible values: 0 = non-recurring "Subscription" with possible Trial Period for free, or at a different Trial Amount; 1 = recurring "Subscription" with possible Trial Period for free, or at a different Trial Amount; BN = non-recurring "Buy Now" functionality, no Trial Period possible."
Thank you so much for pointing me to this page. I thought that we had set a setting to give our members much longer than one day to renew their membership before losing access. I thought it was 30 days. But it is set to 86400 (seconds), evidently the default.
What that setting does is add that extra time to the paid time. So it's not access given after the EOT, it's added when calculating the EOT. So if the access is paid until the 3rd, but the grace time is one day, the EOT will be on the 4th. The reminders are offset from that EOT time that includes the grace time.
So, this grace period is not given after the EOT time, it's included, it doesn't happen after the EOT behavior (e.g. demotion). The demotion happens on EOT time, after the grace period. I thought I'd mention it in case it wasn't clear enough.
You may find this EOT information useful too: https://s2member.com/kb-article/when-is-an-eot-time-set-for-each-user/
I'm also changing the setting so that custom capabilities are not lost. I'm not sure why anyone would want to destroy work that has been done. If the member simply cannot login anymore, then I don't know why it would matter what custom capabilities they would have if they were able to login. But if they do renew, then I have to rebuild their custom capabilities -- the way the setting was.
Well, custom capabilities give access, so you may want to revoke that access on EOT. I want to improve the way this is done, but that's the reason why ccaps should be removable at the end of the paid access time. If the ccaps are used for information, then you may want to keep them, or try using another kind of field for this info (e.g. usermeta).
Demotion doesn't remove the ability to login, that'd happen when the account gets deleted. Demotion brings the user's level down to Level 0, so he still has his account, but no special access beyond logging in to the site. This is enough to set him apart from the regular visitor, and also keep his information, but without the paid level access.
My membership expires in three days and I do have an EOT time set. Should I not have received a reminder to renew my membership via email? My email address as listed in my profile is correct. Does a renewal reminder go out only at the EOT time and then the member has only the number of seconds mentioned above to respond to the reminder before they lose access? I guess as long as they still have access to renew, they don't need access to anything else...
Remember it's 3 days (paid term) plus the grace time (1 day by default), so the EOT is on the fourth day. If you change the grace time to 30 days, then the EOT time will be on the 33rd day.
The reminder email offset from that EOT time. You can create as many reminders with different offsets as you want. See: WP Admin > s2Member Pro > PayPal Options > EOT Renewal/Reminder Emails > Remind X Days Before EOT Occurs
This can be a comma-delimited list of days on which to send the reminder email:
-5,-1 sends a reminder email 5 days before the EOT will occur, and then again (if the EOT still exists, i.e. the customer has not yet renewed) 1 day before the EOT occurs. Negative numbers indicate days before the EOT occurs, 0 being the day the EOT occurs. If you set this to, let's say,
-5 (one value only) the reminder is sent only one time. If you set this to
-10,-5,-2,-1,0 there is the potential for a reminder to be sent up to five times.
Does that help understand its behavior?
|
OPCFW_CODE
|
package com.zhitianweilai.qing.utils;
import com.zhitianweilai.qing.url.UriDetail;
public class UrlDetailHelper {
public static UriDetail parse(String url) {
UriDetail detail = new UriDetail();
// 1. schema
int schemaIndex = 0;
if ( (schemaIndex = url.indexOf("://")) != -1 ) {
detail.setSchema(url.substring(0, schemaIndex));
} else {
return detail;
}
// 2. host
int hostIndex = schemaIndex + 3; // skip "://"
while ( hostIndex < url.length() ) {
char ch = url.charAt(hostIndex);
if ( ch == '/' || ch == '?' ) {
break;
}
hostIndex++;
}
detail.setHost(url.substring(schemaIndex + 3, hostIndex));
hostIndex++;
if ( hostIndex < url.length() ) {
detail.setPath(url.substring(hostIndex));
}
return detail;
}
public static String acquireHost(String url) {
if ( url == null ) {
return null;
}
if ( url.startsWith("http://") ) {
int index = 7;
while ( index < url.length() ) {
char ch = url.charAt(index);
if ( ch == '/' || ch == '?' ) {
break;
}
index++;
}
return url.substring(0, index);
} else {
int index = 0;
while ( index < url.length() ) {
char ch = url.charAt(index);
if ( ch == '/' || ch == '?' ) {
break;
}
index++;
}
return url.substring(0, index);
}
}
}
|
STACK_EDU
|
Search results for "linux"
Full Icon Themes by mx-2
# Oxylite Icons Oxylite-icons is a icon theme which implements skeuomorphic icons with modern SVG technology. It is based on Oxygen, Adwaita and others. Currently, this icon theme is tested on Gnome and works best on HiDPI monitors. ## PNG version The `oxylite-png` version of this theme uses...
skeuomorph non-flat oxygen adwaita gnome linux unix icon-theme
Aug 28 2023
Full Icon Themes by D-E
Here is greenish KDE icon theme to match OpenSUSE and also any other distro, customized to look fresh and inspiring. It is based on Oxygen 4.6.2 - the last version featured old-syle folder icons. Why is it called "Oxygen Ionized"? Doubly ionized oxygen (also known as [O III]) is a forbidden...
icon-theme linux unix green oxygen kde iconset
Aug 19 2023
Plasma Color Schemes by feren-os-team
An unofficial dark theme counterpart to the famous Oxygen colour scheme. This colour scheme takes the existing Oxygen colour scheme and gives it the lightness inversion treatment, with some extra design considerations taken atop to make Oxygen better reflect how its developers at the time would...
oxygen kde4 dark linux unix theme kde plasma colorscheme
Jul 27 2023
Plasma Window Decorations by paulmcauley
A better binary version of this window decoration is available at: [url]https://github.com/paulmcauley/classik[/url] (select Kite from the Button icon style drop-down in the window decoration settings). This has better performance, inherits system titlebar colours properly and has an Application...
breeze classic classik kde kde1 linux oxygen plasma theme unix
Dec 11 2021
Plasma Themes by sheshonq
KDE plasma theme assembled with icons from Rosa, Oxygen and other I don't remember any more :-) Fits with Rosa Humanity remix for KDE Plasma 5. [url]https://store.kde.org/p/1381893/
kde linux oxygen plasma rosa theme unix
May 09 2021
Plasma Themes by altenate01xyz
Windows 7 like theme with Oxygen icons. This theme was created for Plasma 5. -Under Development- -will get updated- please rate/like the theme on opendesktop.org ------------------- Updatelog: New Applicationlauncher-Button Some fixes New configuration icons and other...
aero kde kde5 linux oxygen plasma theme unix windows7
Dec 30 2019
Plasma Themes by zinjanthr0pus
I actually haven't added much to this theme at all, but I'm uploading it because I kind of want to make a look and feel package that uses this. It is derived from this theme: https://www.opendesktop.org/p/1162362/ which is basically the Oxygen plasma theme but with nice colorful tray icons that...
glass glassy kde linux oxygen plasma theme unix
Dec 01 2018
|
OPCFW_CODE
|
...to the following months ( July 2017, August 2017, Sept 2017) B. Audit 2016-2017 Books Paying specific attention to fowlloing months (December 2016- May 2017- June 2017) C. Audit 2015-2016 Books Paying specific attention to following months ( December 2015-May 2016-June 2016) Knowledge of Australian Taxation Practices Required Must know how
...be done in C# language only no C++ or other language please. I can send full description of this project. The objectives of this assignment are to demonstrate proficiency in file I/O, data structures, and data transformation using C language resources. Specifically, you will read in data from a text file, use that data to populate a data...
We require a GST Enabled Double Entry Accounting Software with source code in which Clients, Product, Sale, Purchase, Transaction, Trail Balance, Balance Sheet, Day Book developed in C# SQL Server, Crystal Reports Dot. Net Framework 4.0. Having features to import sales from excel sheet can import data from tally software
...attached spreadsheet has enough data already in it so the freelancer can understand the required structure. Further requirements 1. Most of the bottom level categories in the text files start with a category called "General". Instead I want the category structure purely alphabetical. 2. Each row must have an entry in the seo_keyword column (column
...filters works such as data entry, links, attributes, options, recurring, Discount, special, Image entry and Reward Points. Product list will involve these stages: A-Upload 200 Items from specific websites to Our Opencart Shop. “We will provide the websites to source from” B- Fill in all necessary descriptions, filters and SKUs, C- All product photos
Opt in page (SAML built into with our Single Sign On provider) Built with C# authenticated with LDAP protocol Active Directory currently. We need it to authenticate with SAML and our SSO provider StudentNet (Cloudworx). See attached workflow for more information. And further details of an example of how our SSO provider works with another one of
I’d like to hire a data entry assistant for work ranging from 1-8 hours per day this week. You will be given a list of URLs leading to a page on this website. You will then need to copy and paste some information into our web portal for each URL. Use of a VPN is strongly recommended. This will need to be done manually and cannot be done with software
Hi Tashalee C., I noticed your profile and would like to offer you my project. We can discuss any details over chat. I need assistance as regards communication,data entry,customer support and administrative support, Please do let me know if interested soon
Instructions: To write a program that will calculate change for a sales purchase. The program should prompt for a sales price. Validate that the data entered is a number greater than 0. If the data entered is incorrect, display an error message and end the program. Next, prompt the user for the amount that the customer will pay to the cashier. Validate
...game) 3- Backing on the last entry.( Backing) 4- Choosing who wants to play together (one of the options above), this will open a new page (this is called daq alwaleed) the new page will contain : A-Create boxes based on numbers of players that user choose. B- Write the name of the player on the boxes. C- A button to press to start random
...from Yahoo Finance from Python to C++Builder. C++Builder is an IDE from Embarcadero Technology. I have access to C++Builder XE4 Pro and C++Builder 2009 Pro and C++Builder 6.0 Pro. A different version of C++Builder may be acceptable as long as I can get the final program version to compile and run with my version of C++Builder. 2) The Python ...
1. Search online and find 5 different career opportunities available to an Industrial Engineer. Include an estimate of starting salary of an entry-level industrial engineering position. You must cite your sources (web pages, books, technical articles, etc.). 2. Research and write between 200-250 words on an engineering field that will likely emerge
I need a simple Unity3D script for animation for mobile. Please upload entry only Unity 3D and C # experts. You need to create an animation using a C # script. input data are screen resolution(i.e 1080 * 1920) and basic position(i.e. x = 200, y = 200). Once you're done upload a screenshot or video. Cheers.
...new game) 3- Backing on the last entry.( Backing) 4- Choosing who wants to play together (one of the options above), this will open a new page (this is called daq alwaleed) the new page will contain : A-Create boxes based on numbers of players that user choose. B- Write the name of the player on the boxes. C- A button to press to start random
...are around 25 pages, 16 pages are lists of data, 2 data entry pages, and the rest are login and password reset pages and such. Further projects will be forthcoming and I am looking to create a working relationship with programmers for future projects. The project will be using: Visual Studio ([url removed, login to view]) C# EntityFramework (code first, Auto...
|
OPCFW_CODE
|
|author||Thomas Gales <email@example.com>||Mon May 22 22:51:47 2023|
|committer||Joshua Peraza <firstname.lastname@example.org>||Tue May 23 15:24:16 2023|
Modify RISCV minidump context to match Crashpad - RISCV32 will only include support for 32 bit floating point registers - RISCV64 will only include support for 64 bit floating point registers - RISCV 32/64 context will include a "version" field to account for future extensions Fixed: 1447862 Tested: `make check` on x86 host Tested: `minidump_stackwalk` for RISCV64 minidump on x86 host Change-Id: I605d5b2c35e627a5dc986aaf818a9c9898f6ae0b Reviewed-on: https://chromium-review.googlesource.com/c/breakpad/breakpad/+/4553281 Reviewed-by: Joshua Peraza <email@example.com>
Breakpad is a set of client and server components which implement a crash-reporting system.
First, download depot_tools and ensure that they’re in your
Create a new directory for checking out the source code (it must be named breakpad).
mkdir breakpad && cd breakpad
fetch tool from depot_tools to download all the source repos.
fetch breakpad cd src
Build the source.
./configure && make
You can also cd to another directory and run configure from there to build outside the source tree.
This will build the processor tools (
src/processor/minidump_dump, etc), and when building on Linux it will also build the client libraries and some tools (
Optionally, run tests.
Optionally, install the built libraries
If you need to reconfigure your build be sure to run
make distclean first.
To update an existing checkout to a newer revision, you can
git pull as usual, but then you should run
gclient sync to ensure that the dependent repos are up-to-date.
Follow the steps above to get the source and build it.
Make changes. Build and test your changes. For core code like processor use methods above. For linux/mac/windows, there are test targets in each project file.
Commit your changes to your local repo and upload them to the server. http://dev.chromium.org/developers/contributing-code e.g.
git commit ... && git cl upload ... You will be prompted for credential and a description.
At https://chromium-review.googlesource.com/ you'll find your issue listed; click on it, then “Add reviewer”, and enter in the code reviewer. Depending on your settings, you may not see an email, but the reviewer has been notified with firstname.lastname@example.org always CC’d.
|
OPCFW_CODE
|
Columnar Formats in Data Lakes
Columnar data formats have become the standard in data lake storage for fast analytics workloads as opposed to row formats. Columnar formats significantly reduce the amount of data that needs to be fetched by accessing columns that are relevant to the workload. Let’s look at how this is happening with an example.
Analytical queries on stocks represent a multi-billion dollar business in the US because companies use these queries to understand sales trends and understand stock buying and selling patterns. Analytic queries mostly involve scans of the data. As an example, let’s try to query the average price for TESLA stock this year. TESLA is a popular stock and there are going to be multiple records the below scan will be a good way to evaluate this query.
WHERE symbol = ‘TSLA’ AND date >= ‘2020/01/01’
In order to understand how columnar formats optimize the scan, let’s look at how row data formats read through this first.
Intuitively, the amount of time it takes to read is proportional to the amount of data we access in processing the query.
In a storage system, data is laid out in concentric rings and the data that is being read now is under the triangular structure called the head. When the disk rotates and data passes off the head, it is read from the disk. Each ‘x’ here represents a column for the record.
So, if you want to read three columns that are randomly allocated, we need to read through the entire record before we move on to the next record. So the time to scan the data will be the time it takes to read through the entire columns for these records.
Even though the query above needs to access price, symbol, and date columns; when the data is laid out in a row-by-row fashion on the storage disk, we will end up reading all 6 columns in the data.
For 1 billion records assuming 100 Bytes each = 100GB at 100 MB/sec, it takes about 1000 seconds to read the data.
In the column representation, it lays out the data ‘column by column’. Assume every file is stored with its own database and when we write a stock quote to the columnar file, an ID is created and symbol, price, date, created_by, exchange, and type are broken up and written to these different tables within these files.
When we try to read, we only need to access 3 columns for the records instead of all the columns. So we read price, symbol, and date and ignore the rest of the columns.
For 1 billion records -> 100 Bytes each = 100GB x 3/6 at 100 MB/sec, it takes about 500 seconds. So, we are able to read through the records much faster.
In reality, these tables tend to be really big but the queries tend to access only a few attributes and that means the columnar representation can be much faster doing these scans, than the row-oriented representation.
In addition to the efficient scans, columnar representation compresses the data well. Each column can use a different scheme for compression. Since the values in a column tend to be similar to one another, compression can be very efficient. For example, if the table is sorted by stock symbol column, a very straightforward way of Run Length Encoding (RLE) can be used to compress this data.
TSLA, TSLA, TSLA, TSLA, TSLA, SQ, SQ, SQ, APPL, AAPL => TSLA x 5, SQ x 3, AAPL x 2
The above 10 values can be stored as TSLA times 3, SQ times 2, and AAPL times 2. If you think about a real-world scenario, this stock table might contain billions of values and we can represent this column as a few thousand stock quotes in the files. Note that we cannot apply the same trick for all the columns like a stock price for example.
In the above example query, we are also doing a filter on symbol and date. So, we can simply look at the first table for stocks and the third table for the date, filter down to the common ROW IDs, and run a binary search(Row IDs are sorted) to pull out the price column values for these Row IDs. That’s Predicate Pushdown right there.
Columnar Data Formats
That’s why these columnar data formats are so powerful because we not only reduce the overhead in retrieving the columns we are looking for, but we also have the advantage of using these filtering restrictions and enhancements very effectively. Particularly when we are searching for a needle in a haystack kind of scenario and then being able to do these efficient scans over the data that we want to pull can save a lot of cost and time.
For a major ride services company, 30 days of ride service records on 1 TB data have resulted in the following using ORC columnar format compared to JSON representation.
- 64% compression
- 52x faster querying
Columnar Formats in Data Lakes
In a data lake, columnar formats can provide orders of magnitude reduction in storage costs and query run time for analytic queries.
Key Idea: Reduce the amount of data accessed per query by limiting the reading only to needed columns.
|
OPCFW_CODE
|
Handwritten Number Recognition Using Image Processing and Neural Network Technique
handwritten number recognition system was developed by using image
processing and neural network technique. The detail is described as
Image Processing Technique
Before the computer can recognize
handwritten number, a set of handwritten number images needed to be
provided to computer to teach her about what does the image mean. The
chain codes approach is used to extract out the image feature
information based on their shape .
The below are the steps used to obtain the chain codes of an image.
- Thresholding - Histogram information are used to obtain the best threshold value automatically
- Scalling - If code word resampling technique are used. This technique can reduce the noises contain in the chain codes
- Thinning - Also known as skeleton. Remove redundant line information
- Chain coding - Convert shape information into a set of numerical number.
Neural Network is like the
human brain model. The image information which has been converted into
a set of numerical number, will be feeded into the neural network.
Neural Network will be trained. The Neural Network will be continuous
improved based on its learning experience.
After a well trained Neural Network has
been constructed, any test image will be processed using the above
mentioned image processing technique. A set of numerical number will be
obtained and feeded into the Neural Network. The output of the Neural
Network is the result of the recognition.
I would like to have handwritten number recognition feature in my own application, do you have any library.
How to search for an image
- Download the numberrecognition-1.0.zip from the Download section.
- Extract the numberrecognition-1.0.zip to any location of your
computer local disk (For example, C:\Program Files\ in Windows) by
using archive tool (In Windows, you may use WinZip. In Linux, you may
use the tar command. In Macintosh, Stuffit expander will expand the
- You will see a new folder named numberrecognition-1.0 is created.
(For example, C:\Program Files\numberrecognition-1.0 in Windows)
- In Windows, double click the numberrecognition.jar in bin folder.
(For example, C:\Program
Files\numberrecognition-1.0\bin\numberrecognition.jar in Windows) In
Linux, you may use the command java -jar numberrecognition.jar
- You will see the following screen.
- Click on Project->Open..., a file chooser dialog box will be
pop up. Go to the location where you extract the archive file and
choose pure_images_loading_with_tokens.xml. (For example, C:\Program
collection of training images will be loaded into the application's
- Click on Neural Network tab, click on Set default button,
followed by Train button. Please note that our intention is to make the
error graph very near to 0.0. If you can't get the result, try to
re-train the neural network by increasing the number of input neurons
(60 for example) or the number of hidden neurons (50 for example).
- Click on Image Recognition tab, write a number within the Drawing
Area, click Recognize button. Please note that your handwritten number
should be similar to the training image found in the Image Tree.
Highlighted features in handwritten number recognition demo
- Save project configuration in XML format.
- Save trained Neural Network
- Drag n Drop support in the image tree
- Multithreading in image processing, neural network training and image recognition
- Able to recognition any objects other than number which used shape as their main identity
- Use code word re-sampling to reduce the noises in the code word
- Support Auto Threshold so that the whole training and recognition
process is fully automated. User needed not to try-n-error to get the
best threshold value
|
OPCFW_CODE
|
using Prototype;
namespace Prototype
{
public class Room : MapSite
{
private int _roomNumber;
private readonly MapSite[] _sides;
public Room()
{
}
public Room(int number)
{
_roomNumber = number;
_sides = new MapSite[4];
}
public MapSite GetSide(Direction direction)
{
return _sides[(int)direction];
}
public void SetSide(Direction direction, MapSite mapSite)
{
_sides[(int)direction] = mapSite;
}
public virtual Room Clone()
{
return new Room();
}
public void Initialize(int n)
{
_roomNumber = n;
}
}
}
|
STACK_EDU
|
8th - 10th April 2019
University of Liverpool
In collaboration with the Institute for Risk and Uncertainty (UK) and Institut für Risiko und Zuverlässigkeit (Germany), we are offering a 3-day training course on Uncertainty Quantification using COSSAN Software.
Structure of the training programme
Each day focuses on a specific topic. This allows the participants to attend a specific training day.
Aims and Learning outcomes
You will learn the main techniques available for dealing with Risk Analysis and Uncertainty Quantification through an easy to use, yet powerful computational software.
Main Concepts and techniques
- Random Variables and Random Variables Sets
- Monte Carlo simulation and advanced simulation techniques (Subset simulation, Line Sampling, Importance Sampling, Latin Hypercube Sampling)
- Global and Local Sensitivity analysis
- Global optimization techniques
- Surrogate Models (Artificial Neural Networks, Response surface, Kriging)
- Reliability based and robust design
|Date:||8th - 10th April 2019|
|Venue:|| Room 502-PCTC-C
502 Teaching Hub (next to the Guild of Students).
University of Liverpool, U.K.
|Time:||Monday 1000 -1600
Tuesday 0900 - 1600
Wednesday 0900 - 1300
|Fee:||Industrial Attendee: £250
Academic Attendee: £125
Day 1: 8th April 2019
|10:00||Introduction||Welcome, aim and structure of the course.
Getting started: basic useful computer commands and remote connection
|10:30||Lecture||COSSAN-X and OpenCossan (Main features and capabilities, toolboxes and wizards)|
|11:15||Practical Session||Installation, getting started and familiarisation with the software.
Construct a model and run a deterministic analysis.
|12:00||Lecture||Introduction to Uncertainty Quantification and modelling of the uncertainties (random variables, stochastic processes, etc.)|
|13:30||Lecture||Basic Monte Carlo method simulation for Uncertainty Quantification and Reliability Analysis|
|13:45||Practical Session||Basic Tutorial: Uncertainty Quantification of a simple cantilever beam model|
|14:30||Lecture||Advanced Monte Carlo methods (Importance sampling, Line Sampling, Subset Simulation) for UQ and Reliability Analysis|
|15:00||Practical Session||Basic Tutorial: Uncertainty Quantification and Reliability Analysis using advanced methods|
|15:45||Wrapping Up||Summary of the day|
Day 2: 9th April 2019
|9:15||Introduction||Welcome and presentation of the day|
|9:30||Lecture||When desktop is not enough (Cliff Addison)|
|10:15||Lecture||Basic Linux concepts and essential commands|
|10:30||Tutorial||Exercises on High Performance Computing|
|13:30||Lecture||Connect COSSAN-X with 3rd party solvers (ABAQUS, NASTRAN, LS_DYNA, etc.) and high-performance capabilities|
|14:00||Practical session||Connect COSSAN-X with your model|
|14:45||Lecture||Introduction to Local and Global Sensitivity Analysis|
|15:30||Practical session||Tutorial: Sensitivity Analysis of simple model and of external model|
|15:50||Wrapping Up||Summary of the day|
Day 3: 10th April 2019
|9:00||Introduction||Presentation of the day|
|9:15||Lecture||Optimisation (Gradient based and gradient free approaches, stochastic optimisation)|
|9:45||Tutorial||Basic Tutorial: Optimization of a simple Cantilever Beam|
|10:30||Lecture||Meta-models (Response surface, Artificial Neural Networks, etc), and Robust and Reliability based optimisation|
|11:00||Tutorial||Tutorial: Meta-models, Robust Optimization and Reliability Based Optimization|
|11:45||Demo||Demonstration of latest COSSAN development|
|
OPCFW_CODE
|
How to avoid modeling errors in NetWeaver BPM? Part 5: Lunatic looping
Business processes sometimes need to process their input in batches. One then faces the challenge to iterate over the elements of some list-valued expression (the “batch”) and trigger some identical activity for each of the contained “line items”. This blog posting discusses how NetWeaver BPM (“Galaxy”) supports scenarios like that with a variety of different modeling approaches. I specifically pinpoint the pros and cons that go along with each of the proposed solutions. We start with plain sequential loops and later proceed to dynamic parallelism, just like in BPMN’s “Multiple Instance Activities”.
And in fact, when strictly sequentially looping over the list, modeling the above sketched scenario is something of a no-brainer in NetWeaver BPM. All one needs to do is add an integer-valued “index” data object to your process model. That index variable needs to get initialized as zero sometime before the looping may commence. A plain decision gateway (“XOR split”) takes care of comparing that index to the size of the batch (which is, in fact, a list-valued [mapping] expression of some sort). As long as it falls below, the “happy path” is taken where the to-be-repeated activity initially extracts the line item (at the current index) from the batch, then processes that line item and finally increments the index by one. Below, I have modeled a plain sequential loop, iteratively processing the line items contained in the “Batch” data object, holding a list of plain strings:
The “Index” data object is initialized to zero as part of the start event’s output mapping. What we will also need to check if we have reached the batch’s “bottom” is a custom function batchSize that determines the size of a string list (a plain string-typed element having an upwards unbounded occurrence)and another function lineItemAtIndex that returns an element of a given string list at the given index.
You can directly define those functions’ signatures in your “Process Composer” project but have to supply the actual Java-based implementation in a separate EJB which you make available (register) in your CE server’s JNDI directory under the specified lookup name.
Creating custom mapping functions is easy and straightforward and here’s a great article that explains in detail how this is done. Just follow the steps described in there and you are all good with your mapping functions.
The actual activity to process individual line items (which may, in fact, be a subflow invocation as depicted here) may then take advantage of that function to extract a line item and map in onto the activity’s signature.
That actitity’s output mapping then needs to increment the “Index” data object by one to continue iterating over the batch.
Sequentially looping over a list-valued expression is easy and there are hardly any mistakes you can make. Just make sure to increment the index in each cycle and break out of the loop as soon as you have processed all line items.
Recommendation: Go for sequential loops if the batch is small, processing an individual line item is fast, and process latencies are not all that critical. It’s the right choice for getting to results quickly.
Whenever you rather want to process line items in parallel, you have to go for different modeling patterns.
The rationale for doing batch operations in parallel is to mutually de-couple line item processing from one another. In this way, you may not only end up with shorter total process turnaround times but also process different line items concurrently which comes in handy to utilize resources more efficiently. For instance, you may dispatch tasks (corresponding to line items) to different people at the same time.
The initial idea of dynamically spawning concurrent flow is to make use of AND split gateways to process a line item in parallel to initiating preparation for the next line item. In order to introduce a private data object for processing each line item and, thus, avoid race conditions, forking that parallel flow happens in a separate subflow:
In there, the first (upper) branch invokes the actual processing of a specific line item whereas the second (lower) branch immediately returns to the invoking (parent) process. In my earlier postings on How to avoid modeling errors in Netweaver BPM? Part 4: Submerge in subflows and How to avoid modeling errors in Netweaver BPM? Part 2: More fun with end events!, I already introduced you to Galaxy’s concept of de-coupling a subflow’s final completion (when all tokens have ceased to exist) to continuing execution in the invoking process (when the first token triggers the subflow’s end event). Above-depicted process is then invoked from a plain sequential loop as shown below:
Asynchronously processing line items is a great way of introducing dynamic parallelism but comes at a price:
- There is no wayof returning ad aggregating result data from processing individual line items. In fact, once the “asynchronously process line item” subflow has returned, no more data may be passed to the outer flow.
- Besides, the outer process will not even notice if and when all line items have completed processing. This is why the “confirm process completion” task was put into the outer process. An end user has to manually confirm that whatever had to happen in the line item processing has, in fact, be successfully completed.
Nevertheless, the afore-sketched pattern is a good way of introducing dynamic parallelism at low cost.
Recommendation: Use dynamic parallelism w/o synchronization whenever (1) you may parallelize line item processing, (2) you do not need to collect and aggregate output data for each line item (line when performing some asynchronous operations), and (3) you can make sure that the outer process does not complete before all line items were fully processed.
You may also use this pattern in an endlessly looping process where (instead of triggering a task), the “no more line items” process branch is redirected to some upstream activity (like to fetch another batch of line items).
Synchronizing Dynamic Parallel Flow
Caution! The approach which is described in this section is not an official statement of what we support or encourage our customers to do. At this point in time, recursive subflow invocations is an experimental feature which can have unexpected side-effects, including a crash of your CE server instance and a failure to recover from that. In particular, avoid “deep” recursive invocations having a stack depth of >10!
The remaining challenge is to also synchronize dynamically forked parallel processing “threads”. This is not only crucial for collecting and aggregating results from each line item’s processing activity but also to only continue executing (or completing) the outer flow when all line items were fully processed.
The idea is to make use of recursively invoking the subflow shown below to
- Extract the line item to be processed by this specific subflow instance (“extract line item”) to a separate “LineItem” data object [optional step];
- Concurrently process a specific line item (“line item processing”) and temporarily materialize the result in a separate data object (“Result”);
- Recursively trigger processing the remaining work items before, incrementing the “Index” data object, and temporarily mapping the result (of all remaining work items) into a data object “Aggregate”;
- Synchronizing both branches and ultimately merging (aggregating) the individual result of this line item (“Result” data object) and the remaining line items (“Aggregate” data object) which is then passed to the invoking process.
The process below sketches the principle behind this dynamic synchronization approach:
Mind that recursive subflow invocation is in many cases not supported. In fact, you may only define recursive invocations for non top-level processes. That is, you do need some outer process initially invoking the recursive subflow in this scenario. Also, this blog posting is not an official statement on Galaxy features that we encourage our customers to use.
So in essence, recursively invoking subflows is inherently evil and originates from the “dark side” of Galaxy. When erroneously used, your process may go hog-wild and even screw up your CE application server.
Recommendation: Only use recursive subflow invocation with extreme caution! If spawning new subflow instances in an uncontrolled fashion, you may end up spending all runtime resources in no time. Nevertheless, the afore-sketched pattern is the only way I can think of to really control and synchronize dynamic parallelism. In detail, it allows you to collect results from each individual line item’s processing which is a frequent pattern in many applications.
When it comes to merging the current line item’s result data (“Result” data object) with the global list-valued “Aggregate” data object, you may make use of an existing mapping feature which lets you choose between different assignment operations. In this particular case, you want to go for an “Append” assignment which adds the “Result” content as the last item to the “Aggregate” list.
Be aware that when ascendingly iterating over the batch (1st, 2nd, …, n-th line item), results will appear in “Aggregate” as n-th, (n-1)-th, …, 2nd, 1st result item. To enforce an identical ordering of line items and corresponding results you may simply start with the n-th line item (“Index”=number of line items in batch minus 1) and decrement by one in each recursive invocation of that subflow.
|
OPCFW_CODE
|
Can a local dataframe be accesses from a SQL chunk in R Markdown
I have run a SQL statements and stored the results in a local data frame
called "test", `{sql connection=Prod, output.var="test"}.
Now, I need to access "test" the local data frame in another R SQL chunk, is that possible?
SELECT COUNT(*) AS 'RecordCount'
, EMPD
, Department
FROM "test"
GROUP BY EMPD
, Department
Generally no. The SQL database server you've connected to doesn't know anything about your R session, so it can't process your R data.
The sqldf package lets you use SQL on R objects by creating a local database. It's a good solution if you're more comfortable using SQL than R, or if you want to do something that's easier in SQL than in R.
In most cases, the point of running a SQL query and storing the results in R is to use R, not SQL, for the next steps. Your example SQL code can be translated to dplyr like this:
library(dplyr)
count(test, EMPD, Department, name = "RecordCount")
If you need to reference the results in a new SQL query, for example to do a join with another table in the database, the best solution will depend on your use case, what flavor of SQL database you're using, and how big the results are. You may be able to use one big SQL query instead of two small ones, or perhaps write the intermediate results to a temporary table.
I was trying to translate a large SQL Server statement where multiple "temp" tables were used, but could not get the data from the SQL chunks in RStudio to write to the SQL Server temp tables. This is why I was separating the queries and storing the results locally acting as temp tables. Some queries joins tables from SQL server and temp tables (in my cases stored locally) this is why I was interested in accessing the locally stored dataframes from the SQL chunk. Not sure why the INTO #temp statement seem to have no effect inside of the SQL chunk but works fine when run in SQL server.
That's all very relevant info that would be good to include in a new question. "Why do my select into #temp statements not work when connecting from R?" Searching for that issue came up with this question that might help. Common table expressions may be a good workaround.
Is it because select into #temp is a local-temporary table that is harvested as soon as the calling connection is closed?
From like 10 years ago I remember a similar issue, and even using temp tables within a single statement failed. I think Chris Gheen's answer here might do it---it says RODBC by default will think the query is complete when the first temp table statement is executed, but gives an option for turning that off.
That's an interesting premise. I have always assumed that the life of a temporary table was prescribed by the DBMS, not by the connection itself. That is, if for some reason RODBC or DBI/odbc hard-fail, then the temp table still goes away, despite the ODBC client's inability to properly clean up after itself. If RODBC is working around that, then ... that sounds like it could be intervening in temp-table operations, which could be difficult considering some require temp tables to be #-leading, others do not, so it's not always clear which is which.
|
STACK_EXCHANGE
|
This will be tough. One problem is that (most, though not all) publishers have taught us to expect a lot for "free". Another is that the world is awash in content, so if you're a publisher, hiding yours behind a pay wall just makes room for someone else to try to have his (ad-supported) day in the sun. Snobs contend, "Water everywhere, but only a few drops (ours) worth drinking." Maybe, but with production and communication costs low, and lots of people out there, there are enough exceptions to disprove the rule. Regardless, focusing on these issues misses the point about where the value for the average reader is today. The future of paid content lies not in the content itself, but in serving two adjacent needs: filtering what's relevant, and helping audiences to use it productively.
Let's look at filtering first, and let's take Twitter as an example. At north of 20 million users, and even with a churn rate fluctuating around 50%, you can't ignore it (and recent research suggests business people are paying attention). The challenge is finding useful tweeters. (Digerati friends please help -- is that what one who tweets is called? Or, is it "tweeps", or "tweeple", or some such?) There are some early stage services probing at this: besides Twitter Search (formerly Summize / monetized via... TBD) and its upcoming "Discovery Engine", there's Hashtags (search by / subscribe to... wait for it... hashtags; monetized via tip jar), Microplaza (tweets from people you follow; monetized via subsidy from parent co, which is an enterprise-focused collaboration platform ASP), Tweetmeme (Digg for Twitter; monetized via sponsorships), Wefollow (like the Yellow Pages of Twitter), plus a half a dozen more I've heard of and tried and doubtless dozens I haven't (see here for more). (Michael Yoon and I are working on one, stay tuned.) Is some refined, scalable version of one or more of these systems worth $2-3 bucks a month to some reasonable sub-segment of the Web-using public? Related memo to Google: it would be worth $2-3 month to me to have Google suggest good posts from my blogroll (I use Google Reader) based on parsing my emails, which it currently does to serve me ads in Gmail.
Second, and perhaps potentially far more lucrative, are services to help audiences do stuff with content. Be an affiliate for schools that sell courses related to the content, for example. Last time I checked, the market for education, particularly online / just-in-time education, was growing at a healthy clip. More simply, offer lectures by content authors / editors and sell tickets to these events, or be an affiliate for others who do that with your content.
My favorite creative approach to segmenting audience needs and monetizing accordingly comes from the musician Jill Sobule, whose http://jillsnextrecord.com/ (scroll down to "A Message From Jill") does a nice job of unpacking all the reasons why folks engage with her music, and then pricing related offers accordingly. Folks wonder about Myspace's future, what with the Google deal expiring soon and all. I wonder: does Jill's approach suggest one path might be to leapfrog Eventful and function as an uber-agent for the bands making their homes on Myspace?
|
OPCFW_CODE
|
Using a grouped z-score over a rolling window
I would like to calculate a z-score over a bin based on the data of a rolling look-back period.
Example
Todays visitor amount during [9:30-9:35) should be z-score normalized based off the (mean, std) of the last 3 days of visitors that visited during [9:30-9:35).
My current attempts both raise InvalidOperationError. Is there a way in polars to calculate this?
import polars as pl
def z_score(col: str, over: str, alias: str):
# calculate z-score normalized `col` over `over`
return (
(pl.col(col)-pl.col(col).over(over).mean()) / pl.col(col).over(over).std()
).alias(alias)
df = pl.from_dict(
{
"timestamp": pd.date_range("2019-12-02 9:30", "2019-12-02 12:30", freq="30s").union(
pd.date_range("2019-12-03 9:30", "2019-12-03 12:30", freq="30s")
),
"visitors": [(e % 2) + 1 for e in range(722)]
}
# 5 minute bins for grouping [9:30-9:35) -> 930
).with_columns(
pl.col("timestamp").dt.truncate(every="5m").dt.to_string("%H%M").cast(pl.Int32).alias("five_minute_bin")
).with_columns(
pl.col("timestamp").dt.truncate(every="3d").alias("daytrunc")
)
# normalize visitor amount for each 5 min bin over the rolling 3 day window using z-score.
# not rolling but also wont work (InvalidOperationError: window expression not allowed in aggregation)
# df.with_columns(
# z_score("visitors", "five_minute_bin", "normalized").over("daytrunc")
# )
# won't work either (InvalidOperationError: window expression not allowed in aggregation)
#df.rolling(index_column="daytrunc", period="3i").agg(z_score("visitors", "five_minute_bin", "normalized"))
For an example of 4 days of data with four data-points each lying in two time-bins ({0,0} - {0,1}), ({1,0} - {1,1})
Input:
Day 0: x_d0_{0,0}, x_d0_{0,1}, x_d0_{1,0}, x_d0_{1,1}
Day 1: x_d1_{0,0}, x_d1_{0,1}, x_d1_{1,0}, x_d1_{1,1}
Day 2: x_d2_{0,0}, x_d2_{0,1}, x_d2_{1,0}, x_d2_{1,1}
Day 3: x_d3_{0,0}, x_d3_{0,1}, x_d3_{1,0}, x_d3_{1,1}
Output:
Day 0: norm_x_d0_{0,0} = nan, norm_x_d0_{0,1} = nan, norm_x_d0_{1,0} = nan, norm_x_d0_{1,1} = nan
Day 1: norm_x_d1_{0,0} = nan, norm_x_d1_{0,1} = nan, norm_x_d1_{1,0} = nan, norm_x_d1_{1,1} = nan
Day 2: norm_x_d2_{0,0} = nan, norm_x_d2_{0,1} = nan, norm_x_d2_{1,0} = nan, norm_x_d2_{1,1} = nan
Day 3: norm_x_d3_{0,0} = (x_d3_{0,0} - np.mean([x_d0_{0,0}, x_d0_{0,1}, X_d1_{0,0}, ..., x_d3_{0,1}]) / np.std([x_d0_{0,0}, x_d0_{0,1}, X_d1_{0,0}, ..., x_d3_{0,1}])) , ... ,
I think the by argument in groupby_rolling is what you are looking for: https://pola-rs.github.io/polars/py-polars/html/reference/dataframe/api/polars.DataFrame.groupby_rolling.html#polars.DataFrame.groupby_rolling. Set by to five_minute_bin and create a date column and use that as the argument for index_column.
@jvz you mean like this?
df.groupby_rolling( index_column="daytrunc", period="3i", by="five_minute_bin" ).agg( ((pl.col("visitors") - pl.col("visitors").mean()) / pl.col("visitors").std()).alias("norm") )
How would I get the original dataframe format from there with the normalized visitors column?
They key here is to use over to restrict your calculations to the five minute bins and then use the rolling functions to get the rolling mean and standard deviation over days restricted by those five minute bin keys. five_minute_bin works as in your code and I believe that a truncated day_bin is necessary so that, for example, 9:33 on one day will include 9:31 both 9:34 on the same and 9:31 from 2 days ago.
days = 5
pl.DataFrame(
{
"timestamp": pl.concat(
pl.datetime_range(
datetime(2019, 12, d, 9, 30), datetime(2019, 12, d, 12, 30), "30s", eager=True
)
for d in range(2, days + 2)
),
"visitors": [(e % 2) + 1 for e in range(days * 361)],
}
).with_columns(
five_minute_bin=pl.col("timestamp").dt.truncate(every="5m").dt.to_string("%H%M"),
day_bin=pl.col("timestamp").dt.truncate(every="1d"),
).with_columns(
standardized_visitors=(
(
pl.col("visitors")
- pl.col("visitors").rolling_mean_by("day_bin", window_size="3d", closed="right")
)
/ pl.col("visitors").rolling_std_by("day_bin", window_size="3d", closed="right")
).over("five_minute_bin")
)
Now, that said, when trying out the code for this, I found polars doesn't handle non-unique values in the by-column in the rolling function correctly, so that the same values in the same 5-minute bin don't end up as the same standardized values. Opened bug report here: https://github.com/pola-rs/polars/issues/6691. For large amounts of real world data, this shouldn't actually matter that much, unless your data systematically differs in distribution within the 5 minute bins.
|
STACK_EXCHANGE
|
Department of Computer Science
Rutgers, The State University of New Jersey
New Brunswick, NJ
67 Cobalt Lane
Westbury, NY 11590
Phone: (516) 338-2706
- Ph.D. in Computer Science, Rutgers University, New Brunswick, Spring 2010.
Thesis: Data Privacy in Knowledge Discovery
Advisor: Professor Rebecca N. Wright
- M.S. in Computer Science, Stony Brook University, Stony Brook, NY, Fall 2003.
Thesis: A Study of the Sum of Squares Heuristic for Variations of the Bin-Packing Problem
Advisor: Professor Michael A. Bender
- Ph.D. in Mathematics, Indian Institute of Technology, Madras, May 1994.
Thesis: A study of the singularity method for steady and unsteady linearized viscous flows
Advisor: Professor A. Avudainayagam
- M.S. in Mathematics, Indian Institute of Technology, Madras, May 1990.
Thesis: Solitons Theory
Advisor: Professor A. Avudainayagam
- B.S. in Mathematics, University of Madras, India, May 1988.
- Post doctoral researcher, Department of Computer Science, Columbia University. I work in data privacy with Prof. Tal Malkin.
- Graduate Assistant, Department of Computer Science, Rutgers University, NJ. I work in data privacy under the direction of Prof. Rebecca Wright. The work was funded by NSF through the PORTIA project.
- Research Assistant, Department of Computer Science, Stevens Tech., NJ. I worked in Cryptography and Data Privacy under the direction of Prof. Rebecca Wright. The work was funded by NSF through the PORTIA project.
- Teaching Assistant, Department of Computer Science, Stony Brook, NY. I was a lab coordinator for the first course in programming. Also, I taught the course independently over a summer session.
- Web developer, RightFreight, Inc., New York, NY. My core project involved the creation of the software infrastructure for this startup company. I single-handedly wrote the kernel for the first version of the system in Java, which has since undergone revision.
- Postdoctoral Researcher, Department of Physics, Hofstra University, NY. I worked at the Center for Arrhythmia on computational models for cardiac phenomena. Using differential equations we modeled the behavior of cardiac tissue prior to and during fibrillation. Models were programmed and analyzed in Java, C++ and in Microsoft Excel.
- Assistant Professor, Indian Institute of Technology, Madras, India. I taught undergraduate students who majored in various disciplines of engineering, and graduate students in mathematics. In addition, I performed preliminary research on the modeling of some fluid dynamics problems using hybrid finite element methods. This involved modeling and computationally solving differential equations.
- Research Scholar, Chennai Mathematical Institute, Chennai, India. I studied Lie algebras and other related topics in preparation for doing research in quantum groups. In addition, I studied elliptic curves in connection with the congruent number problem. I have an interest in algebraic number theory in general.
- Lecturer, Venkateswara College of Engineering, Madras, India. I taught undergraduate computer, electrical and mechanical engineering students, and graduate students in the Masters in Computer Applications program. I performed research in analyzing and solving differential equations using techniques such as wavelet and Fourier transforms.
My research lies in the general area of Trustworthy Computing, with an emphasis on privacy-preserving data analysis and secure methods for distributed computation. The purpose of my research is to develop algorithms, protocols and theories for preserving the privacy of individuals and institutions when their data is released for public use or when their data is used in the computation of aggregate structures. My interests are currently focussed on practical methods for: (i) constructing utility efficient data mining techniques from differentially-private summaries, (ii) differentially-private anonymization of graphs such as social networks,
(iii) differentially-private release of time series and (iv) differential privacy for distributed data. Much of my research involves creating new machine learning/data mining algorithms that preserve privacy.
With a doctoral degree in Mathematics, and a second one soon in Computer Science, I have the ability to teach a wide range of courses in the undergraduate and graduate levels. I strongly believe that a person with a Ph.D. in Computer Science should be able to teach almost any undergraduate course in the discipline, and certainly the fundamental computer science courses in programming, discrete mathematics, data structures, algorithm analysis and design, operating systems and computer organization. My research interests lie broadly in the areas of algorithms, computational complexity and cryptography. Correspondingly, my teaching interests are more focused in computational complexity, computability theory and cryptography at the graduate level. However, I am fully capable of also teaching courses in Probability, Machine Learning, Data Mining, and Databases.
- Anonymizing Databases for Regression, with K. Pillaipakkamnatt and R.N. Wright. To be submitted to KDD 2010, in preparation.
- A Practical Differentially Private Random Decision Tree Classifier, with K. Pillaipakkamnatt and R.N. Wright. Proceedings of the ICDM International Workshop on Privacy Aspects of Data Mining, 2009. Invited to appear as a journal paper in Transactions on Data Privacy.
- Communication-Efficient Privacy-Preserving Clustering, with K. Pillaipakkamnatt, D. Umano and R.N. Wright (sent for second review, Transactions on Data Privacy).
- Privacy-preserving imputation of missing data, with R.N. Wright. Data and Knowledge Engineering 65(1): 40-56 (2008)
- A Secure Clustering Algorithm for Distributed Data Streams, with K. Pillaipakkamnatt and D. Umano, Proceedings of the ICDM International Workshop on Privacy Aspects of Data Mining, 2007.
- Private Inference Control For Aggregate Database Queries, with R. N. Wright, Proceedings of the ICDM International Workshop on Privacy Aspects of Data Mining, 2007.
- Sum-of-squares heuristics for bin packing and memory allocation, with M.A. Bender, B. Bradley, and K. Pillaipakkamnatt. ACM Journal of Experimental Algorithmics 12: (2007)
- Privacy-Preserving Data Imputation, with R.N.Wright, Proceedings of the ICDM International Workshop on Privacy Aspects of Data Mining, 2006.
- A New Privacy-Preserving Distributed k-Clustering Algorithm, with K. Pillaipakkamnatt and R. N. Wright, Proceedings of the 2006 SIAM International Conference on Data Mining, 2006.
- Privacy-Preserving Distributed k-Means Clustering over Arbitrarily Partitioned Data, with R. N. Wright, Proceedings of the 11th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2005.
- The Robustness of the Sum-of-Squares Algorithm for Bin Packing, with M.Bender,et al. ALENEX/ANALC 2004: 18-30.
- Alternans and the onset of ventricular fibrillation, with Harold M. Hastings, et al. Physical Review E. Volume 62, 2000, pp 4043-4048.
- One the Image System of Certain Line Singularities in the Vicinity of a Circular Cylinder, with A. Avudainayagam. MechanicsResearchCommunications. Volume25,1998,pp 25-32.
- A Boundary Integral Equation Formulation for the Two Dimensional Oscillating Stokes Flow Past an Arbitrary Body, with A. Avudainayagam. Journal of Engineering Mathematics. Volume 33, 1998, pp 251-258.
- A Necessary Condition for the Existence of Plane Stokes Flows Around An Ellipse, with A. Avudainayagam. Canadian Applied MathematicsQuarterly. Volume 3, 1995, pp 237-251.
- Oscillating Line Singularities of Stokes Flows, with A. Avudainayagam. International Journal of Engineering Science. Volume 31, 1995, pp 1295-1299
- Unsteady Singularities of Stokes Flows in Two Dimensions, with A. Avudainayagam. International Journal ofEngineering Science. Volume 33, 1995, pp 1713-1724.
- Oscillating Stokes Flows in Two Dimensions, with A. Avudainayagam. Mechanics Research Communications. Volume21, 1994, pp 617-628.
- Introductory Programming
- Discrete Mathematics
- Probability and Statistics
- Data Structures
- Compiler Construction
- Automata Theory
- Numerical Methods
- Engineering Mathematics
- Fluid Dynamics
Honors and Grants
- Stevens Institute of Technology, Department of Computer Science, Outstanding Graduate Student Award
- “Finite Element Analysis of Navier-Stokes Equations,” awarded by the Indian Institute of Technology, Madras. Rupees 100,000
- CMI fellowship (1997-1998), awarded by the Chennai Mathematics Institute, Madras, India
- CSIR fellowship (1992-1994), awarded by the Council of Scientific and Industrial Research, India
- IIT fellowship (1990-1992), awarded by the Indian Institute of Technology, Madras, India
- National Merit Scholarship (1983-1988), awarded by the Government of India
- Professor Rebecca N. Wright, Department of Computer Science, Rutgers University, New Brunswick, NJ. Email: Rebecca.Wright@rutgers.edu
- Professor Danfeng Yao, Department of Computer Science, Rutgers University, New Brunswick, NJ. Email: firstname.lastname@example.org
- Professor Michael A. Bender, Department of Computer Science, Stony Brook University, Stony Brook, NY. Email: email@example.com
|
OPCFW_CODE
|
Add progress indicators to maps
Here's some mocks:
Hey @moT01 I'm sure you probably discussed this in meetings and such, but could you give a brief overview of what the purpose of this feature is for? It would help in determining the accessibility issues that need to be considered.
Mainly, to clarify the order of how to go through the curriculum. It also splits the superblocks/buttons up into sections, which gives some more context to the buttons. It also shows a small overview of your progress - e.g. if you earn a cert, the number turns into a cert icon (tentative plan anyway).
How does this feature affects https://github.com/freeCodeCamp/freeCodeCamp/issues/41403?
Here's some more details of what I think we want here. We want the superblock buttons arranged into four (five with upcoming) groups:
Stage 1: Front End Development
1 Responsive Web Design Certification
2 Javascript Algorithms and Data Structures Certification
3 Front End Development Libraries Certification
4 Data Visualization Certification
Stage 2: Back End Development
5 Relational Database Certification
6 Back End Development and APIs Certification
7 Quality Assurance Certification
8 Information Security Certification
Stage 3: Python & AI
9 Scientific Computing with Python Certification
10 Data Analysis with Python Certification
11 Machine Learning with Python Certification
12 College Algebra with Python Certification
Stage 4: Extra Learning
- Coding Interview Prep
- Project Euler
- Legacy Responsive Web Design
Upcoming Curriculum
- Javascript Algorithms and Data Structures (Beta) Certification
- The Odin Project
- Example Certification
Note that the Upcoming Curriculum section is hidden in production.
The first three stages above should have icons next to them, similar to this:
If they have the certification, they get the icon with the ribbons. If not, they get the icon without.
The last two groups of buttons do not need any icons.
The buttons can then be the full width. There is potentially something on this issue and its associated PR we can use for that icon.
Between the numbered icons should be the dashed arrows, as in the image.
The icons within the buttons should stay on the left.
Some questions:
Do we want different icons next to the last two groups, maybe continue with the numbers or just plain circles, so the buttons are the same width?
What do we want for the heading of stage 4?
Does this all sound good @ahmadabdolsaheb? Is there anything else you can add?
I kind of envision the semantics as follows:
Each stage heading would be an h2 and would need an id attribute
The list of courses underneath each heading would be an ol or ul (depending if they are numbered or not)
Each list would be named using aria-labelledby pointing to the id of its heading
We would need to add the appropriate sr-only text so that screen reader users hear the number, the status, and the name of the course. For example, "Course 1, Responsive Web Design Certification, Completed", or "Course 2, Javascript Algorithms and Data Structures Certification, not completed".
Since these numbers are just the suggested order for beginners and do not have to be followed, I would recommend we add a brief paragraph explaining that above the first stage heading.
I will bring up the paragraph at the next meeting. I don't personally like it on the landing page - maybe the /learn page. But I think you might be onto something. If we need a paragraph to explain what the things mean, maybe we aren't communicating it good enough through the UI. We are trying to communicate the suggested order.
@bbsmooth, I understand your concern. When users signup, they are shown a few paragraphs. One of them says the followings:
"If you are new to programming, we recommend you start at the beginning and earn these certifications in order."
I am making that whole section a bit more concise, so we might be able to squeeze a sentence there to make things clearer.
@moT01, Thanks for putting requirements together. I don't think we should have a stage 4. Might also be a good idea to give the Legacy Responsive Web Design the ribbon since it is/was a certification.
@moT01, I was wondering if there are any updates for the requirements ?
No @ahmaxed. We don't necessarily have every detail figured out, but I think we're close enough that we can open this up to contributors and refine things on the PR.
I could mention that I made a mock of this some time back, you can see the code here.
It looks like this
Note that this is not how we want it to look - we want it to look like we discussed above. But some of the code may be able to be reused - not sure. It may be better to just start from scratch.
So which design are we going to move on with ? IMHO the very first design looks nicest.
We decided on the one in this comment @CallmeHongmaybe. I think we can change it in the future if we want. The important thing we want to try and communicate with this is the suggested order of the curriculum. The numbers should do that, regardless of what design we use.
Thanks for replying back. I was wondering where you made the mock for the suggested design, just to be sure not to do any duplicate work and waste anyone's time :)
The mock for the suggested design is an image created by @ahmaxed - so the code for that isn't created yet. I don't see a whole lot that can be reused from the code I shared, maybe some of the map logic/rendering and possibly some of the CSS.
Thanks. I'll see where I can add for the mock.
So here's the rough draft built with here. Let me know what should be changed in the design, and how do we retrieve data for the users' number of fully completed lessons and the users' current certification ?
Thanks for working on this feature, @CallmeHongmaybe. Here is the design we are going with: https://github.com/freeCodeCamp/freeCodeCamp/issues/50412#issuecomment-1585086022
Please use the same font state title, decorate the completion state accordingly and use a dashed line for the arrows.
The state of the certifications could be pulled from the userSelector from redux like this: https://github.com/freeCodeCamp/freeCodeCamp/blob/120ad721a3f96431adc1c3f1ba0718a667b82bc0/client/src/client-only-routes/show-settings.tsx#L24
@ahmaxed if you don't mind can you provide me the svg or the CSS code of the ribbon ?
Also, is it true that only signed-in users can see the ribbons ?
Yes, only signed in users with completed certifications see the ribbons.
https://github.com/freeCodeCamp/freeCodeCamp/pull/49717
Here is the ribbon SVG:
<svg xmlns="http://www.w3.org/2000/svg" width="45" height="50" viewBox="0 0 45 50" fill="none">
<path d="M25 35.3418L35.4851 28L44.5957 41.0113L36.2658 39.7151L34.1106 48.353L25 35.3418Z" fill="black"/>
<path d="M9.11059 29L19.5957 36.3418L10.4851 49.353L8.85418 41.0821L-4.67677e-07 42.0113L9.11059 29Z" fill="black"/>
<circle cx="21.9999" cy="21" r="20" stroke="black" stroke-width="2"/>
<circle cx="21.9999" cy="21" r="17.5" fill="black" stroke="white" stroke-width="3"/>
<path d="M23.9709 13.4545V28H22.2095V15.3011H22.1243L18.5732 17.6591V15.8693L22.2095 13.4545H23.9709Z" fill="white"/>
</svg>
The last path is the number. Feel free to remove or replace it a dynamic number.
Thank you.
I've tried the steps below to seed the database with certified user but when I opened the localhost, the page shows nothing and when I click the menu button the Profile <li> isn't even there.
pnpm run create:config
pnpm run seed:certified-user
pnpm run develop
You probably need to sign in.
If you cannot find what you are looking for in the documentation, feel free to ask for help in:
The Contributors category of our community forum.
The #Contributors channel on our chat server.
Unfortunately you can't sign in unless you sign out, and when you sign out same error happens.
Will have this discussed on the chat server.
It was in the shuffle, but I liked @bbsmooth suggestion that we call them "Track 1" etc instead of "Stage 1", especially since many of these are fine to do in a different order.
Also, one of these days we should really revisit the "Scientific Computing" name of that python certificate, because its contents absolutely are not "Scientific Computing".
Okay I'll change the titles from Stage to Track.
And is it appropriate to rename "Scientific Computing" to "General Python Programming" ?
Great naming suggestions. There might already be a renaming in the works for the Scientific Computing. Since the renaming is a bit involved for a certification, let's make a separate issue to track the feedback the progress on that.
We can move forward with "Stage" for now until we hear back from @QuincyLarson about using "Track".
I would like to work on this
There's a PR with quite a bit of work done on it for this @Amit-Morade. Perhaps you could pick up where that left off - and maybe try to communicate with @CallmeHongmaybe to see where he left off.
Otherwise, you can start from scratch if you want. This has a help wanted label, so we will accept a PR that makes these changes. Be sure to check out our contributor guidelines if you haven't already.
Hi there. Are a lack of arrows a deal breaker?
@a2937, not a deal breaker. However, Quincy requested the arrows to emphasize the order of completion a bit more.
I was having a ton of trouble with the arrows.
|
GITHUB_ARCHIVE
|
Problem with UIImagePickerController
I'm having a problem with the UIImagePickerController being presented with presentModalViewController. As soon as the view displays (be it camera or photo album), the app crashes.
The whole UI is created in code, no interface builder. This has only stopped working since I've been updating the code to run on ios4. Using leaks, I can't find any, and the total memory allocation I'm getting is around 5mb.
Here's the code that I'm using to present the camera picker -
UIImagePickerController *imagePicker = [[UIImagePickerController alloc] init];
[imagePicker setSourceType:UIImagePickerControllerSourceTypeCamera];
[imagePicker setDelegate:self];
[imagePicker setAllowsEditing:YES];
[imagePicker setCameraCaptureMode:UIImagePickerControllerCameraCaptureModePhoto];
[self presentModalViewController:imagePicker animated:YES];
[imagePicker release];
And the delegates as follows -
-(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
UIImage *selectedImage = [info objectForKey:UIImagePickerControllerEditedImage];
UIImage *newImage = [self createGameImage:selectedImage];
[gameOptionsImageDisplay setImage:[self resizeImage:newImage toSize:CGSizeMake(95, 95)]];
[self dismissModalViewControllerAnimated:YES];
[mainView setFrame:CGRectMake(0, 0, 320, 480)];
}
- (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker
{
[self dismissModalViewControllerAnimated:YES];
[mainView setFrame:CGRectMake(0, 0, 320, 480)];
}
Setting NSZombiesEnabled to YES is telling me -
*** -[UIImage isKindOfClass:]: message sent to deallocated instance 0x142fc0
And the stack trace is as follows -
0 0x313f7d7c in ___forwarding___
1 0x3138a680 in __forwarding_prep_0___
2 0x3166dad2 in -[UIImageView(UIImageViewInternal) _canDrawContent]
3 0x3166c652 in -[UIView(Internal) _didMoveFromWindow:toWindow:]
4 0x3166c50e in -[UIView(Internal) _didMoveFromWindow:toWindow:]
5 0x3166c50e in -[UIView(Internal) _didMoveFromWindow:toWindow:]
6 0x3166c50e in -[UIView(Internal) _didMoveFromWindow:toWindow:]
7 0x3166c50e in -[UIView(Internal) _didMoveFromWindow:toWindow:]
8 0x3166aa8a in -[UIView(Hierarchy) _postMovedFromSuperview:]
9 0x31672df6 in -[UIView(Hierarchy) removeFromSuperview]
10 0x316d76ee in -[UITransitionView _didCompleteTransition:]
11 0x31754556 in -[UITransitionView _transitionDidStop:finished:]
12 0x316bc97a in -[UIViewAnimationState sendDelegateAnimationDidStop:finished:]
13 0x316bc884 in -[UIViewAnimationState animationDidStop:finished:]
14 0x33e487c0 in run_animation_callbacks
15 0x33e48662 in CA::timer_callback
16 0x313caa5a in __CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__
17 0x313ccee4 in __CFRunLoopDoTimer
18 0x313cd864 in __CFRunLoopRun
19 0x313768ea in CFRunLoopRunSpecific
20 0x313767f2 in CFRunLoopRunInMode
21 0x329f36ee in GSEventRunModal
22 0x329f379a in GSEventRun
23 0x316692a6 in -[UIApplication _run]
24 0x31667e16 in UIApplicationMain
25 0x00002726 in main at main.m:14
If anybody can help me out here, I would be eternally grateful!
Thanks,
Stewart
I'd like to know a little more about your view controller. There's a UIImage that looks like it's being double released... are there any UIImage properties in your view controller?
Also, I'm sure you've handled this, and it would lead to a different error message, but you are aware that you cannot use the camera on the simulator? Best to test for the user's capabilities, otherwise it will generate an exception on an iPod Touch.
Also, can you confirm via breakpoints that the code never gets as far as your delegate callbacks?
I think the problem is in the parent UIViewController. You say you create it programmatically. Can you show the code?
@phooze - Yeah, I'm testing whether or not there's a camera available before the option is displayed and that works fine.
@St3fan - The code which is calling the picker is a subclass of UIViewController, and it's created in the app delegate and added to the window.
I've since replaced the image picker with a new UIViewController with a single UIView in it with a nice red background colour and while the view displays on presentModalViewController, it still crashes as soon as it's completed presenting in the window. BAH!
Stewart, thanks for the details. Also want to see your UIViewController subclass code as well.
Problem solved. After going through my view controller code and trial and error with releasing objects, I finally found the culprit, but that leads me to a question... Is loadView called when a modal view is presented on top? – Stewart Zollinger 0 secs ago edit
I was releasing an image that was also being autoreleased. On the odd occasion, the code would work, but most of the time it was falling down. In 3.2, this hadn't caused me any errors or shown up, so the chances are it wasn't being autoreleased while I was still needing it. Lots of trial and error found it.
|
STACK_EXCHANGE
|
Short story about African missionary couple losing his mind, making mud statue
My mom remembers reading a short story published in a college anthology at least before 1980, she thinks it was her father's so maybe published 30's or 40's. It was about a missionary couple to Africa and the man going insane/tribal, the wife giving him back the wedding ring and leaving him, and specifically the man making a mud statue that incorporated the wedding ring. We confirmed that she is NOT thinking of the Poisonwood Bible.
An African missionary couple and he? Or they?
The Man Who Saw Through Heaven by Wilbur Daniel Steele, 1946
Reverend Hubert Diana has disappeared. He was last seen traveling with a group of women including his fiance. The narrator was on the trip with them as well when his friend Mr. Krum introduced Hubert to astronomy and many constellations in the sky. Hubert became so interested and started asking questions about science and theories, that his fiance suspected it affected his teachings in the church. As he has been teaching less of the Bible and a little bit more of his theory that he learned from Krum. Now Hubert has left to travel and be a missionary, but his fiance becomes worried about his well-being as he has changed his teachings quite a bit. He began speaking of tentacles, rings, dimensions and other strange concepts that were more secular in the eyes of the church. The Narrator and Mrs. Diana are now on a journey looking for Hubert in East Africa. They hear stories from their guide that he was brought to a village thought to be a missionary. While Hubert was there he was making dolls and figures from mud and was heard talking about the new theories he began teaching in the church. The villagers would follow him and his ideas, but some people, such as the narrator's guide did not like the ideas and especially did not like African people to interact with the reverend. When the pair do learn of Hubert's whereabouts it is too late. They learn that he has died and was buried five weeks prior. The pair mourned and prayed over their lost friend.
From this analysis:
At the place where
Hubert planned to found a Christian mission, he begins enacting the history of religion from the start by
making mud images that resemble the tentacled creature suggested by Krum, evoking the earliest human
images of divinity as monstrous: “a religion in the making, here before our eyes.” But his iconoclasm is
taboo in this culture too. “Primitive societies without religion have never been found.” (William Dean
Howells) Ironically, these black primitives are more advanced than Hubert the white man.
Wikipedia (https://en.wikipedia.org/wiki/Wilbur_Daniel_Steele) says this story appeared in Harper's in 1925 and reprinted in a book in 1927.
|
STACK_EXCHANGE
|
On 2/12/06, Mikus Grinbergs <mikus(a)bga.com> wrote:
In list.sx64, you wrote on Sun, 12 Feb 2006 11:40:38
following the instructions from here:
I downloaded the driver from ati.com
. Actually I now have both 8.21.7
and 8.22.5 versions (x86_64) versions.
Anyway, both of them, when invoked with --get-supported display only
different versions of Ubuntu, Debian and Mandriva. Suse and RedHat are
not displayed, so I can not pre-build rpms for SuSE.
Can anyone confirm this, before I get over ati :), or my system (10.0
x86_64, apt-getted to the last) is confusing the installer for some
Had a hard time understanding your question.
According to the instructions from suse site, running the install
script with --get-supported option should list all supported
distros/versions, incl. SuSE 10.0. Unfortunately, this installer from
ati does not list SuSE at all. That's my confusion, as I'd prefer to
use this way of installing the driver, with rpm.
I decided (correctly or incorrectly) that the pre-built version
from ATI (12 MB or so) did not have the best-fit to my SuSE 10.0
(64 bit) system. Instead, I downloaded the installer version
from ATI (33 MB or so). I then executed *that* file to install
the ATI driver on my system.
[I've actually done this twice - once on a SuSE 10.0 (32 bit)
system, where the fglrx driver runs fine. A second time on a
SuSE 10.0 (64 bit) system, where the install ran, but I have
not yet rebooted to "switch" to the fglrx driver. (I'm waiting
for a 24/7 application to crash <or finish, in two months>
before doing the reboot.) ]
p.s. The compile of the "kernel" portion of the ATI driver had
an error code, but at least on my 32 bit system the whole
ATI fglrx driver works anyway.
pps. At an earlier time, when I was upgrading the 32 bit system's
kernel, the ATI-supplied script (for such a situation)
*properly* recompiled the "kernel" portion of the ATI driver.
But this time, the ATI-supplied script to "uninstall" the ATI
driver left quite a few files still on my hard disk. To be
safe, I manually deleted them before doing the latest version
Thanks, I would prefer to wait a little before I bite the bullet and
with that kind of install. At least until someone from the list
confirms that this is OK.
Svetoslav Milenov (Sunny)
|
OPCFW_CODE
|
How reliable is to use client-side XSLT in mobile browsers
I know pretty well the state of xslt support by major desktop browsers. In short, this support is quite decent. And what major mobile browsers. Do they support client side XSLT? Are there any pitfalls and/or limitations?
AFAIK SaxonCE (XSLT 2.0) has been demoed 4 months ago working on an iPhone. It probably works on all major mobile browsers, as it is cross-compiled to Javascript.
What is SaxonCE? A javascript library?
Saxon-CE is an XSLT 2.0 processor developed by Saxonica, it includes extensions for Javascript and DOM interoperability. Saxon-CE is implemented in JavaScript and deployed on the web host server like any other JavaScript library. Two HTML Script elements are used, the first references the Saxon-CE JavaScript library, the second declares the XSLT entry-point ('data-source' or 'data-initial-template'). More details on the Saxonica site.
@pgfearo, helas, in that case it is not the answer to my question. I've asked about pure browser support.
@shabunc I haven't tried to strictly answer the main question (just the 'What is SaxonCE' query in your comment), perhaps you should clarify what you mean by 'pure browser'. Are you saying that something like jQuery isn't a 'pure browser' solution either? Saxon-CE still runs on the client - the browser downloads the required JavaScript the first time the HTML page is loaded - thereafter the cached version is used.
@pgfearo - imagine javascript html or css parser - it definitely can be done. nevertheless, there is such thing like native support. so, here is my clarification)
I should probably make this a comment since I haven't done this in over a year but client side XSLT was poor then, sadly, and nowhere near as good as the desktop. I don't think it's improved enough today. It's the reason I won't consider its usage now and it's a shame on all browser vendors that they don't offer great XML family support in all areas.
what exactly problem have you encountered? Off the top off my head, if we are talking about basic, core xslt functionality, few years ago there were some problems with client side implementation of xsl:key.
@shabunc I wish I could remember but I was frustrated when some basic working code was not supported in Android.
Android doesn't support it at all currently. Though, Firefox and Opera added support to it. Though, most users won't install a new browser to it, but use the standard one coming with the phone ... which doesn't support it.
|
STACK_EXCHANGE
|
One of the new projects that I’m working on involves a messaging infrastructure in Erlang. Without boring you with the details, the basic idea is that there are two types of messages, A and B and these are both sent to a thread (or a *process* in Erlang). One A must be paired with one B before the A and B can be discarded. Performance is an issue so this pairing must be fast. Several approaches were developed in trying to make this go very fast:
1. The first approach is also the dumbest. All A and B messages are sent to the same process. The process deals with these messages in a fifo order and thus must internally maintain queues of As and Bs, matching them as becomes possible. The problem with this is that flooding the process with As or Bs cripples performance because of processing the queue in fifo order. So a DoS attack is very possible and would often accidentally occur (most traffic is bursty).
1. The next idea is to not process the messages in a fifo order. This is possible because of Erlang’s *receive* statement which can do pattern matching on the messages in the queue. The problem is that the process must scan every message in the queue in a fifo order each time through the receive block until it finds one that matches. So whilst the DoS attack can’t happen in the same way, it can still cripple performance as the message queue must be scanned until you reach the end and find the first message of the *other* type so that the pairing can occur. So this cripples performance again.
1. Use three processes. Send messages of one type to one process and of the other type to another process. These processes are then *buffers*. They process messages in a fifo order (which is fast) and put all the messages they receive into an internal queue. The third process (the *pairer*) then sends *fetch* messages to these buffers which then send a predetermined number of messages in their internal queue to the pairer. The pairer then scans its messages in a non-fifo way but because the message queue is effectively bounded, there is no significant performance drop. The points are these:
* There is potentially a DoS attack possible which prevents the buffers from receiving the *fetch* message. However, firstly by processing the message queue in a fifo order it *will* be received, just possibly not immediately; and secondly, the buffers simply remove A or B messages from their message queues and place them in an internal queue, thus there is very little processing done. So getting through several thousand A or B messages does not take and real amount of time.
* The pairer only asks for more messages (via *fetch* messages sent to the buffers) iff it has exhausted its quota of messages: i.e. it asks for *n* messages from each buffer and will only ask for more if it has received *n* messages from each buffer. This means that its queue will never be more than 2 *n* messages long.
* Careful testing reveals that *n* = 7 is the best performance on the hardware I have available. This balances the cost of the pairer processing messages out of order with the extra round trip to the buffers. Lower *n* means that there are too many round trips sending *fetch* messages to the buffers but the non-fifo processing is really cheap. Higher values of *n* mean the pairer’s message queue gets too big so the non-fifo processing costs too much.
* By using three processes, it can make good use of thread level parallelism hardware. The two buffers can get through their message queues in parallel as can the pairer. There’s also therefore the possibility of distribution across multiple machines.
* It’s very possible to alter the pairer so that it sends the *fetch* messages after it has achived *m* pairings where *m* < *n*. This means the pairer's message queue would be a maximum of 2 *n* + 2(*n* - *m*) messages long but would reduce the delay between the pairer sending the *fetch* messages and the buffers being able to receive the *fetch* message and send messages on to the pairer - effectively the *fetch* message ends up higher up in the buffers' message queues. It's quite ironic that a language that seems, at least on the face of it, ideal for messaging-type applications turns out to have implicit semantics that, if not hindering, at least make you think much harder when trying to implement certain requirements...
|
OPCFW_CODE
|
**DolphinIOS: Adding AR Codes (as of Version 2.2.0 (JB: 103 / NJB: 105) and Patreon Version 3.0.0 (89)**
Guide written by JackFusion★#9669
This Pastebin will describe what is needed and how to add Action Replay codes to DolphinIOS. This guide will be a lot more in-depth compared to that of the pinned message in the #support. For this example, I will be using Mario Party 7 (NTSC-U). For those who may need visualisations, I will have some linked at the correct spot with images hosted on Imgur. If you notice part of this guide is incorrect, needs rewording or a complete rework, please notify me directly on Discord. This is my first ever fully written and detailed guide, so it may be confusing. If you need anything clarified, feel free to ask me to rework or reword any portion of this personally.
NOTE: AS OF THE CREATION OF THIS TUTORIAL GECKO CODES ARE **NOT** FUNCTIONAL AND WILL **NOT** WORK. PLEASE BE CAREFUL WHEN ASKING FOR SUPPORT AS YOU MAY BE USING A GECKO CODE RATHER THAN AN AR CODE.
- Oats and Simon for porting Dolphin to iOS and generally being good friends
- Those who contribute to the development via commits to the source code to make it even greater
- Anyone who helps people in #support to the best of their ability
DiOS Website: https://dolphinios.oatmealdome.me/
Developers' Discord: https://discord.gg/rdx6Bt8
Developers' Patreon: https://www.patreon.com/oatmealdome
The first thing you need to begin adding your cheats is the ID of your game. Depending on your version, there are two ways to find this.
Patreon Version 3.0.0 (89): Simply hold the cover art of the game of your choice until it gives you the option to delete it from your menu. Above the delete button, the game title and ID will be shown. (Seen here: https://imgur.com/a/xjLkZKa)
Version 2.2.0 (JB: 103 / NJB: 105): As the above option is only for Patreon testing currently, your best bet is to visit https://wiki.dolphin-emu.org/ and search your game in the search bar. Below the cover art for the game of your choice, there will be sections such as "Developer(s)", "Publisher(s)", etc. What we're looking for is "GameIDs", which should be listed below "Compatiblity". (Seen here: https://imgur.com/a/XEQ8i9V) What you will notice is that there are multiple IDs here. To find out which ID is correct for your dumped copy, the most typical difference is in the last letter before the number(s) at the end. For PAL, look out for a P (Example: RMCP01); for NTSC-J, look out for a J (Example: RMCJ01); for NTSC-U, look out for an E (Example: RMCE01). Some games may not follow this rule and make it a bit harder to identify the correct region, so it may be worth a Google search to double check on this.
Keep track of this ID, as we'll be using this to create an INI file.
Creating the .INI file
With the game ID in hand, we can now create an .INI file, which will hold our codes. Depending on if you are jailbroken or not, the way to do this will vary. However, the name of the file will the the game ID you just found. In my example using Mario Party 7, the file will be named **GP7E01.ini**.
For Jailbroken users: With Filza, you can navigate to **/var/mobile/Documents/DolphiniOS/GameSettings** and create a file there.
For Non-Jailbroken users: You can make the file on your PC and move it over to Files.app > On my iPhone / iPad > DolphinIOS > GameSettings .
Inside of this INI file will contain our codes. The next section will show the layout.
Gettings Codes + The General Format & Enabling Cheats
If you don't already have your codes, then my best advice for you is to give them a Google search. Something like the game ID followed by "ar codes". Note: Codes on https://wiki.dolphin-emu.org/ are GECKO CODES, NOT ACTION REPLAY CODES. THEY WILL NOT WORK. Also make sure that the codes you are getting are in fact Action Replay and not Gecko, as sometimes the two look alike.
There are two formats that I've seen work. Let's call them type one and type two for convenience sake.
Type One: XXXXXXXX YYYYYYYY
Type Two: XXXX-YYYY-ZZZZ
I've have some issues getting type two to work, so, if you can, try your best to find codes of type one.
Inside of our newly created INI file, we will have to make the format ourselves. To begin, let's start off with the proper listing of our code(s).
We'll begin with the header of [ActionReplay]; below that we will name the code, and then paste the code under. Here's a basic visual of what that should look like.
$Your Code Name
the code goes here
Cool, now the code is in. However, we're not finished just yet. We need to enable the code. This is done by creating a new header below the last code put in called [ActionReplay_Enabled]. Below that, we will put the CODE NAME ONLY in **exactly as it was written.** Only code names listed here will be active. Here is basic visualsation:
$Your Code Name
the code goes here
$Your Code Here
If this isn't making any sense, here are examples for both "type one" and "type two" AR codes: https://imgur.com/a/7PMgMV4
Once you got that, save the INI. If you haven't moved it from your PC to GameSettings, you should do that too.
What you should have now is:
- an INI file named your game ID located in GameSettings
- the correct format for AR codes
- code titles set under enabled so they are active in game
The last thing to do is to enable cheats in DolphinIOS itself, otherwise, despite doing everything correctly, the codes will not activate. You can do this by going to Settings > Config > General > Enable Cheats and turning that to ON.
Hopefully, should you have done everything correctly, the AR codes should work when you load your game and do whatever the code has you to do. If it does, then congrats! You've now added AR codes to DolphinIOS!
If you are still struggling: Here is a video of me doing it using my example of Mario Party 7 from scratch: https://www.youtube.com/watch?v=zfLXYWBQ6nk
If you are STILL struggling and don't know where you went wrong, you can ask on the Discord linked at the top of this pastebin.
Thank you for giving this guide a read! If you have any suggestions on what other guides to make, let me know, and I'll definitely consider it. Have fun!
|
OPCFW_CODE
|
Live Video API
Interactive Broadcast API
Video Chat Embeds
Hire a Partner
Vonage Video API Support
Articles about Tokbox Features
How to Configure SIP Monitoring Callbacks
How to Send DTMF in a SIP Call using Playground
How to use SIP Interconnect in Playground
Unable to Initiate SIP Call from Playground
Can I increase broadcasting maxDuration?
What is AES-256 encryption?
How to Determine if AES-256 Bit Encryption is Used in a Session
How do I enable or disable AES-256?
Does AES-256 work on all Vonage Video API clients?
Can AES-256 be used for Relayed and Routed sessions?
How to Configure a FIPS Compliant Amazon S3 Storage Container for Archiving
How to Configure Archive Monitoring Callbacks
How do I download an archive from the Vonage Video API Cloud?
What happens if a custom archiving layout is applied to an invalid stream?
How far back can you query in inspector for sessions and meetings?
Setting and Using a Proxy with Vonage Video API SDKs
See all 10 articles
What is China Relay?
Does China relay guarantee connectivity for Chinese consumers?
How do I know if China Relay is enabled?
Does China relay support China-only sessions?
Does China relay work with Vonage Video API Relayed sessions?
What Client version is required for the China Relay?
HIPAA Compliance and BAA
What is HIPAA?
Why is HIPAA relevant when building a healthcare application?
Can I build a HIPAA compliant application using OpenTok?
How does Vonage Video API ensure secure transmission of PHI?
Does Vonage Video API store PHI?
What security features does Vonage Video offer to a developer to assist in building a HIPAA compliant application?
See all 7 articles
Streaming Video has Letterbox Effect
What devices does Vonage Video iOS SDK support?
Where can I see the sample code for Vonage Video iOS SDK?
Handling Reconnection and Network Migration on Mobile Devices
What are the supported Video Dimensions, Aspect Ratios, and Rendering on Mobile SDKs?
Managing Audio Sessions in iOS with Vonage Video API
Where can I get the Vonage Video API Android SDK?
What devices does the Vonage Video Android SDK support?
Where can I see the sample code for Vonage Video Android SDK?
How many participants can join an Android session?
What Windows versions are supported with the Vonage Video SDK?
What is the Vonage Video Windows SDK?
Event at subscriber's end when publisher mute or unmute their mic on Windows SDK
ClickOnce Fails with Vonage Video Windows SDK
What issues are commonly seen when using the Windows SDK?
Does Vonage Video API support USB cameras?
See all 12 articles
What is Session Monitoring?
How to Configure Session Monitoring Callbacks
How can I test that the Callback URL is working?
What happens to the callback events if my application server goes down?
Do you guarantee the callback event delivery?
Is Session monitoring real-time?
See all 7 articles
Using the TokBox for Slack Integration
Embeds Not Supported on Wix on Chrome 64 and Later
Regional Media Zones
What happens to Vonage Video API Regional Media Zones (RMZ) after Brexit?
How do I configure Regional Media Zones for my project?
|
OPCFW_CODE
|
uTorrent is worlds most popular BitTorrent client. It is now available to Ubuntu as uTorrent Server. It comes with a Web UI that is almost identical to it’s Desktop UI. When installed on a Ubuntu VPS, you can use your browser to access it securely. This post will show you how to install and use uTorrent Server on a Ubuntu 16.04 VPS.
Unlike Transmission, it is very easy to install and use uTorrent on a Ubuntu 16.04 LTS VPS. If you have used uTorrent on your PC, you’ll have no trouble using it on your VPS. You can use 1GB VPS from my recommended VPS providers to setup you own SeedBox with uTorrent.
Install uTorrent Server on Ubuntu 16.04
I’m going to assume that you have a VPS with root access ready to be configured. Connect to your server as root user using Putty and run following commands to update apt cache.
apt update apt upgrade
Once the update is complete, run following command to install dependency libraries on your server.
apt-get install libssl1.0.0 libssl-dev
Now download stable version of uTorrent from official website. Running following commands will do it. Please note that you need to use the vesion for your system architecture. So select the appropriate tab below. Also following command will install latest versions at the moment of writing. Check the official downloads page and change the link in the command if necessary.
32 bit OS
wget http://download-new.utorrent.com/endpoint/utserver/os/linux-i386-ubuntu-13-04/track/beta/ -O utorrent.tar.gz
64 bit OS
wget http://download-new.utorrent.com/endpoint/utserver/os/linux-x64-ubuntu-13-04/track/beta/ -O utorrent.tar.gz
Extract downloaded tar.gz archive to /opt directory on your server.
tar -zxvf utorrent.tar.gz -C /opt/
Give write permissions to uTorrent directory.
chmod 777 /opt/utorrent-server-alpha-v3_3/
Create a link from uTorrent to /user/bin directory.
ln -s /opt/utorrent-server-alpha-v3_3/utserver /usr/bin/utserver
That’ll complete the installation process. Now you can start uTorrent by ruiing following command.
utserver -settingspath /opt/utorrent-server-alpha-v3_3/ &
Congratulations! uTorrent server is up and running on your VPS. Now let’s access it via Web UI.
Access uTorrent Server with Web UI
As I said before, uTorrent server gives a nice Web UI that is almost identical to PC UI. It’ll be running on port 8080 by default. You can use your browser to access it. So open your favorite browser and point to following URL with your IP address.
A pop up will appear asking for a username and password. The default username is admin and you can leave password field blank. Your shiny new uTorrent will load after few seconds.
Change Default Username and Password
Now before doing anything else, you should change your username and setup a good strong password. You can easily do this from the Web UI. Click green icon and select Web UI from left sidebar. There you’ll get an option to change your username and set a password.
As additional security features, you can also change the default port number and specify an IP address or addresses that are only allowed to access the Web UI.
Download a Torrent to your VPS
There are two ways to add a torrent to your server. You can either upload a torrent file from your PC or you can add a torrent with a Magnet link. It is easy as doing it on your PC. Watch the video at top of the page for more information about downloading torrents to your VPS.
|
OPCFW_CODE
|
i have a magazine i would like to scan onto my computer. i have a scanner (Lexmark 1150), but i dont like the quality of the scans. its just a cheap all in one, and nothing i scan looks very good. i have digital cameras (Fuji E510 and Kodak DX7630) and im sure i could take pictures of the pages and they would look better then a scan. my questions are..... if i did scan the pages i want, what could i do in photoshop or paintshop pro to make them look better? i have photoshop CS and CS2, and i have paintshop pro versions 7,9, and X. i have tried a few different things to clean up some photos and pages i have scanned before, but they still dont look very good. i have tried putting black paper behind the pages so the light didnt go through the pages and show the opposing side, and i have cleaned the glass on the scanner more times then i can count. i get slightly better results scanning at low DPI settings, but the scans come out too small and i would prefer they were at least the size i get from my cameras, or bigger. the computer i have will only let me scan up to 600 DPI, but that produces a good enough size. here are a couple examples of scans i did with my scanner. most are straight out of the scanner, and one is after i touched it up the best i could. the only thing i did to the unedited ones is resize them small enought to be acceptable for photobucket. these 3 are unedited. this one i edited to look better. pic #1 is my old dog, #2 is my daughter when she was 13 months old (shes 4 now), #3 is a llama from a farm in GA, and the 4th is one of the buildings at the psychiatric center here. question 2....... if i decided to just take pics of the pages, what would be a good way to do that? i dont have a lot of lighting fixtures, and they are all incandescent bulbs. im sure i could rig something up with them to help diffuse the light a bit if i needed to. i have tried taking pics of magazines before, and always seem to get a lot of glare on the pages from either the flash, or from too much light. i dont mind trying different settings to get a good end result, so the help i need is basically just for positioning the pages and lighting, but if anyone could suggest good settings for the camera it would make my job a little easier. i should also add that im not interested in saving the magazine so i will probably be cutting the pages from the binding, so positioning them isnt much of a problem. as i said before, i have few fixtures for lighting, but i do have a flexible floor lamp and a range of bulbs from 40 watt to 100 watt. i also have a few table top type lamps i can use as well. if anyone can help me, i would appreciate it. also, all the images i posted here are ones i took, and are ok to edit if someone wants to give it a try. all i ask is that if someone does, please tell me what you did to edit them to make them look better. thanks.
|
OPCFW_CODE
|
Following xs:include when parsing XSD as XML with lxml in Python
So, my problem is I'm trying to do something a little un-orthodox. I have a complicated set of XSD files. However I don't want to use these XSD files to verify an XML file; I want to parse these XSDs as XML and interrogate them just as I would a normal XML file. This is possible because XSDs are valid XML. I am using lxml with Python3.
The problem I'm having is with the statement:
<xs:include schemaLocation="sdm-extension.xsd"/>
If I instruct lxml to create an XSD for verifying like this:
schema = etree.XMLSchema(schema_root)
this dependency will be resolved (the file exists in the same directory as the one I've just loaded). HOWEVER, I am treating these as XML so, correctly, lxml just treats this as a normal element with an attribute and does not follow it.
Is there an easy or correct way to extend lxml so that I may have the same or similar behaviour as, say
<xi:include href="metadata.xml" parse="xml" xpointer="title"/>
I could, of course, create a separate xml file manually that includes all the dependencies in the XSD schema. That is perhaps a solution?
So it seems like one option is to use the xi:xinclude method and create a separate xml file that includes all the XSDs I want to parse. Something along the lines of:
<fullxsd>
<xi:include href="./xsd-cdisc-sdm-1.0.0/sdm1-0-0.xsd" parse="xml"/>
<xi:include href="./xsd-cdisc-sdm-1.0.0/sdm-ns-structure.xsd" parse="xml"/>
</fullxsd>
Then use some lxml along the lines of
def combine(xsd_file):
with open(xsd_file, 'rb') as f_xsd:
parser = etree.XMLParser(recover=True, encoding='utf-8',remove_comments=True, remove_blank_text=True)
xsd_source = f_xsd.read()
root = etree.fromstring(xsd_source, parser)
incl = etree.XInclude()
incl(root)
print(etree.tostring(root, pretty_print=True))
Its not ideal but it seems the proper way. I've looked at custom URI parsers in the lxml but that would mean actually altering the XSDs which seems messier.
Actually there is a problem with this approach when it comes to namespaces. one XSD can refer to something in another XSD File via the namespace. So simply using xinclude on its own wont do.
Try this:
def validate_xml(schema_file, xml_file):
xsd_doc = etree.parse(schema_file)
xsd = etree.XMLSchema(xsd_doc)
xml = etree.parse(xml_file)
return xsd.validate(xml)
|
STACK_EXCHANGE
|
1. It is insecure and using it makes you vulnerable to a malicious attacker. .Net uses an ordinary quicksort with the pivot selected by the median-of-three method. It is easy to provoke quicksort's worst-case (quadratic) behavior and increase running times by multiple orders-of-magnitude. An attacker will be happy to exploit this as an effective denial-of-service attack.
2. It is inflexible. It does not allow you to provide a delegate for the swap function so sorting data structures where data synchronization is required to maintain consistency as items are moved is impossible. Also, you can only sort items on the .Net heap so sorting unmanaged memory is impossible.
3. It is slower than it should be even in the absence of an attacker.
Zimbry.Introsort addresses each of these problems.
1. It is secure. It is based on David Musser's Introsort algorithm. Introsort is essentially a quicksort that, should it fail, falls back to a secure heapsort.
2. It is flexible. Both the compare and swap operations are provided by the user. You can use it to sort anything.
3. It is faster. This wasn't an explicit objective but it's nice that we don't have to trade away performance to get a secure and flexible sort.
Click the links to see the benchmarks:
Let's look at the worst-case of dealing with an adversary.
It takes .Net over 26 minutes to sort one million integers when they are provided by an adversary. Zimbry.Introsort does it in half a second.
Those are the worst-case results. We can disable the adversary and benchmark it again:
Zimbry.Introsort is twice as fast in the average case and rarely less than 13% faster in any case.
(Each test was run only once so the timings for small arrays contain noticeable sampling noise. A more robust benchmark would filter multiple samples.)
I am releasing the source under the MIT license: Click here for the source
Some notes on the source:
You'll find many alternative sort algorithms in the Zimbry.Sort.OtherSorts project. I experimented with these along the way. You can enable them in the benchmark if you have a great deal of patience.
The class in QuicksortAdversary.cs was derived from Doug McIlroy's paper, A Killer Adversary for Quicksort. Be careful. It will beat up quicksort and steal its lunch money.
Zimbry.Introsort contains four sort algorithms layered together:
1. Quicksort with pivot selected by median-of-nine: For large partitions.
2. Quicksort with pivot selected by median-of-five: For small partitions.
3. Heapsort as a fall-back when quicksort recurses too deep: Heapsort is slower than quicksort in the best case but it has no quadratic behavior to exploit so it provides effective protection against an adversary.
4. Insertion sort: For tiny partitions where quicksort is inefficient.
Using these four algorithms lets us enjoy the performance advantage of quicksort for the typical case with protection against a malicious attacker in the worst case.
Both quicksorts use Bentley & McIlroy's "fat-pivot" partitioning method from their paper, Engineering a Sort Function, for better performance. This is a big part of why it performs better than .Net's quicksort in many tests.
While this is an improvement it is far from the last word in sorting. Some ideas to consider:
Better performance may be found with Vladimir Yaroslavskiy's dual-pivot quicksort.
It really needs special versions for handling known data types (avoiding the requirement for using compare and swap delegates in all cases). This would give a significant speed improvement.
There's more room for performance tuning. I tried to leave the code in a fairly readable state and some sacrifices could be made to buy a little more performance.
It would be nice to add support for stable sorting.
|
OPCFW_CODE
|
The 5-Second Trick For python homework help
. At compile time, we could’t make any assurance about the type of a industry. Any thread can entry any subject at any time and involving The instant a field is assigned a variable of some type in a method and enough time is is utilised the road immediately after, One more thread may have improved the contents of the field.
I have a regression problem and I need to convert a bunch of categorical variables into dummy facts, which is able to crank out over 200 new columns. Must I do the element choice before this step or after this step?
So What exactly are you expecting? Master Python in a means that could advance your career and enhance your expertise, all in a fun and simple way!
It's a great deal of interest, heading from composing DSLs to tests, which happens to be discussed in other sections of the handbook.
Evaluation of unplanned issues: There is a risk that the advertising division faces number of unexpected challenges which could have established hindrances; there analysis will help the group in being familiar with them and how to face Those people issues in the next approach.
The example previously mentioned reveals a class that Groovy will be able to compile. Even so, if you are attempting to produce an occasion of MyService and simply call the doSomething approach, then it'll fail at runtime, mainly because printLine doesn’t exist.
In this portion with the Python system, learn how to employ Python and Management move to incorporate logic in your Python scripts!
A technique in Java programming sets the conduct of a category object. Such as, an object can mail a place concept to another object and the suitable formula is invoked if the receiving object is often a rectangle, circle, triangle, etcetera.
My tips is to test every little thing you could think about and find out what gives the most effective success on your validation dataset.
An instance could well be a static technique to sum the values of every one of the variables of each occasion of a class. As an example, if there have been an item course it may need a static method to compute the standard price of all solutions.
For the one who requested how to construct this, it won't should be developed. It's run to be a script. You can find python imp source modules which are designed and mounted making use of setup.py as offers. This example is not really a offer nonetheless.
Groovy also supports the Java colon variation with colons: for (char c : textual content) , in which the kind of the variable is obligatory. when loop
Any statement is usually connected with a label. Labels will not impact the semantics of the code and can be utilized to generate the code much easier to study like in the next instance:
Now you have got produced a parser which reads a goal value to the example variable by functioning bin/project -x or bin/project --example
|
OPCFW_CODE
|
I bought an Athlon 1.333 from an online site and at first it was only clocked in at 1Ghz, promptly I went into the BIOS and changed the FSB to 133mhz, successfully making it 1.33 like it was supposed to be. All went well for normal IRC and other Windows applications, but when I ran games it would give me a BSOD concerning DRIVER_IRQL_NOT_LESS_OR_EQUAL, and below it would be "Beginning physical memory dump.." or something along those lines.
So basically, did I receive a faulty Athlon 1.33? I have a certified heatsink that's designated to cool the Athlon 1.33 and my case cooling is adequate, please help! Thank you.
Set it to 1000MHzish for awhile (via multiplier). If the problem goes away, it's the chip. If it doesn't try setting it to 1000Mhz via FSB. If it goes away it's probably either the mobo, or your RAM. If your running everything well under spec. and you still have problems, it's probably a software problem.
I have had that same error when I tried suspend-to-ram on my MSI K7 Turbo R. The error actually occured when I woke the system up. I tried to upgrade my bios, which took care of the error, but resuming still doesnt work.. not much of an improvement.
Been looking in the ms knowledge base. Apparently the driver_irql_not_less_or_equal error occurs when using outdated drivers. You can run a utility called 'verify' to help identify the driver.. but there might be easier steps you can try first:
1) clock back to 100 Mhz fsb.. does the problem still occur ?
2) get the latest drivers for your audio, video and 4in1 drivers. What video and soundcard are you using ? Aureal Vortex by any chance ?
3) upgrade to latest bios
4) remove all non essential hardware, and re-insert them one by one.
Can you tell us in detail what hardware you are using ?
---- Owner of the only Dell computer with a AMD chip
I have an ASUS A7M266. At first it was running at 2.8 volts in VIO1 and I made it the default 2.7 volts, I also changed the 3.75 in VIO to 3.45 (default).
Although my problem still exists, it's not my overclocking, but that it was intended to run at 1.33ghz, but it's not. Is this a faulty chip? Or am I doing something wrong. I have an AMD recommended heatsink and my cooling is fine... my only guess is that it's a faulty chip.
I ran 3dMark2001 looped 6 times with my system barebones, and it ran perfectly, I added my soundcard, ran fine, then added my NIC, it ran absolutely perfect. The next day I wake up, try again, it BSOD's.
I'm running at 1Ghz and it's perfectly stable, but I don't like how I paid for 1.33 and I'm only clocking at 1Ghz.
I have a Hercules Prophet III, 3com Etherlink NIC, and Soundblaster live! value.
SAME DAMN problem for me! only i am getting it even worse
i am using an asus a7a266 with my 1.333 athalon 266fsb w/ axia core and 512 megs of PC2100 DDR ram on 2 chips...
my system wants to be at 1000 by default as well.. the only other selectable speed is 1333 ghz but when i save and exit it immediatley DIES (video signal goes away and power button has to be held for 4 seconds).. then when i finally get it back up again it kicks me into bios at 1000 mhz and yellz at me... when i use the MANUAL settings to get to 1333 ghz it dies either in the middle of bios (ram check) or at the end of bios... not sure which one... if i clock it to like 1250 or something then the PCI bus speed is fuXored and it doesn't get past the PCI initialization stuff... GRRRR stupid asus.... i have the latest bios (i think.... their website is so shitty i get 50 different answers for 1 question from them)
my system runs totaly leet at 1000 mhz, no crashing anymore and i can run anything but NFS orche without hangups.... so i dont THINK it is bad ram... but it could be with my luck... Help? oh yeah, i am using winXP but that doesnt' matter at this point because i cant get past bios at 1.333
the llamaXor grants you a wish<P ID="edit"><FONT SIZE=-1><EM>Edited by MechaDeath on 06/11/01 04:25 PM.</EM></FONT></P>
uhm.... i cant change the multiplier, it is locked on the mobo... but i did try "jumper free mode" off, setting the CPU and SDRAM frequencies to 133/133mhz... it got past bios to the windows load whereupon windows instantly blue screened giving me an "Error your bios version does not support ACPI, go get better bios or dissable ACPI from the setup screen" whatever that means... so then i went into bios to see if there was something lame going on... and the keyboard didn't respond and the computer time was totaly jacked up and was jumping arround
does ANYONE have 1005B bios for the a7a266? i need i need
I dont think your chip is defective. As I understand, your only having this issue when running games, right ?
If so, suspect your SB Live card. Try this: start/run/dxdiag.exe go to sound, and disable sound acceleration.
Also, what PSU are you using ? Are you having any issues under Win98 ? Did you upgrade all the drivers,4-1, bios, etc ?
---- Owner of the only Dell computer with a AMD chip
unless your cpu is unlocked, you wont be able to change the multiplier, the fsb should be 133, multiplier 10, could be the PSU, could be the ram, could be many things, have you actuall checked the cpu to ensure it IS indeed a 13 athlon not a 1 gig?
uhm, i have not checked the chip to ensure it is 1.333 i didn't think that anyone would do something THAT gay... ill check when i get home... also, someone somewhere suggested that it may be a lack of power for the DDR ram that is causing it to die... is there any safe way to increase the voltage to the ram w/o totaly destrying the computer? someone said something about checking the SB live card and that i was having problems playing games... that is incorrect... i cant even boot into windows..... when i get home i will also try using SDRAM instead of DDR... see if that maybe helps... could i possibly have the RAM settings incorrect??? my computer runs FINE at 1000 mhz... could it still be the ram/bios settings?
|
OPCFW_CODE
|
Re: J2ME or network programming or...what do you recommend?
I know I've already taken lots of your time, but I would greatly
appreciate it if you could help me once more, since I got a bit
I think a mention of 'networking' led to the suggestion
of J2EE. It was more 'J2EE' I was thinking of in my
Does word web programming mean programming network apps in general, or
something more specific?
No. To me J2EE (or JEE as Sun might call it at this FAD),
is 'server side'. Obviously a server has no point unless it
offers network connections to other places, but networking
itself is something built into the J2SE. AFAIU, two machines
might connect using netwotking, without any 'server' being
Only now I realized how out of the loop I really am. All I know is
that I'd like to do some network programming ( not so much graphic
user interfaces, since I don't think I'm very good at making things
visually appealing to end users.
OK - J2ME apps. all have a GUI, so stear clear of that.
..I'm much more interested doing
interesting things "behind the curtain" and then some chap of mine or
coworker would make pretty user interface, so that people could use my
Yep. I'm the kind of chap that will throw together a GUI
with a few buttons for that type of API, likes as in 'The Giffer'.
Kevin Weiner wrote the API (to decode/encode GIFs),
and I threw together a few buttons and things to make
it easier for the user to encode GIFs.
...but it seems some technologies are in while some are
already out of date. So my fear is that I will be spending lots of
time learning technology that will turn out to be obsolete. For this
reason answering the following question would be of great help to me:
a) So what are the technologies I should learn regardless of the kind
of network apps I want to create ( should I learn servlets
No. (Not only is that J2EE, but ultimately, servlets
make for an HTML based, thin client GUI - so there is
still 'GUI coding' involved).
Avoid them like the plague.
A plain deskop application, or one launched using Java Web
Start *the same way 'The Giffer' is launched) is far easier
to deploy and maintain.
Only relevant if you want the (D)HTML GUI.
XML is handy for data storage & transmission,
but not essantial.
[quoted text clipped - 6 lines]
I think you should steer clear of
J2ME - it is not widely used.
But I thought that since everyone owns a mobile phone, that J2ME would
be the most popular thing happening at the moment, but it seems that
is not the case. Why is that? Since mobile devices are so popular,
one would assume that there would be huge market in making apps for
I can only comment on the amount of J2ME related
questions we seem to get. Very few.
OTOH - it pays to have a good grounding in J2SE
'desktop' before proceeding to J2EE 'server'.
I assumed J2EE simply extends J2SE with additional APIs.
a) But from your reply it seems as J2SE is not generally used for
server side apps?
Well, yes it is. A lot of the actual code written for
servlets relates to classes that come straight out
of the J2SE - things like Files and IO, networking,
image or sound processing..
b) So after I read books Java Complete reference, Network programming
with Java , then I must also read books specifically on J2SE and after
a while on J2EE?
The 'popularity' of desktop apps. to server side apps.
(about 1/20) suggests Swing rich client programming
[quoted text clipped - 3 lines]
people have swung gradually from rich client GUI
development to thin client server-side development
As I understood your answer, if one is interested in network apps,
then they should instead of Swing choose thin-client development?
No. But then, I think it's immportant if you want to devlop
the type of APIs that other people want to use, to concentrate
on running them from the command line, or a 'headless'
That way, when the API is written , people can use it for
a servlet (off their J2EE based server) that churns the result
out to HTML, or for a web start based 'rich client' Swing app.,
or as a plain (not web start app.) or .. by some server that is
running headless, taking streaming data from satellites, and
using the API to detect sudden changes in ocean temperature
which it raises as an SMS alert.
BTW - Since I'm not familiar with Swing - I assume it consists of APIs
to create GUI on client side?
Yes. Swing is used for the (rich client*) GUIs of
'client side' apps (though those apps. can reach
out 'anywhere' including to a J2EE based back end).
* This includes applets, desktop applications (and
webstart launched applets or applications).
Message posted via http://www.javakb.com
|
OPCFW_CODE
|
Data Science and Machine Learning Internship ...
- 1k Enrolled Learners
- Live Class
In a machine learning application, there might a few relevant variables present in the data set that may go unobserved while learning. In this article, we will learn about the Expectation-Maximization or EM algorithm in learning to understand the estimation of latent variables using the observed data. The following topics are discussed in this article:
In statistic modeling, a common problem arises as to how can we try to estimate the joint probability distribution for a data set.
Probability Density estimation is basically the construction of an estimate based on observed data. It involves selecting a probability distribution function and the parameters of that function that best explains the joint probability of the observed data.
import matplotlib.pyplot as plt from numpy.random import normal sample = normal(size=2000) plt.hist(sample, bins=50) plt.show()
The choice of number of bins plays an important role here in terms of the number of bars in the distribution and in terms of how well the density is plotted. If we change the bins to 5 in the above example, the distributions will be divided into 5 bins as shown in the image below.
Density estimation requires selecting a probability distribution function and the parameters of that distribution that best explain the joint probability distribution of the sample. The problem with the density estimation can be the following:
How do you choose the probability distribution function?
How do you choose the parameters for the probability distribution function?
The most common technique to solve this problem is the Maximum Likelihood Estimation or simply “maximum likelihood”.
Maximum Likelihood Estimation
In statistics, maximum likelihood estimation is the method of estimating the parameters of a probability distribution by maximizing the likelihood function in order to make the observed data most probable for the statistical model.
But there lies a limitation with Maximum Likelihood, it assumes that the data is complete, fully observed, etc. It does not really mandate that the model will have access to all the data. Instead, it assumes that all the variables relevant to the model are already present. But in some cases, some relevant variables may remain hidden and cause inconsistencies.
And these unobserved or hidden variables are known as Latent Variables.
In the presence of latent variables, a conventional maximum likelihood estimator will not work as expected. One such approach to finding the appropriate model parameters in the presence of latent variables is the Expectation-Maximization algorithm or simply EM algorithm. Let us take a look at the EM algorithm in Machine Learning.
EM algorithm was proposed in 1997 by Arthur Dempster, Nan Laird, and Donald Rubin. It is basically used to find the local maximum likelihood parameters of a statistical model in case the latent variables are present or the data is missing or incomplete.
The EM Algorithm follows the following steps in order to find the relevant model parameters in the presence of latent variables.
Consider a set of starting parameters in incomplete data.
Expectation Step – This step is used to estimate the values of the missing values in the data. It involves the observed data to basically guess the values in the missing data.
Maximization Step – This step generates complete data after the Expectation step updates the missing values in the data.
Execute the step 2 and 3 until the convergence is met.
Convergence – The concept of convergence in probability is based on intuition. Let’s say we have two random variables if the probability of their difference is very small, it is said to be converged. In this case, convergence means if the values match each other.
Now that we know what is EM algorithm in Machine Learning, let us take a look at how it actually works.
The basic idea behind the EM algorithm is to use the observed data to estimate the missing data and then updating those values of the parameters. keeping the flowchart in mind, let us understand how the EM algorithm works.
Check out these AI ML Courses by E & ICT Academy NIT Warangal to learn and build a career in Artificial Intelligence.
The GMM or Gaussian Mixture Model is a mixture model that uses a combination of probability distributions and also requires the estimation of mean and standard deviation parameters.
Even though there are a lot of techniques to estimate the parameters for a Gaussian Mixture Model, the most common technique is the Maximum Likelihood estimation.
Let us consider a case, where the data points are generated by two different processes and each process has a Gaussian probability distribution. But it is unclear, which distribution a given data point belongs to since the data is combined and distributions are similar. And the processes used for generating the data points represent the latent variables and influence the data. The EM algorithm seems like the best approach to estimate the parameters of the distributions.
In the EM algorithm, the E-STEP would estimate the expected value for each latent variable and the M-STEP would optimize the parameters of the distribution using the Maximum Likelihood.
Let’s say we have a data set where points are generated from one of the two Gaussian processes. The points are one dimensional, the mean is 20 and 40 respectively with a standard deviation 5.
We will draw 4000 points from the first process and 8000 points from the second process and mix them together.
from numpy import hstack from numpy.random import normal import matplotlib.pyplot as plt sample1 = normal(loc=20, scale=5 , size=4000) sample2 = normal(loc=40, scale=5 , size=8000) sample = hstack((sample1,sample2)) plt.hist(sample, bins=50, density=True) plt.show()
The plot clearly shows the expected distribution with the peak for the first process is 20 and the second process is 40. And for many points in the middle of the distribution, it is unclear as to which distribution they are picked up from.
We can model the problem of estimating the density of this data set using the Gaussian Mixture Model.
# example of fitting a gaussian mixture model with expectation maximization from numpy import hstack from numpy.random import normal from sklearn.mixture import GaussianMixture # generate a sample sample1 = normal(loc=20, scale=5, size=4000) sample2 = normal(loc=40, scale=5, size=8000) sample = hstack((sample1, sample2)) # reshape into a table with one column sample = sample.reshape((len(sample), 1)) # fit model model = GaussianMixture(n_components=2, init_params='random') model.fit(sample) # predict latent values yhat = model.predict(sample) # check latent value for first few points print(yhat[:80]) # check latent value for last few points print(yhat[-80:])
The above example fits the Gaussian mixture model on the data set using the EM algorithm. In this case, we can see that for the first few and the last few examples in the data set, the model mostly predicts the accurate value for the latent variable.
|It is guaranteed that the likelihood will increase with each iteration||EM algorithm has a very slow convergence|
|During implementation, the E-Step and M-step are very easy for many problems||It makes the convergence to the local optima only|
|The solution for M-Step often exists in closed form||EM requires both forward and backward probabilities|
This brings us to the end of this article where we have learned the Expectation-Maximization(EM) algorithm in Machine Learning. I hope you are clear with all that has been shared with you in this tutorial.
If you found this article on “EM Algorithm In Machine Learning” relevant, check out Edureka’s Machine Learning online course, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe.
We are here to help you with every step on your journey and come up with a curriculum that is designed for students and professionals who want to be Machine Learning engineers. The course is designed to give you a head start into Python programming and train you for both core and advanced Python concepts along with various Machine Learning Algorithms like SVM, Decision Tree, etc.
Now that you know about machine learning which is a subset of Deep Learning, check out the Deep Learning Training by Edureka, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe. The Edureka Deep Learning with TensorFlow Certification Training course helps learners become experts in training and optimizing basic and convolutional neural networks using real-time projects and assignments along with concepts such as SoftMax function, Auto-encoder Neural Networks, Restricted Boltzmann Machine (RBM).
If you come across any questions, feel free to ask all your questions in the comments section of “EM Algorithm In Machine Learning” and our team will be glad to answer.
|Python Machine Learning Certification Training|
Class Starts on 10th December,2022
10th DecemberSAT&SUN (Weekend Batch)
|
OPCFW_CODE
|
For over a decade, Tableau has been one of the big players in the realm of analytics and reporting platforms. The Tableau Desktop application helps BI developers yield well-groomed visualizations. Tableau Server is an enterprise-level platform to share reports and collaborate on reporting projects. The server is aimed at the dynamic interaction of its users, e.g. filtering data to observe a specific dashboard presentation. At the same time, many sectors still rely on the documentation of results as static reports (PDFs or CSV files). For example, this applies to areas such as bookkeeping and yearly inspections. In what follows, we consider a common use case for reporting businesses: how to optimize the production of 1000s of static copies of dynamic PDF reports produced by the Tableau Server.
In cooperation with our client, we optimized the production line, presented the data respecting their corporate design & banking guidelines, and secured the quality of the reports via automated tests. The target of the project was to produce 1000s of reports (e.g. credit portfolio, credit rating, model comparison) for one of the biggest bank groups in Germany: Volksbanken und Raiffeisenbanken, as well as various private banks.
How About Scaling?
The scaling of existing reporting solutions relies on the resources of the Tableau Server. This is often quite flexible as Tableau Servers run not only on bare metal Windows machines but also in clusters as virtual machines. Additionally, a containerized Linux implementation is also possible to set up but trickier to accomplish compared to the Windows version. This suggests that an in-scale production of static reports would be easy via an upgrade of the machine. However, there are several lessons to learn here before you can conclude that your machine needs some boosting.
First Steps to Optimization
There are several other ways to optimize the report performance (i.e. production time). If you are dealing with millions of rows and about 50-100 columns, you should separate the data sources per dashboard. This is how you improve the dynamic performance. It is a good idea to choose hierarchical global filters to set up the context once for all dashboards.
As a second improvement, the data sources must be extracted as a Tableau Hyper data system (this is a native NoSQL technology that the Tableau server employs to keep track of calculated fields). Live data sources may be necessary for some use cases. However, if your data is bulky, the live connection update will take ages to be served. Beyond this, one can keep track of and remove the unused columns, and keep the data flow as clean as possible. After all these, how about static reports?
The Issue with Tableau and PDFs
The Tableau software was not built for the mass production of static reports such as PDFs. So be ready for pitfalls if you need to process millions of data rows presented on 10s of dashboards that are based on 100s of workbooks in order to produce 1000s of PDF reports. One can think that this would be a trivial task for a conventional Windows Server with 64 GB Ram and Intel 8 Core x 2GhZ CPU architecture. This hypothetical Windows Server machine is powerful enough as long as the reports are served dynamically to a limited number of users (order of 10s).
However, when dealing with PDFs, the internal optimizations of Tableau work against you. Let’s first understand what we are dealing with under the hood.
The Status Quo
There are mainly two ways to produce PDFs, PNGs, and CSVs from dynamic reports via filters e.g. reporting year (Y), company (C), and business case (B). Under the conditions given in the introduction, the production time will scale from 10s of seconds to a few minutes per report. This is not ideal, especially knowing that you may have to reproduce the whole batch a few times in case of errors.
One way to produce a PDF is Tabcmd, which is a command-line tool for Tableau Server. It is installed on your local machine and it sends web requests to the Tableau Server to return a PDF report. The second way is the REST API that runs on the Tableau Server. It downloads the resulting PDF file to your local machine. Both can be automated by common tools such as Python or Node.js applications to produce multiple reports running through the mentioned filters: Y, C, and B.
REST API vs Tabcmd
There is one important difference between the two methods. That is the full import option vs. single dashboard import. This means that working with REST API, you need to download the pages one by one and merge them to produce the full report. In the case of Tabcmd, on the other hand, the reports will come as a whole. This includes additional pages to discard from your report later. Both methods can come in handy in the context of different cases. For example, many additional test dashboards may require page picking.
There is also one shared glitch of both methods structural to the server itself. That is to say, when dealing with PDFs, the internal optimizations of Tableau Server work against you. Tableau uses caching for the dynamic optimization of internal queries (which is not possible to turn off, unfortunately). In time, this will make the imports slower by a rate of about 0.5% per report. No problem for 10s of reports, obviously, but the slow-down sums up to 500% for 1000 PDFs.
This poses practical problems. The production time may be too slow to continue at some point. It will interfere with the other users’ activities. And the backend application might drop the database client connection depending on the idle time options. This is problematic for the enterprise-level automation of the reporting.
At this point, we assume that workflow automation is managed by a backend that connects to a DB and a Tableau Server. The pool connection to both servers provides the necessary data, meta-data, and finally logs the production meta-data to a DB. This being said, we assume that the bottleneck is still the Tableau server PDF production. And under the given conditions above, it still will be. Otherwise, you need to fix the bugs of your application first, say DB queries or workflow management.
Our observation is that the REST API performs much better than Tabcmd in the case studies mentioned above. This means in about the order of 2-3 times. This is not an order-of-magnitude improvement, but the difference between 6 hours vs. 18 hours matters when we consider 8-hour workdays. Remember that Tableau Server was not designed for PDF mass production. Reducing the production time under 10 secs per report is a very intricate task. This might require a real boost to your existing computational resources (presumably 2x to 4x).
If your purpose is to get your PDF reports mass-produced, we can guarantee to improve the production mechanism for you.
Disclaimer: We are missing a true per computational resource efficiency argument. REST APIs are really greedy. This means that while Tabcmd would use only a percentage of the CPU, REST API would almost always claim the full power allocated to computations. This means all CPU resources minus the background processes.
Get in Touch for Projects
At Record Evolution, we have been consulting on data science and IT projects for many years. We help credit reporting companies enhance business insights using state-of-the-art visualization tools such as PowerBI, Tableau, and Qlick, all of which can be customized and extended using native Extension APIs.
Get in touch to get all the details on implementing Tableau reporting tools to get the most out of your data visualizations.
About Record Evolution
We are a data science and IoT team based in Frankfurt, Germany, that helps companies of all sizes innovate at scale. That’s why we’ve developed an easy-to-use industrial IoT platform that enables fast development cycles and allows everyone to benefit from the possibilities of IoT and AI.
|
OPCFW_CODE
|
const { isReadOperation, isWriteOperation } = require("../utils");
// these are static lists, these will not update unless Dynamo is updated
// we can't really add any more functions at the condition level, we may
// want to create functions up above however in the parent operation, e.g.
// the combination of multiple conditions for some common use cases.
const PROPERTY_OPERATOR_LIST = [
"equals",
"greaterThan",
"greaterThanEqual",
"lessThan",
"lessThanEqual",
"between",
"oneOf"
];
const FUNCTION_LIST = ["exists", "beginsWith", "contains", "isType"];
const OPERATION_LIST = PROPERTY_OPERATOR_LIST.concat(FUNCTION_LIST);
const CONJUCTION_LIST = ["and", "or"];
const KICKOFF_LIST = ["property", "size", "not"].concat(FUNCTION_LIST);
const NUMERIC_LIST = [
"equals",
"greaterThan",
"greaterThanEqual",
"lessThan",
"lessThanEqual",
"between"
];
function getExpressionName(operation) {
if (isReadOperation(operation)) {
return "FilterExpression";
} else if (isWriteOperation(operation)) {
return "ConditionExpression";
}
throw "Unknown type of operation found";
}
exports.NextFunction = {
ALL_OPERATION: "all_operation",
PROPERTY_OPERATION: "property_operation",
FUNCTION: "function",
CONJUCTION: "conjuction",
KICKOFF: "kickoff",
NUMERIC: "numeric"
};
exports.setNextFunc = (condition, nextFunc) => {
switch (nextFunc) {
case exports.NextFunction.PROPERTY_OPERATION:
condition.nextFuncs = PROPERTY_OPERATOR_LIST;
break;
case exports.NextFunction.ALL_OPERATION:
condition.nextFuncs = OPERATION_LIST;
break;
case exports.NextFunction.FUNCTION:
condition.nextFuncs = FUNCTION_LIST;
break;
case exports.NextFunction.CONJUCTION:
condition.nextFuncs = CONJUCTION_LIST;
break;
case exports.NextFunction.KICKOFF:
condition.nextFuncs = KICKOFF_LIST;
break;
case exports.NextFunction.NUMERIC:
condition.nextFuncs = NUMERIC_LIST;
break;
default:
throw "Unknown set of next functions.";
}
};
exports.checkNextFunc = (condition, funcName) => {
if (condition.nextFuncs.indexOf(funcName) === -1) {
throw "Cannot apply operation in this order.";
}
};
exports.basicCondition = (condition, funcName, operator, value) => {
exports.checkNextFunc(condition, funcName);
const field = exports.fname(condition);
const values = {
[`:${field}`]: value
};
const cndString = `${condition.nameField} ${operator} :${field}`;
exports.setNextFunc(condition, exports.NextFunction.CONJUCTION);
exports.updateParams(condition, values, cndString);
};
exports.fname = condition => {
return `cnd${condition.index++}`;
};
exports.updateParams = (condition, values, cndString) => {
const params = condition.operation.getParams();
const expressionName = getExpressionName(condition.operation);
if (params[expressionName] && !condition.hasUpdated) {
condition.hasUpdated = true;
cndString = params[expressionName] + " AND " + cndString;
} else if (params[expressionName]) {
cndString = params[expressionName] + cndString;
}
condition.hasUpdated = true;
const updateParams = {
ExpressionAttributeValues: values,
ExpressionAttributeNames: condition.cndNames,
[expressionName]: cndString
};
condition.operation.appendToParams(updateParams);
};
exports.addProperty = (condition, propName) => {
if (propName.indexOf(".") === -1) {
condition.nameField = `#${exports.fname(condition)}`;
condition.cndNames = {
[condition.nameField]: propName
};
} else {
const nestedProps = propName.split(".");
condition.nameField = "";
condition.cndNames = {};
for (const nest of nestedProps) {
const fname = `#${exports.fname(condition)}`;
if (condition.nameField !== "") {
condition.nameField += ".";
}
condition.nameField += fname;
condition.cndNames[fname] = nest;
}
}
return condition;
};
exports.addFunction = (condition, funcName, operation, path, value = null) => {
exports.checkNextFunc(condition, funcName);
// initially add the prop
condition = exports.addProperty(condition, path);
let conditionString = `${operation} (${condition.nameField}`;
let values;
if (value) {
const valueString = `:${exports.fname(condition)}`;
values = {
[valueString]: value
};
conditionString += `, ${valueString})`;
} else {
conditionString += ")";
}
exports.updateParams(condition, values, conditionString);
exports.setNextFunc(
condition,
funcName === "size" ? exports.NextFunction.NUMERIC : exports.NextFunction.CONJUCTION
);
return condition;
};
|
STACK_EDU
|
In my recent SQL Server Pro Webcast (sponsored by Red Gate), I blurred through a number of details regarding how to Avoid 5 Common SQL Server Backup Mistakes. The event is/was free, and will be online for on-demand viewing for roughly 3 months after the date given. (Though it typically takes a day or so for it to become available for on-demand viewing after the live presentation date.)
Accordingly, I wanted to provide some additional, follow-up, resources to provide some additional context on many of the things that I addressed.
SQL Server Backup APIs
Early on in my presentation, I mentioned that it’s essential that third party backup solutions (or, more specifically: ‘plugins’ for SQL Server that are provided as ‘adapters’ for backup solutions that are typical NON SQL Server focused – but which want to provide some additional features and options) directly leverage the APIs provided by Microsoft for handling SQL Server backups. And again, this ISN’T an issue with all of the big/major 3rd party SQL Server backup solutions out there. Instead, it’s a concern with OLDER non-SQL Server backup solutions that also offer ‘plugins’ that ‘fake’ SQL Server backups without using the VDI and VSS APIs that are outlined in a bit more detail here. And the reason I keep stressing the ‘plugin’ angle here, is that the use of 3rd party SQL Server backups is typically a BEST PRACTICE in most environments.
RPOs and RTOs
It’s impossible to stress the importance of Recovery Point Objectives and Recovery Time Objectives. If you don’t have them, you haven’t communicated to management what they should expect in the case of a disaster. Consequently, even if you pull off the most spectacular and herculean recovery known to man following a disaster, you’ll still end up having to deal with cranky end-users AND management if you haven’t communicated RPOs and RTOs.
In fact, RPOs and RTOs are such a big deal that I decided to make them the focus of my very first blog post on my Practical SQL Server blog – here with SQL Server Pro. So take a few seconds to review SQL Server Recovery Time Objectives and Recovery Point Objectives if you’re not 100% comfortable with your own RPOs and RTOs and their communication to management.
Redundancy is something that’s pretty hard to cover in any detail within just a few minutes. Therefore, the best take-away I can offer when it comes to ‘redundancy’ is to use whatever techniques and options are at your disposal to give yourself as MANY options and ‘fall-backs’ as possible when it comes to having available backups to use in the case of a disaster.
To that end, I’ve talked previously about the notion of using ‘tiered storage’ to manage backups as both a means of achieving better performance AND redundancy in an article for SQL Server Pro called Maximize Storage Performance, and I’ve also blogged a bit about redundancy in this blog as well.
Quite simply, addressing security (in the form of certificates, keys, and encryption – when juxtaposed against backups in terms of storage space and compression and so on), ends up being one of the more difficult things that SQL Server DBAs need to deal with in larger organizations – as clearly called out by this editorial by Steve Jones.
Case in point? I mentioned Transparent Data Encryption in my presentation, and I also mentioned compression. But the two simply don’t play well together. So, long story short, make sure that if security is IMPORTANT, that you’re regularly testing recovery processes and operations. Which, in turn, was the BIGGEST and MOST important thing that I tried to cover in my presentation.
Finally, I used a couple of great photos in my slides – and wanted to provide attribution to those photos (or, at least, source information for where I found those photos):
Common Backup Mistakes Slide
Using the Wrong Tools Slide
|
OPCFW_CODE
|
To change the Clipboard options
When the Clipboard task pane is displayed an icon appears in the System Tray on the Taskbar.
- Right-click the icon to display the menu
- To clear the Clipboard task pane, click Clear All
- To stop collecting items and close the Clipboard task pane, click Stop Collecting
- To display the Clipboard Options, click Options
- To show the Office Clipboard automatically whenever the Copy command is invoked twice in succession, select Show Office Clipboard Automatically
- To collect multiple items to the Clipboard without displaying the Clipboard task pane, select Collect Without Showing Office Clipboard
- To display the Office Clipboard icon on the Taskbar, select Show Office Clipboard Icon on Taskbar
- To display the collection status when each item is copied, select Show Status Near Taskbar When Copying
Use Drag-and-Drop to Move and Copy Text
Drag-and-drop literally allows you to drag selected text or graphics with the mouse and drop it elsewhere in the document. Data can be dragged to a different document.
To use drag-and-drop
- Select the text to be moved then point to the selection
The pointer changes to an arrow .
- Hold down the left mouse button and drag the text to the new location then release the mouse button
The text is moved. As with any Cut and Paste operation, the Paste Options smart tag is displayed to let you re-format the text.
Tips: To copy text, hold down the Ctrl key while you drag. Dragging selections over the bottom of the page will cause the document to scroll. If you drag over a Taskbar icon, the application's window will be made active. Press Esc (without releasing the mouse button) if you want to cancel a drag-and-drop.
Use Paste Special
Paste Special allows you to choose a particular data format in which to paste a cut or copied selection. You can paste any type of data using the Cut, Copy and Paste tools, not just text.
To use paste special
- Cut or copy the text (or other data) from a document or other file
- Click in the Word document where you want to paste
- From the Edit menu, select Paste Special...
The Paste Special dialogue box is displayed.
Paste Special dialogue box
- From the As: box, select an appropriate format and click OK
Open More Than One Document
While using Word you can have several documents open at one time; you do not have to close one before opening another. The advantage of this is that you can quickly look up information in one document whilst working on another or you can transfer data between documents.
To open a second document
- Open the first document
- On the Standard toolbar, click Open
- From the File menu select Open (SpeedKey: Ctrl + O)
- Select the second document to open
Tip: You can also create a new document while a document is open.
To move between document windows
Each open document appears as a separate icon on the Windows Taskbar.
- On the Windows Taskbar, click the Document icon (SpeedKey: Alt + Tab)
Multiple document windows displayed on the Taskbar
Tip: If your documents do not appear on the Taskbar, from the Tools menu, select Options. Click the View tab and select Windows in Taskbar.
- From the Window menu, select the document to switch to (SpeedKey: Ctrl + F6)
The name of the active window is displayed on the Title bar.
To view two documents on screen
- From the Window menu, select Arrange All
Both documents are displayed on the screen. Each document has its own window.
Tip: When both documents are visible select the document to work in by clicking on it with the mouse. The first click will activate the window but will not move the insertion point.
To copy and move text between documents
Text can be copied and moved between documents without them both being open at the same time by using the Copy and Paste buttons on the Standard toolbar. You can use the Office Clipboard to copy and paste multiple items. Also, if both documents are open the data can be dragged from one to the other.
- Display the text or graphics you want to move or copy in one window
- Display the destination for the text or graphics in the other window
- Select the text or graphics
- Drag the text or graphics from the first window to the second window
To close a second document
If you have more than one document open at the same time, the Close button on the Title bar will close just the document, rather than exiting Word.
- With more than one document open, click the Close button on the Title bar OR the Menu bar - your other documents will remain open
Multiple documents open
- With one document open, click the Close button on the Menu bar to close just the document (clicking on the Title bar will also exit Word)
One document open
Document Window Panes
It is possible to display more than one part of a long document at once by splitting the document into panes. Each pane is a separate window with its own Ruler bar and scroll bars.
To split a document into two panes
- Point to the split bar above the vertical scroll bar (the grey rectangle)
- Drag the split bar to the required position
- Double-click on the split bar above the vertical scroll bar
A grey line is displayed where the window is split.
Split document window
- Click in a pane once to select it, or press F6 to toggle between panes
To clear a window split
- Drag the split bar to the top or bottom of the window
- Double-click on the split bar
|
OPCFW_CODE
|
This post discusses the ethtool offload parameters related to VXLAN on the ConnectX-4 adapter.
The offloads discussed here are enabled by default and recommended to be used with kernel version that includes VXLAN support. The earlier kernels include some of the offloads that benefit VXLAN processing; however, to take advantage of all offload options available by ConnectX-4, it is recommended to use the upstream kernel. In order to understand the offloads, make sure that you understand the VXLAN packet format, refer to VXLAN Considerations for ConnectX-3 Pro for more info.
Note: There is no ability to fully disable VXLAN offload on ConnectX-4.
The following ethtool options are configurable using the -K flag. The availability of these options is dependent on kernel and ethtool versions.
1. Disable/Enable Checksum offload on Rx.
When enabled it reports “csum complete” on all the packet (inner/outer headers and payload). This means that VXLAN performance on RX csum calculation can be boosted if used on kernels that support checksum complete on RX.
# ethtool -K <interface> rx <off|on>
2. Enable/Disable TSO (TCP Segmentation Offload) for tunneling protocols. This means that all packets will be getting to the NIC already MTU sized, instead of a large packet being segmented by hardware. This might be the most important of TCO offloads for VXLAN packets: it offloads the segmentation of large TX packets (packet size >> MTU) to NIC hardware. For example, in case the inner packet will be 9000 bytes, while the port MTU is 1500 bytes. the segmentation of the packet will be in the HW.
# ethtool -K <interface> tx-udp_tnl-segmentation <off|on>
3. Enable/Disable inner packet checksums offload for for tunneling protocols on the Tx side.
Note: This is relevant for upsteam kernel 4.6 and above (see http://lists.openwall.net/netdev/2016/03/19/2 ). (As such, and for example, this feature is not supported on RHEL 7.1.)
ethtool -K eth2 tx-udp_tnl-csum-segmentation <off|on>
To show the current configuration run:
# ethtool -k <interface>
# ethtool -k enp2s0f0
Features for enp2s0f0:
tx-checksum-ip-generic: off [fixed]
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
tx-scatter-gather-fraglist: off [fixed]
tx-tcp-ecn-segmentation: off [fixed]
udp-fragmentation-offload: off [fixed]
ntuple-filters: off [fixed]
highdma: on [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-gre-csum-segmentation: off [fixed]
tx-ipxip4-segmentation: off [fixed]
tx-ipxip6-segmentation: off [fixed]
fcoe-mtu: off [fixed]
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off [fixed]
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
l2-fwd-offload: off [fixed]
busy-poll: off [fixed]
hw-tc-offload: off [fixed]
|
OPCFW_CODE
|
folder note keeps deleting my folder note
terminal reports
no afitem found for path 000 - Landing Page. Inbox. KO. Meta/000 - Landing Page. Inbox. KO. Meta.md, escaping...
_setMark @ VM256:171
I've been trying to fix this for days and keep thinking it's fixed to come back and find it deleted. sometimes it will delete on one device and then other devices delete it. It's only this one and only since I renamed it
This happens with folder-note-core enabled but
alx-folder-note disabled.
I also get about 4000 warnings in the log that look like:
getFolderPath(tbasename: "QuickAdd"deleted: falseextension: "md"name: "QuickAdd.md"parent: t {deleted: false, vault: t, path: "/", name: "", children: Array(10)}path: "QuickAdd.md"saving: falsestat: {ctime:<PHONE_NUMBER>244, mtime:<PHONE_NUMBER>523, size: 1997}unsafeCachedData: "---\nMetadata: \nfileClass: Meta\nMetaType: PKB\nCreated: 2021-10-23 00:47:10\nLast Modified: 2021-10-28 01:43:05\n---\n# QuickAdd\n\n-<PHONE_NUMBER>: \n-<PHONE_NUMBER>: \n- 2021-10-30 10;12:02: test\n- 2021-10-31 03:39:54: test\n- 2021-11-06 17:21:[second - \n\n- 2021-11-06 17:24:34 - \n- 2021-11-06 17:26:14: \n- 2021-11-06 17:27:44: ---\nhttps://news.ycombinator.com/item?id=29135559\n\nbarrenko 1 hour ago | root | parent | prev | next [–]\n\nSo next question, is a credit card chip also a computer?\nreply\n\n\t\nPeterisP 1 hour ago | root | parent | next [–]\n\nYes, a very limited one but you can put a bunch of interesting things there, the EMV standard documents how other apps should be put there so that the customer can have a card that works both as a credit card in CC terminals and also have additional features that are accessible either in specialized terminals or custom CC terminals with explicit support enabled e.g. some loyalty card or discount scheme.\nHere's a list (IMHO not exhaustive) of some apps put on EMV cards https://www.eftlab.com/knowledge-base/211-emv-aid-rid-pix/ - there are various identity card solutions both from governments and companies like Microsoft, there's https://en.wikipedia.org/wiki/OpenPGP_card , there's various solutions for store loyalty cards.\n\nHowever, I don't consider this aspect as any risk to the user, since when using the standard payment card functionality your card payments already have no privacy or security whatsoever from the issuer of your card, from a technical perspective the issuer will (by design) see and manage (authorize, revoke, etc) all the transactions and the card + terminal is just one of the channels for sending cardholder-initiated transactions to them. It would be technically appropriate to treat it not as \"your card\" but \"issuer's card\" that the cardholder uses as a token to use when the merchant communicates with the issuer about the bill.\n---\nby making technology masculine did it also make the classification systems masculine? \n"vault: t {_: {…}, fileMap: {…}, config: {…}, writingConfig: false, configDir: ".obsidian", …}__proto__: e): note name invaild
i've tracked down the error—it's a zootelkeeper / folder note conflict
|
GITHUB_ARCHIVE
|
import numpy as np
import chess
from constants import BOARD_SIZE, DEFAULT_PIECE_ORDER, N_FEATURE, N_BOARD_FEATURE
def get_binary_plane(boolean):
return np.ones([BOARD_SIZE, BOARD_SIZE], dtype=np.bool) * boolean
def get_piece_plane(idx):
plane = np.zeros(BOARD_SIZE * BOARD_SIZE, dtype=np.bool)
plane[idx] = True
plane = plane.reshape((BOARD_SIZE, BOARD_SIZE))
return plane
def init_state():
return np.zeros((BOARD_SIZE, BOARD_SIZE, N_FEATURE), dtype=np.bool), dict()
class ChessEnvironment(chess.Board):
def __init__(self, start=chess.STARTING_FEN):
chess.Board.__init__(self, start)
self.state, self.rep_counter = init_state()
self._update_state()
def act(self, move, string_move=True):
if string_move:
self.push_uci(move)
else:
self.push(move)
self._update_state()
def _update_state(self):
player = self.turn
enemy = not player
# Part 1: Piece features
board_state = []
for colour in (player, enemy):
for piece in DEFAULT_PIECE_ORDER:
piece_idx = list(self.pieces(piece, colour))
piece_plane = get_piece_plane(piece_idx)
board_state.append(piece_plane)
# Increment state counter
self.rep_counter[str(self)] = self.rep_counter.get(str(self), 0) + 1
rep = self.rep_counter[str(self)]
# # Part 2: State repretions
for i in range(2):
rep_plane = get_binary_plane(rep > i)
board_state.append(rep_plane)
# Part 3: Meta features
colour_plane = get_binary_plane(player)
p1_king_castling = get_binary_plane(self.has_kingside_castling_rights(player))
p1_queen_castling = get_binary_plane(self.has_queenside_castling_rights(player))
p2_king_castling = get_binary_plane(self.has_kingside_castling_rights(enemy))
p2_queen_castling = get_binary_plane(self.has_queenside_castling_rights(enemy))
meta_state = [colour_plane, p1_king_castling,
p1_queen_castling, p2_king_castling, p2_queen_castling]
# roll previouse state and update
self.state = np.roll(self.state, -N_BOARD_FEATURE, axis=2)
update_state = np.stack(board_state + meta_state, axis=-1)
# TODO: HARD CODED
self.state[:, :, -19:] = update_state
if __name__ == "__main__":
n = 800
import time
import random
random.seed(42)
print('Measure performance with {} state updates. . . '.format(n))
env = ChessEnvironment()
i = 0
tic = time.perf_counter()
while i < n:
if env.is_game_over(claim_draw=True):
env = ChessEnvironment()
m = random.choice(list(env.legal_moves))
env.act(m, False)
i += 1
toc = time.perf_counter() - tic
print(toc)
|
STACK_EDU
|
Can somehow explain me what are of the benefits of using SharePoint Timer jobs over Windows Task Scheduler?
I like Arsalan's answer, however MS is pushing people to avoid server side development, which includes Timer Jobs. As timer jobs run on the SharePoint server, a poorly written timer job can have a negative impact on the farm. Also, if a customer ever moves to Office 365, any custom timer jobs will have to be re-written.
An app run by the windows task scheduler that connects to SharePoint via REST or the CSOM seems like it is more in line with current guidance. The scheduled task can be run from any server, not just the SharePoint box, and it can be re-pointed to Office 365 with almost no effort,
Here are few more differences between Timer Job and Windows Task schedulers:
- Timer jobs require downtime to deploy.
- Control via Central Admin.
- Schedule of Timer Job will be backed up and restore in your normal process of SharePoint backup and restore.
- Can be deployed using standard WSP solution.
- Custom Timer Jobs provides the power to specify Job LockTypes (i.e. SPJobLockTypes) which guarantees that multiple instances of same job will never execute at the same point in time.
Windows Task Scheduler
Windows Scheduled task doesn't require downtime to install/update.
The task will only run on the server that you've installed it on.
Administrator needs to manually manage backup and restore of Schedule Tasks
No standard built in deployment method
No multiple instance guarantee. Administrator needs to make sure that no two instances are running at the same time.
Benefits of Sharepoint Timer jobs over Windows Task Scheduler are :
Single point of failure : Windows Task Scheduler need to be configured on all the web servers. If you configure to run the job on 1 server only, and this server crashes, job will not work at all.
Status Reporting : Windows Task Scheduler doesn't have any reporting on when was the last time job got executed and what was the status. Only option is logging. Whereas SharePoint have a UI to show status of all the jobs and their status.
Security : In case of Windows Task Scheduler, you will need go to IT Admins and request for a special username/password to run such jobs where as SharePoint Timer Jobs automatically run under SharePoint Timer Job account.
Deployment : There is no easy way to deploy Windows Task Scheduler tasks and application which need to executed in a FARM environment. This will require lot of manual steps by IT Admin. SharePoint jobs can be deployed using WSP's.
Our operations team says when you have a task that is really related to SharePoint, maybe list items iteration, logging etc etc. you should use Timer Job..
But if you have tasks not related to SharePoint at all or directly.. Than use Task Scheduler..
we had an External Content Type made from SQL Server Database and the user wanted to iterate through the SharePoint Lists and put some values in SQL Database.. The operations team denied using Timer Job in this case.. Said the affected part is on SQL Database and its not directly related to SharePoint.. We had to make a Task Scheduler in this case.
However once we had to Iterate through one of the Lists and send emails to people on the basis of expiry or so.. In this case Operations had no issue on creating Timer Job :)
|
OPCFW_CODE
|
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Categorical
from lagom.agents.base import BaseAgent
from lagom.core.preprocessors import Standardize
from lagom.core.preprocessors import ExponentialFactorCumSum
class A2CAgent(BaseAgent):
"""
Advantage Actor-Critic (A2C) with Generalized Advantage Estimate (GAE)
"""
def __init__(self, policy, optimizer, lr_scheduler, config):
self.policy = policy
self.optimizer = optimizer
self.lr_scheduler = lr_scheduler
super().__init__(config)
def choose_action(self, obs, mode):
assert mode == 'sampling' or mode == 'greedy'
out_policy = self.policy(obs)
# Unpack output from policy network
action_probs = out_policy['action_probs']
state_value = out_policy['state_value']
# Create a categorical distribution
# TODO: automatic distribution select according to action space
action_dist = Categorical(action_probs)
# Calculate entropy of the policy conditional on state
entropy = action_dist.entropy()
# Calculate perplexity of the policy, i.e. exp(entropy)
perplexity = action_dist.perplexity()
if mode == 'greedy': # greedily select an action, useful for evaluation
action = torch.argmax(action_probs, 1)
logprob_action = None # due to greedy selection, no log-probability available
elif mode == 'sampling': # sample an action according to distribution
action = action_dist.sample()
logprob_action = action_dist.log_prob(action) # calculate log-probability
#print(f'#######{action_probs}')
#print(f'!!!!!!!{action.item()}')
# Dictionary of output data
output = {}
output['action'] = action
output['logprob_action'] = logprob_action
output['state_value'] = state_value
output['entropy'] = entropy
output['perplexity'] = perplexity
return output
def learn(self, batch):
batch_policy_loss = []
batch_value_loss = []
batch_entropy_loss = []
batch_total_loss = []
for episode in batch: # Iterate over batch of episodes
# Get all returns
Qs = episode.all_returns
# Standardize returns to [-1, 1], prevent loss explosion of value head
# Very important, otherwise cannot learn good policy at all !
Qs = Standardize().process(Qs)
# Get all values
Vs = episode.all_info('state_value')
# Get all action log-probabilities
log_probs = episode.all_info('logprob_action')
# Get all entropies
entropies = episode.all_info('entropy')
# Generalized Advantage Estimation (GAE)
all_TD = episode.all_TD
alpha = episode.gamma*self.config['GAE_lambda']
GAE_advantages = ExponentialFactorCumSum(alpha=alpha).process(all_TD)
# Standardize advantages to [-1, 1], encourage/discourage half of actions
GAE_advantages = Standardize().process(GAE_advantages)
# Calculate losses
policy_loss = []
value_loss = []
entropy_loss = []
# iterate over time steps
for logprob, V, Q, GAE_advantage, entropy in zip(log_probs, Vs, Qs, GAE_advantages, entropies):
policy_loss.append(-logprob*GAE_advantage)
value_loss.append(F.mse_loss(V, torch.Tensor([Q]).unsqueeze(0)).unsqueeze(0))
entropy_loss.append(-entropy)
# Sum up losses for each time step
policy_loss = torch.cat(policy_loss).sum()
value_loss = torch.cat(value_loss).sum()
entropy_loss = torch.cat(entropy_loss).sum()
# Calculate total loss
total_loss = policy_loss + self.config['value_coef']*value_loss + self.config['entropy_coef']*entropy_loss
# Record all losses for current episode
batch_policy_loss.append(policy_loss)
batch_value_loss.append(value_loss)
batch_entropy_loss.append(entropy_loss)
batch_total_loss.append(total_loss)
# Average total loss over the batch
# TODO: keep track of new feature to cat zero dimensional Tensor
batch_total_loss = [total_loss.unsqueeze(0) for total_loss in batch_total_loss]
loss = torch.cat(batch_total_loss).mean()
# Zero-out gradient buffer
self.optimizer.zero_grad()
# Backward pass and compute gradients
loss.backward()
# Clip gradient norms if required
if 'max_grad_norm' in self.config:
nn.utils.clip_grad_norm(self.policy.parameters(), self.config['max_grad_norm'])
# Update learning rate scheduler
self.lr_scheduler.step()
# Update for one step
self.optimizer.step()
# Output dictionary for different losses
output = {}
output['loss'] = loss
output['batch_policy_loss'] = batch_policy_loss
output['batch_value_loss'] = batch_value_loss
output['batch_entropy_loss'] = batch_entropy_loss
output['batch_total_loss'] = batch_total_loss
return output
|
STACK_EDU
|
<?php
namespace Database\Seeders;
use App\Models\Course;
use DateTime;
use Illuminate\Database\Seeder;
class CourseSeeder extends Seeder
{
/**
* Run the database seeds.
*
* @return void
*/
public function run()
{
$start_date = '2021-06-07';
$end_date = '2010-08-07';
$min = strtotime($start_date);
$max = strtotime($end_date);
// Generate random number using above bounds
$val = rand($min, $max);
$weeks = rand(1, 52);
$week1 = rand(1, 52);
$week2 = rand(1, 52);
$week3 = rand(1, 52);
// Convert back to desired date format
$date1 = new DateTime(date('Y-m-d', $val));
$date2 = $date1->modify('+' . $weeks . ' weeks');
$date3 = $date1->modify('+' . $week1 . ' weeks');
$date4 = $date1->modify('+' . $week2 . ' weeks');
$date5 = $date1->modify('+' . $week3 . ' weeks');
$courses = [
0 => [
'title' => 'Google Data Analytics Professional Certificate',
'description' => 'Prepare for an entry-level job as a data analyst. In this program, you’ll learn how to collect, transform, and organize data in order to help draw new insights and make informed business decisions.
This is for you if you enjoy working with numbers, uncovering trends, and visualizations.',
'date' => $date1,
'time' => now(),
],
1 => [
'title' => 'Google Project Management Professional Certificate',
'description' => 'Prepare for an entry-level job as a project manager. In this program, you’ll learn how project managers successfully start, plan, and execute a project using both traditional and agile project management approaches.
This is for you if you enjoy solving problems, working with people, and organization.',
'date' => $date2,
'time' => now(),
],
2 => [
'title' => 'Google UX Design Professional Certificate',
'description' => 'Prepare for an entry-level job as a UX designer. In this program, you’ll learn the foundations of UX design, how to conduct user research, and design prototypes in tools like Figma and Adobe XD.
This is for you if you enjoy thinking creatively, design, and research.',
'date' => $date3,
'time' => now(),
],
3 => [
'title' => 'Google IT Support Professional Certificate',
'description' => 'Prepare for an entry-level job as an IT support specialist. In this program, you’ll learn the fundamentals of operating systems and networking, and how to troubleshoot problems using code to ensure computers run correctly.
This is for you if you enjoy solving problems, learning new tools, and helping others.',
'date' => $date4,
'time' => now(),
],
3 => [
'title' => 'Google IT Automation Professional Certificate',
'description' => 'This is an advanced program for learners who have completed the Google IT Support Professional Certificate.
This is for you if you want to build on your IT skills with Python and automation.',
'date' => $date5,
'time' => now(),
]
];
foreach ($courses as $course) {
Course::create([
'title' => $course['title'],
'description' => $course['description'],
'date' => $course['date'],
'time' => $course['time'],
]);
}
}
}
|
STACK_EDU
|
Brad presented a session that tries to help you identify and fix tempdb problems. Performance monitor is your friend. the local disks, avg read and write counters should help you determine if I/O is an issue. Watch for systems where you have >20ms for the averages, this is a gross number, and not an exact measure, but something to be aware of.
Wait states can help you to identify contention for tempdb on allocation structures. Using sys.dm_OS_waiting_tasks, you can look for pagelatches on the pfs, gam, and sgam pages. If you query and find that you have lots of waits for these allocations, you might want to add more tempdb files to help alleviate contention.
There is no reason a DBA should allow a database to run out of space. A quote from Brad and for the most part i agree. You ought to have alerting setup, monitor space, and grow files as needed. There are cases where a runaway event might cause issues, but for the most part, you should be managing space actively.
Optimizing: a variety of suggestions, with the idea that you ought to assume tempdb will be an issue over time. So pre-plan for performance.
– minimize usage. Dont return more rows than you need. Don’t sort data you don’t need to, don’t use order by or distinct if not needed, keep transactions short, index well, avoid temp tables.
– Avoid cursors, especially static or keyset driven cursors.
– avoid LOB columns if you can, consider vertical parittioning.
– avoid table variables
– avoid triggers where you can
– avoid aggregating large sets of data
– avoid snapshot isolation or read committed isolation levels
– if you use sort in tempdb for index rebuilds, schedule it during off hours
You can use these features if you need them, but understand they impact tempdb. Be smart and minimize the usage where you can.
Isolating tempd on separate physical disks can help speed it up, adding RAM might help if sql server can avoid spilling to tempdb, but that depends if you have memory pressure. If you do not have any memory pressure, then you may not get any benefit from adding ram.
SSDs are used for tempdb, but the problem is that tempdb has lots of read.write and can “wear out” an SSd drive.
“The DBA life is about compromise”. I would agree with that. We can’t always do what we want and dont have the money so we need to make tradeoffs.
Preallocate tempdb space, monitor, and then resize as needed. Use IFI for data growth if needed. Note this does not apply to log growth.
Multiple files can help prevent tempdb contention, recommendation is 1/4 to 1/2 the number of cores, up to 8. Note that you want to make all your files the same size.
If you enable TDE, which is a good feature, be aware that tempdb is encrypted. Also be aware that if you remove TDE from all databases, tempdb remains encrypted. Be careful of “testing” TDE on servers.
|
OPCFW_CODE
|
I need a hash function to use in a program I'm writing. I know exactly how big the table needs to be (88,799). I need to store strings in the table, with strings as the keys for those elements being stored.
Basically I'm storing a list of names and passwords associated with those names. I need to be able to enter the name in the hash table and have it return the password. Security isn't an issue because this is a school project (we have to employ some weak encryption to the passwords before they are put in the hash table anyway). I am planning on implementing an externally chained hash table, but I'm having a hard time coming up with a good hash function. I've heard its good that I know exactly how big the table will be, so I can come up with a better function.
I'm planning on using an array for the table and storing a linked list in each location to deal with collisions.
Can anyone possibly point me in the right direction?
Greg Roberts<br />CIS Student<br />University of West Florida
I believe we have to build our own hash table. The exact wording is "You will build an externally chained hash table of userids and passwords." He's also asking to detail how we came up with the load factor. The way this prof works, we'll have to write a custom hash table. I did just email him about it though. I'll post again as soon as I get a response.
We have to write our own hash function for this problem. One that will (hopefully) evenly distribute all 88,799 names across a hash table. Another problem is how big to make the hash table. The table doesn't need to be 88,799 long, because there will be linked lists to deal with collisions. I just need to figure out how big to make the table and what hash function to use to maximize efficiency.
Joined: Feb 05, 2005
Does anybody here know how to implement a hash function specific like this one?
Isn't that what this forum is for? I'd rather get advice from you guys than reading what some yahoo put on the internet. I'd rather have discussion about different functions than copy and paste from a website.
I've been to both sites and looked at what they have to offer, but I'm not supposed to use an algorithm off the internet. I'm supposed to come up with an efficient algorithm specific to this project, not generic hash algorithms somebody else came up with. I'm looking for advice on ways to come up with project-specific algorithms.
author and iconoclast
Originally posted by Greg Roberts: Isn't that what this forum is for?
Arguably, not so much. Maybe the "General Computing" forum down the list. In "real" Java programming, it's very uncommon to use anything but String.hashCode() to hash a String. Now, implementing hashCode() for other classes, that does come up, but even there, most of the time the solution is to compose the results of calling hashCode() on members.
Joined: Feb 05, 2005
I totally understand what you're saying. But I'm not in what you would call a "real" Java programming situation. I'm still a student at a university. Their intention is for us to learn a little bit about hashing. In the learning process, they want us to design our own hash function to implement our own hash table. Which is why I am seeking advice on creating a custom hash function.
I posted all the requirements that I had for the hash function. I tried to get it across that I had to write a custom hash function for the project. All I got was suggestions on built-in Java functions.
Really, let's just drop it. I got a hash function working that I didn't lift from some web site. Trial and error. And got no help from anybody here.
I don't want anybody to go to any trouble and help out a student anyway.
|
OPCFW_CODE
|
After Matthew said he’d be interested in a new “robust client” for ZF, I decided to sum things up. I have to sign a CLA, write a proposal and do some coding stuff if I want to contribute my code to ZF. But before going into the Zend Framework community, it’s always a good idea to keep your head cool and think about some things first.
I need to think about the following topics:
- Functional requirements
- Structure of the service
- What won’t be supported initially
The functional requirements
I think it’ll be a good idea to make a comparison to the existing Zend_Service_Flickr first. To list all the things about the existing implementation, it has support for:
- Tags (limited to a search for photos)
- Pools (fetching the group pool’s photos)
- The usual stuff for users and photos (read only)
It’s quite clear the current Zend_Service_Flickr has tags and pools supports where mine implementation currently lacks these things. At the other hand, Sozfo_Service_Flickr has:
- Support for users and photos (read only)
- Support for sets (sets in collection, sets from user, photo in sets)
- Support for collections (collections from user, collection from sets)
- Support to authenticate your application (you start with a key, you end with a token)
- An object relational type of handling the data
If you combine those two lists, I think Sozfo_Service_Flickr needs a couple more things until it might get approval. First, support for tags is pretty clear. Because tags are not really related to all other data types exept photos, its easy to implement.
The other thing is the pool support. If you want to use pools, groups and photos I think the minimum you need is searching groups, listing all photos from the group, fetching the context of a photo from a pool and list all members from a group.
So the have some use cases, I think it’d look like this:
//Photos is an array of Sozfo_Service_Flickr_Photo objects $photos = $flickr->factory('tag')->search('delft'); //Groups is an array of Sozfo_Service_Flickr_Group objects $groups = $flickr->factory('group')->search('your query'); $group = current($groups); $users = $group->getUsers(); $photos = $group->getPhotos():
The OOP structure
Since I’m not a very experiented programmer, this isn’t the most simple task. I have chosen a structure which might not be good at all, but please drop a comment if you have a suggestion.
The Sozfo_Service_Flickr class is at top level. It has a factory method to create objects (like photos, users, sets and so on) which I call “childs” (is there a better word for it?). Furthermore, it only does some storage for api keys, api secrets and authentication tokens.
The Sozfo_Service_Flickr_Abstract is the basic class for all childs. It
does the request to the Flickr server, checks for errors and strips the result
to be more usable. Furthermore it has the usual __construct(), __get(),
__set() and setOptions() methods to provide a minimum of work to create a
The abstract class can sign the calls automatically, when you have set the secret and token to the Sozfo_Service_Flickr instance.
The Sozfo_Service_Flickr_Photo for example has the getTitle() method and
so on. More important: all childs have a _loadInfo() method which loads all
information of the class. The method is called automatically if you try to
fetch a property which isn’t loaded yet.
E.g. you only need to do $photo->getTitle(). If the title isn’t known at all, the photo object will call the _loadInfo() method after which the title is known and returned to you.
Not supported features
Because you can drown yourself if you want to do too much, it’s a good idea to provide some ' IT WILL NOT ' items to your requirement listing. Sozfo_Service_Flickr will at first place not :
- Have write access, it’s read only
- No support for activities, blogs, favorites, interestingness, machinetags, pandas, geolocation, comments, notes and places
Will this be good enough? I hope so. Please leave a message if you have some suggestions!
|
OPCFW_CODE
|
The concurrent_queue class does not provide assignment operator or copy constructor, but I am stuck in a condition where i want to assign a concurrent_queue variable to another variable of concurrent_queue. Is there any way or trick around?
Thanks in advance
Let me ask why do you need to copy a queue?
The concurrent queue is not likea integral type; it's a complex object, and in fact the tbb::concurrent_queue class only represents the high-level interface while implementation is done in internal classes. I do not see how a copy operation on a queue would be useful; copying it without user data is of no value because you might just create a new queue object, while copying it with all user data is questionable from many standpoints. The same is true for any container, actually. Also for TBB containers and concurrent_queue in particular, thread safety is another concern: what if another thread inserts more elements into the queue while it is being copied?
If you provide any idea what so special you try to do that requires copying a concurrent_queue object, we might give you some idea how to do it another way.
Agreed that my design has some flaws, that I/we need to re-think and re-design it.
But the current design forces me to have copy-constructor for concurrent_queue. I will take care of multithreading and related issues.
It may sound bad to you, the TBB developers, but the fact is that a container should provide flexibility in its usage. The container (of any type) can be used in millions of ways, so it is irrelevant where and why I (or anyone) is putting a container in software design. Just like 'resize' is provided for CV, and well mentioned that it's not thread-safe, CQ should have similar constructs (CTOR is one of them) - that may be thread-unsafe as well.
In a spreadsheet making a 3D bar-chart on the basis of lastname may sound absurd, but developer can't escape saying this type of chart doesn't make sense. The chart may be bizzare, ugly and making no-sense, but the spreadsheet program is still "flexible" enough to facilitate the same. I guess you got my point.
Your points are well taken. I agree that a generic container should be flexible enough to help developers to get their job done easily and quickly. Unfortunately, flexibility does not come for free and we have to consider return-on-investment for each feature that we want to support. As Alexey mentioned before, we don't see any compelling use cases for copy constructor(s) of the concurrent queue container, and few customers have asked for them. If we see a sufficiently large traction for them, weMIGHTCONSIDER supportingthem in the future.
In the meantime, if you really need to have a copy constructor of concurrent queue, you may create your own cq, let it inherit from TBB concurrent queue and define the copy constructor there. Either you can use 'iterator' to copy over all the elements to the new queue or you can pop each element from the old queue and push it back into the old queue as well as the new queue.Please note that none of the options guarantee thread-safety as Alexeywarned before.
nitinsayare:Agreed that my design has some flaws, that I/we need to re-think and re-design it.
I did not mean anything like this. If my post sounded this way to you, I apologize.
nitinsayare:But the current design forces me to have copy-constructor for concurrent_queue. I will take care of multithreading and related issues.
... it is irrelevant where and why I (or anyone) is putting a container in software design.
Again, I meant nothing except willing to help you use the container as it is now, and possibly collect the requirements for its improvement; thusthe questions. It sounds like you got me wrong. Nevertheless, TBB's concurrent_queue does not allow copy construction or assignment at the moment. I am sorry but for now you have to use some workaround. I am sure you will find the best way for that.
That said, I think that providing a constructor that takes a couple of input iterators [begin, end) and adds every element in between to the queue would probably be generic and non-contradictory enough to be reasonable to implement. The question is, could we make it more efficient than explicitly pushing each element?
Thanks for the suggestions. I deeply regret for any rudeness caused.We'll think about deriving class from CQ.
The thing is that 2 instances of tbb::concurrent_queue are inside a structure (along with other data-members). This structure is "type" for another vector. The vector is extensively used throughout the application. The 2 instances of CQ inside play important role for the sockets (more details are unneeded, I guess).
Now the each element of vector is moving around, requiring assignment operator and C-CTOR for struct (and thus the CQ instances).
We also thought of using pointers (of CQ, or of the struct) instead of instance variables, as you might suggest. But that adds complexity, re-design, code-reviews and re-testing. This all can be done, but we may still need copy-constructable CQ in future.
Your answer suggests me that you actually need a constructor that supports move semantics, as in the end you are not interested in the original element of the vector, only in the new one. Does that sound right? A moving constructor for a container makes perfect sense to me; it allows "shallow" copying which is more efficient and also does not duplicate the contained data. Unfortunately at the moment, move semantics is not supported by the C++ language, so we all have to use copy construction instead, which affects performance, increases memory usage (at least temporarily),and creates potential data duplication issues. The upcoming C++0x standard addresses this problem; and I think future TBB containers will provide efficient moving constructors.
If you use CQ inside another structure, you possibly don't need to derive a class from CQ, if you could use the copy constructor of that structure to pop all elements from the original queue and push to the newly createdone. Assumed moving semantics of your whole operation suggests popping from the old queue instead of iterating over it; the latter would create a "duplicate" while the former "moves" data by deleting those from the original queue.
|
OPCFW_CODE
|
- A brief history of APL Conferences
- APL98 Conference site
- APL2003 Conference call for papers
- APL2007 Conference home page
Past Other Things
Some Older Papers
- An Adaptive Query System by E. Kellerman - (1969) Describes an adaptive query program, coded in APL, to allow users to ask questions in everyday English and to receive answers with minimal delay.
- APL as a Notation for Statistical Analysis by K. W. Smillie - (1969) Discusses using APL as a notation for statistical analysis and presents an example deriving the chi-square statistic for independence in a two-way contingency table.
- Is APL Epidemic? Or a Study of its Growth Through an Extended Bibliography by J. C. Rault and G. Demars - (1971) An attempt to demonstrate that the use of APL is growing in an epidemic fashion.
- APL\360 History by Adin Falkoff - (1969) A talk on the history of APL from 1957 to 1969.
- A Generalized Digraph Simulator by Brooke Allen - (1976) An introduction to directed graphs with an implementation in APL.
- A Collection of Graph Analysis APL Functions by E. Girard, D. Bastin and J. C. Rault - (1969) Presents a set of functions dealing with graph theory.
- Graphics in APL by Alfred M. Bork - (1972) Describes an experimental graphic facility within APL on timeshared graphic terminals.
- The Hunting of the Snark by Philip R. Chastney - (1972) A consideration of the formal semantics of APL particularized to the consideration of the unusual properties of the null expression with reference to these properties having been fully documented by Lewis Carroll.
- Welcoming Address to APL69 by G. Bruce Dearing - (1969) Some thoughts on adapting education to the 21st century.
- Modeling the Arithmetic of Statistical Distributions by Leo H. Groner - (1986) Examples of using APL2's defined operators and general arrays to extend arithmetic to handle ranges, polynomials, extended precision and fault tolerance.
- Questionnaire Results - (1969) A priority list of desired additional APL features and support, with some pictures of the conference banquet.
- A Time Study in Numerical Methods Programming by Glen B. Alleman and John L. Richardson - (1974) Tests of the effectiveness of APL and FORTRAN in providing solutions to numerical analysis problems found in scientific investigations.
- What's Wrong With APL? by Philip S. Abrams - (1975) An examination of criticisms of "Iverson Notation", in the form of APL, with considerations of how the language might be improved.
|
OPCFW_CODE
|
The New Sadism
In /r/perl, I responded to The Pervert’s Guide to Computer Programming Language. There were many things I excised from my response before I posted it, so I capture those here. I might want these ideas later.
In brief, Watson’s talk connects the rantings of Slavoj Žižek and his Critical Theory reinterpretation to the classification of programming languages. Ignoring the fact that programming languages have as much animus or consciousness as a garden rake, the classification is facile and broad. That’s not odd for sophomoric Žižek fanfic—I suspect Žižek’s actually Tony Clifton.
I was comparing Perl’s and Python’s different design fundamentals, and strayed into comparing HTTP libraries. If you’ve never used anything else, Python’s
requests library seems really cool. If you have used other things, you’re annoyed at its dumb limitations:
Compare your favorite HTTP library among languages, then look at the design of Mojolicious. You probably wouldn’t guess that a web framework would have one-liners for anything other than toys, but Mojo does. I have some as shell aliases even. I spent a lot of time trying any HTTP library I could find as I was writing Mojo Web Clients, so I’ve felt this pain—trying logging requests in Python’s
The presentation classified languages superficially without much consideration for their context (although the New Jersey-MIT dichomoty was there):
The small-tools value is difficult to enjoy when your entire world is services exchanging JSON instead of unix files and pipelines—especially when you aren’t the one creating any of the services. These aren’t magical beans in distant lands. Someone did a lot of work for you. Some of those people even invented new languages to do that work.
Along with that, using current thoughts to evaluate long past decisions is a bit dishonest. There’s a tendency to emphasize the good of the current situation, but to emphasize the bad in past decisions. This is an odd state because most the current stuff will never make it into history while the derided past is exactly the stuff that survived:
Most of us know that “serverless deployments” aren’t really serverless. We mostly understand the fictive elements of the marketing and that some site reliability engineer has to actually go to an actual data center to actually fix the actual rack that tipped over in an actual gravity field. At some point, it’s not turtles all the way down and there is some iron.
Our relationship to technology is a consequence to how we decide to interact with it. I didn’t quite develop this, but biases and priors have more to do with the classifications than innate characteristics of the language:
In a Makefile, I can do just about anything I want with whatever small tools I can scavenge. Rake purposely drops a lot of
make-like functionality when they made the decision to pressure you to only use Ruby, even if it’s to shell out. I’ve spent a fair amount of time in Rake (with GitHub Pages, might as well), and for everything they’ve said about “make can’t do this”, I usually have “no, that’s really easy in make”. But then, I’ve been using
makeforever and started by reading the manual (well, the nutshell book Managing Projects with GNU Make) rather than learning as I go (but I also read cover-to-cover the Rake books too, so there’s that). That’s a difference in approach itself that dictates behavior and decisions. If you’re a read the manual type, you do things in a particular way. If you are the “dive right in” type, you have other tactics to model your world. This is part of the psychology in the talk.
|
OPCFW_CODE
|
Android Camera won't take picture depending on what is after it in code
I have the following code with which I'm trying to take a picture and save some of the photo's information into a database. The database portion has been thoroughly tested and works fine in all other circumstances. Unfortunately, if I uncomment the commented code below, my code times out. For some reason, if there is code following the this.camera.takePicture() method, takePicture() won't call the overridden onPictureTaken method at all (the first thing it's supposed to do is print out a line of text, but it doesn't even do that). If no code follows it, it works fine.
Before installing the latch, I would get an error because ph.getPhoto() was returning null (ph's .photo member variable wasn't yet set by onPictureTaken(), because it hadn't yet been called). After installing the latch, it will wait until timeout (or forever, if no timeout value is specified).
Can someone please tell me what I'm missing?
public void takePicture(View view) throws Exception {
CountDownLatch latch = new CountDownLatch(1);
System.out.println("Taking Photo!");
PhotoHandler photoHandler = new PhotoHandler(getApplicationContext(),latch);
this.camera.takePicture(null, null, photoHandler);
/* PhotoHandler has an overridden "onPictureTaken()" method which releases the latch as its final
* action; however, its first instruction is to print a confirmation that it has been accessed.
* Unfortunately, for some reason, onPictureTaken() is not called if the following code is
* uncommented; it deadlocks for the five seconds before timing out. However, with out the following,
* the camera.takePicture method invokes onPictureTaken() and it works just fine. */
// latch.await(5,TimeUnit.SECONDS);
// DatabaseHandler db = new DatabaseHandler(this);
// Photo p = photoHandler.getPhoto();
// db.addPhoto(p);
// List<Photo> photos = new ArrayList<Photo>();
// photos.add(p);
// this.addPhotosToMap(photos);
}
onPictureTaken() is (afaik) executed on the main / UI thread. If your takePicture method is also exectued there than it must happen simply because a thread cannot wait for itself.
Besides, you must not block the mainthread or your Activity will ANR.
If you move the commented code (minus that latch) to onPictureTaken() then everything should be fine. Those onSomethingHappened callbacks are made exactly for that task
The takePicture method is part of the main activity, and it's executed by a button. I have no idea how to check which thread it's part of. I would agree that it should have to happen; however, like I said, it doesn't happen unless the latter portion of the code is removed. It doesn't even execute the first instruction, which is a simple sysout.
Secondly, I know I shouldn't block the thread, but the getPhoto() method shouldn't execute before the method that sets the photo. I can move that around later or remove the latch altogether, but that doesn't fix my problem right now.
Thirdly, I originally had the database code in onPictureTaken(), but it didn't like that either, so I moved it around in an attempt to figure out what was wrong. I can move it back, but it doesn't solve the problem. Anyway, besides the db stuff, the rest of that currently-commented-out code is responsible for populating the mapview with the new photo's marker. Should that not stay part of the UI? And if not, how should I tell onPictureTaken to manipulate the UI?
onPictureTaken isn't executed before the picture is taken. You know that it is once the method is called (like you know that the button was pressed once you get onClick). Since callbacks are executed on the main thread you can simply set the image for the UI from there. onPictureTaken is UI part. And the way how you modify the UI is the same you would do in e.g. onCreate (which is exactly such a callback itself)
I rewrote the program to do the filepath generation in the Activity's takePicture method instead of onPictureTaken() to avoid the problem altogether (I never have to get any information out of onPictureTaken). I still do the filesystem work in onPictureTaken() and there are no problems, so it is unrelated to the speed of the filesystem (anyway, it shouldn't take a minute to write a tiny image file to an SD card, so file I/O isn't causing the problem).
Put all of your post-processing (i.e., the code you have commented out) into the appropriate callback, not after takePicture. And put slow actions such as file system access into a background thread (e.g., AsyncTask).
Most of the post processing is responsible for changing the UI. I originally had the db code as part of the onPictureTaken() method, but I moved it out to see if it was the cause of the problem. I can put it back, but it won't fix my problem. Anyway, I can execute all of the filesystem stuff instantly--that's not a problem atm (thanks for the advice though)--with the latter portion of the code commented out.
Why would having code after camera.takePicture() affect whether or not it calls back to onPictureTaken()?
Maybe the code is running, but has an error. It's possible for that code to be reached before the picture data is ready.
Okay, after 5 seconds, probably not.
|
STACK_EXCHANGE
|
[dv/top] Add usb clock calibration test
addresses #14137
fake the usb portions since usb chip level test is not ready
measurement the clocks pre/post calibration to ensure it is
as expected
tweak the ast logic slightly such that we can introduce a
large "drift" using run time options
Signed-off-by: Timothy Chen<EMAIL_ADDRESS>
@a-will let me know if you think i've picked a reasonable force point. I wanted to go further deeper, but then I ran into a bunch of packet completion and valid logic.
@a-will let me know if you think i've picked a reasonable force point. I wanted to go further deeper, but then I ran into a bunch of packet completion and valid logic.
I think what you have is reasonable, though this might be a little closer:
Ensure sense is high (pinmux config, set to tie-high) and usbdev is configured to connect (enable via CSR)
Suppress usb_fs_rx.rx_idle_det_o (to avoid suspending, force to 0)
Change usbdev_linkstate.link_state_q to LinkActiveNoSOF or LinkActive
Force usb_fs_nb_pe.sof_valid_o and change usb_fs_rx.frame_num_q when you want to pulse SOF
thanks @a-will ! i'll give that a try.
i'm going to tweak the test following @a-will suggestions.
@Jacob-Levy can you have a look at the ast related changes?
From the last point of view, the 1ms reference window is used to count the USB clock and fix the drift. It usually takes 3 to 6 valid pulses to fix the drift.
Will review the changes of the usb_osc.sv to see if it does the work. BTW, the drift can be positive and negative.
From the last point of view, the 1ms reference window is used to count the USB clock and fix the drift. It usually takes 3 to 6 valid pulses to fix the drift. Will review the changes of the usb_osc.sv to see if it does the work. BTW, the drift can be positive and negative.
yep understood. so i'm not sure if you guys want to actually model the fact that it takes a few pulses to correct. The pulse info isn't used right now, so after the first pulse the clock just becomes highly accurate.
The only reason to model it, in my mind, is so that when this test is run with proprietary AST there arent any timing discrepancies.
Don't limit the test to one pulse, I would like to run the test with the real AST too.
Don't limit the test to one pulse, I would like to run the test with the real AST too.
yes that's right. The test i think does 5 right now, but my point is more I have no idea how many it should be.
If you expose a parameter via ast.hjson, something like MIN_PULSES, then that can be translated into a C header, and I can use that directly in the test.
The number of pulse needed depend on the drift size direction and process. 3-6 pulses is typical convergence. The USB clock spec say 30 or 50 ms to fix the drift...
Let me review your code and see if I can come up with something that can work for the real AST too.
The number of pulse needed depend on the drift size direction and process. 3-6 pulses is typical convergence. The USB clock spec say 30 or 50 ms to fix the drift... Let me review your code and see if I can come up with something that can work for the real AST too.
sounds good, thanks Jacob.
@a-will please have a look. I changed the forcing points. I tried to avoid directly forcing the state.
In the future I can probably the se0 forcing slightly as well. But this seems to do the trick for now.
please have a look and approve if it looks good.
@tjaychen I reviewed the changes you made and I need to make some changes in order to emulate the functiion of the beacon.
The usb_ref_val_i only qualifies the usb_ref_pulse_i. I will update the usb_osc.sv model to reflect the functionality of beacon.
@tjaychen I reviewed the changes you made and I need to make some changes in order to emulate the functiion of the beacon. The usb_ref_val_i only qualifies the usb_ref_pulse_i. I will update the usb_osc.sv model to reflect the functionality of beacon.
thanks @Jacob-Levy . do you want me to hold off on merging this until you have a chance to introduce your change?
I am not clear how the test verify the beacon function, but I need more time to describe the model and make sure it will represent the real AST too.
I am not clear how the test verify the beacon function, but I need more time to describe the model and make sure it will represent the real AST too.
so what the test currently does is that it "fakes" the usb sof function, and sends periodic 1ms time-stamps to the ast. Since right not, the ast instantly corrects the clocks, it just waits for that to happen, and then does a clock frequency measurement on the usb clock. So the frequency goes from "way off" to "correct".
@tjaychen In order for the usb_ref_pulse_i & usb_ref_val_i affect to be checked, the usb_clk.sv & the usb_osc.sv need to be changed. At the moment, the usb_ref_pulse_i that goes into the usb_clk.sv is not used. I used only the usb_ref_val_i that
goes to the usb_osc.sv and force calibration when set to '1'. To make the calibration depend on the `usb_ref_pulse_i too, I need to add it to the usb_osc.sv and force the calibration only when at least 'N' pulses with valid arrived.
It is a simple change that will enable the test to run with the real AST too. 'N' Can be a parameter as you suggested.
sounds good @Jacob-Levy . do you want me to hold on the merge? Or do you prefer I merge it first, then you can run this test with your change?
@tjaychen Go ahead and merge.
I will code the change into a draft PR and let you adjust your test code into a new PR.
@tjaychen Go ahead and merge. I will code the change into a draft PR and let you adjust your test code into a new PR.
sounds good!
@tjaychen [AST] USB Clock Calibrate Test Supports #14618 Added the support needed for the test
@Jacob-Levy you'll probably need to add some waivers for your recent changes.
https://reports.opentitan.org/prod/opentitan/github/presubmit/3285/20220826-150336/HEAD/reports/hw/top_earlgrey/lint/ascentlint/latest/report.html
@tjaychen NTIL does not have AscentLint tool and @msfschaffner was updating the waivers after my review.
yes that's right, but we have ascentlint blocking so that the submitters are able to fix issues themselves.
You can have a look at the warnings issued, and the waiver file here to get a sense on how to add more.
|
GITHUB_ARCHIVE
|
I've just finished reading Steve Grand's second book, Growing Up With Lucy (how to build an android in twenty easy steps). He points out on the page just before the introduction that he lied about the number of steps, but never mind that. I'll even give away the ending. By the end of the phase of the project described in this book, his orangutan-inspired android "daughter" Lucy has learned to (sometimes) point at a banana.
This may not sound like much, but it's actually a good step beyond Steve's earlier artificial life project, the computer game Creatures. The development of that game and of Steve's ideas about the nature of life and intelligence were described in his earlier book, Creation, which I wrote about in 2006. It's pretty amazing stuff - all the more so because the creatures in Creatures actually learn to do everything they do in their fairly complex 2D world. They have brains and immune systems and so on. Richard Dawkins has said that Creatures could be the closest that anyone has come to actually creating artificial life.
But Steve was not content to stop there. As complex and realistic in some respects as the artificial life forms of Creatures could be, they still developed in a very limited 2D world. Steve makes the very logical point that we (humans and other mammals, mainly) do not exist as isolated "intelligences" but that intelligence and other attributes are consequences of having bodies that have to learn to survive in a real 3D world. With Lucy, he set out to create a "creature" that would actually have use for intelligence. In the Lucy book, he describes his ideas and his self-funded "non-disciplinary" research in more detail, in addition to describing the trials and tribulations of Lucy and her software-emulated muscles (driving real and very troublesome hardware) and brain structures (modeled on the overall architecture of the mammalian brain, with a visual and motor cortex and so on). It's a "synthesis" approach that makes a lot of sense, and his telling of it is quite clear and often quite funny.
At the end of the book, Steve reports that his self-funding had just about run out when he fortunately received a UK foundation grant to continue the work for another year, allowing him to start work on a "Mark II" version of Lucy. That was around 2000 I think, and to learn more I'm afraid you will have to explore his web site (which has frames, alas, the de-framed Lucy part is here). It seems that Lucy is again (still?) on hold, but maybe things will change, and maybe someday Lucy-type robots will be common and will be called grandroids in Steve's honor. Then again, maybe not.
In any case, this is a very clever, sometimes funny, and always thought-provoking book, and I highly recommend it. Now I have to pack for a lightning trip to Taiwan, leaving for the airport in about 11 hours.
|
OPCFW_CODE
|
# frozen_string_literal: true
module DFormed
class KeyValue < Field
attr_of Element, :key_field, :value_field, serialize: true, default: { type: :text }
def self.type
:key_value
end
def value=(val)
return unless val
val.each do |k, v|
key_field.value = k if key_field
value_field.value = v if value_field
end
end
# These methods are only available if the engine is Opal
if DFormed.in_opal?
def to_element
@element = super.append(key_field.to_element, value_field.to_element)
end
def retrieve_value
{ key_field.retrieve_value => value_field.retrieve_value }
end
end
protected
def inner_html
return nil if DFormed.in_opal?
[key_field.to_html, value_field.to_html].join
end
end
end
|
STACK_EDU
|
"use strict";
var expect=require('expect.js')
var json4all=require('json4all')
function AdaptWithArrayMethods(objectData, objectBase){
Object.defineProperty(objectData, '_object', { value: objectBase||objectData});
}
function anonymous(o){
AdaptWithArrayMethods(this, o);
}
var ObjectWithArrayMethods = anonymous;
/*
function ObjectWithArrayMethods(o){
AdaptWithArrayMethods(this, o);
}
*/
function id(x){ return x; };
function object2Array(o){
return new ObjectWithArrayMethods(o);
}
function ArrayAndKeys2Object(result, keys){
var adapted = {};
keys.forEach(function(arrayKey, arrayIndex){
adapted[arrayKey]=result[arrayIndex];
});
return adapted;
}
function Argument3Adapt(__,___,x){ return x; };
[
{name:'forEach'},
{name:'map' , resultAdapt: Argument3Adapt, stepAdapt:function(x, v, n, a){ a[n]=x; }},
{name:'filter' , resultAdapt: Argument3Adapt, stepAdapt:function(x, v, n, a){ if(x){a[n]=v;} }},
].forEach(function(method){
ObjectWithArrayMethods.prototype[method.name] = function (f, fThis){
var oThis=this._object;
var keys=Object.keys(oThis);
var acumulator=object2Array();
var result=keys[method.name](function(arrayKey, arrayIndex){
var arrayValue=oThis[arrayKey]
return (method.stepAdapt||id)(f.call(fThis, arrayValue, arrayKey, oThis), arrayValue, arrayKey, acumulator);
}, fThis);
return (method.resultAdapt||id)(result, keys, acumulator);
}
});
describe("array", function(){
var algo;
beforeEach(function(){
algo=['7', '8', '9']
})
it("filter with modifies", function(){
var res = algo.filter(function(valor, indice, contenedor){
if(indice==1){
contenedor[indice]='z';
}
return valor!='9';
});
expect(res).to.eql([
'7',
'8',
])
expect(algo).to.eql(['7', 'z', '9'])
});
});
describe("object2Array", function(){
var algo;
beforeEach(function(){
algo={a:'7', b:'8', c:'9'};
})
it("forEach", function(){
var res=[];
object2Array(algo).forEach(function(valor, indice, contenedor){
res.push([valor, indice, contenedor]);
if(indice=='b'){
contenedor[indice]='x';
}
});
expect(res).to.eql([
['7', 'a', algo],
['8', 'b', algo],
['9', 'c', algo],
]);
expect(algo).to.eql({a:'7', b:'x', c:'9'})
});
it("map", function(){
var res = object2Array(algo).map(function(valor, indice, contenedor){
if(indice=='b'){
contenedor[indice]='y';
}
return [valor, indice, contenedor];
});
expect(res).to.eql({
a:['7', 'a', algo],
b:['8', 'b', algo],
c:['9', 'c', algo],
})
expect(algo).to.eql({a:'7', b:'y', c:'9'})
});
it("filter", function(){
var res = object2Array(algo).filter(function(valor, indice, contenedor){
if(indice=='b'){
contenedor[indice]='z';
}
return indice!='c';
});
expect(res).to.eql({
a:'7',
b:'8',
})
expect(algo).to.eql({a:'7', b:'z', c:'9'})
});
it("chaining map filter map", function(){
var res = object2Array(algo)
.map(function(valor, indice, contenedor){
if(indice=='c'){
contenedor[indice]='w';
}
return valor+'!';
}).filter(function(valor, indice, contenedor){
return valor!='8!';
}).map(function(valor, indice, contenedor){
return valor+'?';
});
expect(res).to.eql({
a:'7!?',
c:'9!?',
})
expect(JSON.stringify(res)).to.eql('{"a":"7!?","c":"9!?"}');
expect(json4all.stringify(res)).to.eql('{"a":"7!?","c":"9!?"}');
expect(algo).to.eql({a:'7', b:'8', c:'w'})
});
});
|
STACK_EDU
|
Thanks so much, TyeDye! I didn’t realize when I made this tool how helpful it would be to so many people. I love seeing about all the games folx have made using it.
Recent community posts
was hoping to see if this would run on my pocketChip, but it’s not in the bbs/splore. Is there a url I could dl the cartridge from?
(Btw, you mobile UI has issues, the discord link is triggering underneath the control overlay)
You could make a sprite that saves a random number to a variable. Or more likely, you could include that same dialog that generates the number in the middle of whatever dialog where it's required.
As far as 16x16 sprites would go, it's not possible for the player avatar to be more than a single 8x8 sprite without hacks. (There is a hack for that.) Some people have created vanilla Bitsy games that create an illusion of a larger avatar using exits, but that has limits. You can of course have a game with an 8x8 avatar but with larger, multisprite characters that they interact with.
(oh hey, love your art!)
I think the best practice for accomplishing this without hacks is two steps:
- Hidden item before exit that performs the logic to see which ending text the player should be shown. This item stores the ending text to a variable.
- Ending displays appropriate ending text using 'say'
I wrote a tutorial which covers this and variables in general: https://ayolland.itch.io/trevor/devlog/29520/bitsy-variables-a-tutorial
Hey, just in case anyone is poking around this thread in the future for answers about variables, wrote a tutorial covering all this:
Hey, sorry to bump an old thread, just posting here to help anyone who goes looking for help in the future.
The correct syntax would be:
I believe when you are adding curly brackets around the variable name, you're just unnecessarily adding a nested code block for Bitsy to process. Probably doesn't hurt anything, but will save you time writing less characters.
I wrote up a tutorial about Bitsy Variables that should cover most basics like this:
Yeah, manually placing each dirt item did get a little tedious. I did use a few hacks, although the bulk of what's being done is just Bitsy. I used the directional avatar, end from dialog, and dynamic background hacks.
I can't hammer down exactly when this bug started in my project, but it looking back through different versions, it looks like some time after I deleted the initial room the editor starts you with, I no longer am able to add exits properly.
I can add one more exit, but any further exits I try to add the editor instead acts as if I have selected my most recent exit, regardless of what tile/room I clicked.
Also the 'place new exit' / 'click space in room to add exit' seems to stop toggling properly.
There was an exit in that initial room at one point. Is data from that exit not being cleanly deleted?
|
OPCFW_CODE
|
Whole Foods Market is seeking a Principal Retail Infrastructure Architect to join our Technical Architecture team responsible for designing and distributing new technology solutions across the retail footprint. This Senior-level Retail Engineer role is responsible for the development and management of our retail (within the walls of the store) infrastructure. Day-to-day activities include working with the business and internal/external development groups to engineer solutions within the customer facing technologies group. Specifically, the Principal Retail Architect provides engineering and solutions architecture services for Microsoft Server and Windows-based systems across the retail enterprise. The ideal candidate must live and breathe collaboration, cooperation and communication as this role will work closely with other technology teams, vendors and business partners to build win-win relationships.
This position is located in Austin, Texas. The internal job title is Principal DevOps Engineer.
* Participate in the ongoing development of the infrastructure roadmap * Participate in the development of short and long-range plans * Work with application teams and vendors to develop and lead the implementation of system solutions for customer facing technologies; including Point of Sale (POS), scales, mobile devices, etc. * Provide technical leadership and supervision of other technical personnel * Serve as a trusted advisor to internal leaders by seeing the big picture and translating business strategies into actionable technology roadmaps and project plans * Assist in developing IT polices to support the implementation of strategies set by upper management * Establish and manage senior-level technical relationships with other internal tech teams, as well as our third-party vendors * Design, develop and lead the implementation of automated processes, tools and applications to support large-scale deployments and ongoing support/management of the system * Participate as a senior advisor in the diagnosis and solution of environment related issues; including network, hardware, permissions, certificates, etc. * Lead systems design meetings with the customer and vendor * Apply new solutions through research and collaboration with team to determine course of action for new application initiatives * Actively participate in Point of Sale (POS) software release and deployment planning to ensure functionality is supported on existing hardware * Adhere to established security, testing policies and procedures
* 10+ years of experience in Retail Infrastructure and Point of Sale (POS) systems * 3 years of experience as a Solution Architect, Domain Architect or Enterprise Architect * Demonstrated proficiency in: * Infrastructure and applications architecture modeling and design * Design documentation * System qualities and tradeoffs - solution evaluation experience * Software security methods and models * Application frameworks
* Working Security/PCI/SOX compliance knowledge * Software development knowledge (programming models, runtime execution environments, software design patterns, software engineering, software documentation); Point of Sale (POS) preferred * Understanding of business functionality, interfaces, data flow and functional architecture of systems supported * Aptitude for creative thinking, decision-making, learning, problem-solving, systems thinking, personal ethics, organizational skills and trustworthiness * Experience making hosted vs. on premises decisions and leading migration to the cloud * Working knowledge of application containers and containerization pros/cons * Strong analytical, diagnostic and problem-solving skills with ability to work independently to prioritize and handle multiple tasks * Specific knowledge of Windows 7 and 10 clients, Server 2012 and 2016 and Hyper-V * Working knowledge in administering Windows Active Directory (AD) and Group Policy Object (GPO) * Proficiency in system hardware architecture and implementation * Knowledge of infrastructure and server theories, principles and concepts; application infrastructure and standards; networking fundamentals; Windows; Physical Server architecture; Virtualization technologies (e.g. Hyper-V, VMware) and LAN/WAN/Firewall/VPN network technologies * Mobile device knowledge and ability to lead technical solutions involving mobile device selection, deployment, Mobile Device Management and operational support for mobile device solutions * Ability to define data reporting criteria and how to use these reports to identify trends and as a means of troubleshooting recurring problems * Understanding of both business and systems processes and an ability to understand and explain issues from both a technical and a business functional point of view * Strong interdisciplinary problem-solving skills, demonstrated by frequent and successful application of technical standards, theories, concepts, and techniques * Ability to work as a member of a highly collaborative team * Demonstration of strong oral and written communications, teaching, facilitation and negotiation, leadership and influencing skills * Ability to work in a fast paced, dynamic environment * 4-year degree in Computer Science preferred or equivalent experience
At Whole Foods Market, we provide a fair and equal employment opportunity for all Team Members and candidates regardless of race, color, religion, national origin, gender, pregnancy, sexual orientation, gender identity/expression, age, marital status, disability, or any other legally protected characteristic. Whole Foods Market hires and promotes individuals solely based on qualifications for the position to be filled and business needs.
About Whole Foods Market
Whole Foods Market is a company operating a chain of natural and organic foods supermarkets.
|
OPCFW_CODE
|
HER2-positive breast cancer is a heterogeneous disease, presenting tumor and microenvironment features which can impact prognosis and treatment response. Here, we aimed at better understanding the heterogeneity of this disease by performing spatial transcriptomics (ST) on HER2-positive breast cancer samples.
Spatial transcriptomics (Visium) was performed on 33 frozen HER2-positive breast cancer surgical samples, including 6 residual disease samples. H&E images of the ST slides were annotated for morphological structures at the single-cell/structure level (QuPath software). Clusters identified on integrated data (harmony R package) were characterized calculating gene expression signatures, including HER2DX gene modules, at the spot level, and by morphological annotation. Gene signatures were computed on pseudo-bulk RNA data as well.
A total of 25 integrated clusters were identified (range 15-21 in each sample). As each spot/cluster represents a mixture of different cell types, using gene expression and morphological data we defined a total of 9 tumor-enriched clusters (of which 5 sample-specific), as well as 12 clusters mainly enriched for stroma, 3 for adipose tissue, and 1 for tumor-infiltrating lymphocytes. All samples presented more than 1 tumor cluster, in various proportions. Interestingly, when comparing tumor clusters, levels of HER2DX signatures depicting HER2 amplicon, luminal phenotype, proliferation, B cell infiltration, as well as signatures related to stroma activation, signaling pathways and metabolism differed, demonstrating heterogeneity in tumor-enriched areas. Of note, within the same sample, tumor clusters with high/low levels of the HER2DX modules and other signatures could co-exist, and samples presenting signature scores above/below the cohort median at the pseudo-bulk level (also influenced by the stroma composition) showed the co-presence of tumor clusters with high/low signature levels.
Our findings highlight the heterogeneity of HER2-positive breast cancer. Spatial transcriptomics may help in refining gene expression signatures computed on bulk RNA, and these results open to further analyses aimed at better understanding the tumor microenvironment in this disease.
Fonds de la Recherche Scientifique - FNRS (F.R.S.-FNRS); Fondation Jules Bordet; Breast Cancer Research Foundation (BCRF); Fondation contre le Cancer.
C. Sotiriou: Financial Interests, Institutional, Advisory Board: Astellas, Vertex, Seattle Genetics, Amgen, Inc., Merck & Co.; Financial Interests, Personal, Advisory Board: Cepheid, Puma; Financial Interests, Personal, Invited Speaker: Eisai, Prime oncology, Teva; Financial Interests, Institutional, Other, Travel: Roche; Financial Interests, Institutional, Other, Internal speaker: Genentech; Financial Interests, Personal, Other, Regional speaker: Pfizer; Financial Interests, Institutional, Invited Speaker: Exact Sciences. All other authors have declared no conflicts of interest.
|
OPCFW_CODE
|
Instructions to authors
Before proceeding to submit your abstract, please read the following Submission guidelines to ensure you have all of the information that will be needed to complete the submission process. Please pay particular attention to these instructions as abstracts that do not conform may not be accepted by the editors of the proceedings.
Manuscript style – general
Manuscripts are to be prepared in MS Word. Please do not submit a PDF.
Margins: top, bottom, left and right: 2.5cm each
Header and footer: 1.0 cm each. Do not enter anything in the header and footer; these will be used by the editor for running page headers and line numbers
Font size: 11 point except in cases specified below
Papers should include title, introduction, methods, results, discussion/conclusions and references (should any be cited).
Please restrict your title to a maximum of 20 words. The paper title is to be centred, have a font size of 14 point and be in bold style.
Type all headings in lower case letters (sentence case) with only the first letter of the first word plus any proper name capitalised.
Headings (Introduction, Methods, etc.) are on a separate line, start at the left margin, have 12 point font size, bold style and no full stop at the end.
Separate all headings, except the paper title, from the previous section by one blank line.
Author organisation and location
Author names are to be typed centred under the paper title, separated by one blank line from the title, in 12 point font size and italic style. Capitalised alphabetic superscripts (A, B, etc) after each name are to be used to indicate locations and corresponding author.
Author organisation and location are to be typed centred under the author names, separated by one blank line, in 11 point font size and normal style. Each location and the corresponding author are to be proceeded by the appropriate superscript relating them to the author.
Include an email address at bottom left of the paper of the author to be contacted for further information. Remove the hyperlink from the email address, if present.
Maximum length of the abstract submission is one A4 page.
- Justification: full (left and right)
- Line spacing: single (no additional spacing before and after)
- Footnotes: place these in the text area of the page, not in the page footer
- Numbers: In the text, type all numbers as numerals except at the start of a sentence. In headings, spell out the numbers from 1 to 9.
Each figure should be embedded in the appropriate place within the manuscript. A caption containing the figure number and legend should be placed below the figure and separated from it by a blank line. In both the text and captions refer to the figures using the abbreviation “Fig.” and Arabic numerals (e.g. Fig. 1). The captions, including figure label, are to be left and right justified in 11 point font size. The figure labels are in bold style and followed by a full stop (e.g. Fig. 1.). Continue the legend on the same line with the keys to symbols used appearing in the legend and not on the face of the figure. Do not rely on colours to distinguish lines or areas in the figures as printed editions of the proceedings may be in greyscale.
Tables should be inserted in the appropriate place in the text. Each table is to have a title comprising the table number (e.g. Table 1.) and a description ending with a full stop, all in bold, 11 point type. The table is to be separated from its title by one blank line. Content in the body of the table is to be in 10 point type. Only the first letter of the first word and proper names of row or column headings, or in table entries, should be in capitals. The dimensions of units should be shown in the headings in brackets. However, if this is not possible, they may be inserted into the body of the table. Where the measure of variation is presented in a separate column or row, ± should not be repeated before each value. There should be no vertical rulings between columns. Superscripts (alphabet capitals) should be clearly indicated, with notations being immediately below the table. When tables are referred to in the text they should be typed with a capital e.g. Table 1.
Nomenclature and units
Present all results in metric (S.I.) units
Time of day must be indicated by the 24 hour clock.
Use kg/ha, g/m2, kg/ha.day etc.
The format for dates is 30 November 2006.
Time units should be expressed in full, e.g. day, second (not d, sec).
Within the text references should be restricted to the authors’ names followed by the year of publication. They should conform to the following examples. (Smith 1984), (Smith et al. 1986), (Smith 1984a, 1984b; Smith and Jones 1990; Robert et al. 1992).
A complete list of references cited in the text must be arranged alphabetically at the end of the text and preceded by the first order heading References.
|
OPCFW_CODE
|